CN108268813B - Lane departure early warning method and device and electronic equipment - Google Patents


Info

Publication number
CN108268813B
CN108268813B (application number CN201611253350.4A)
Authority
CN
China
Prior art keywords
image
lane departure
lane
sample
labeled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611253350.4A
Other languages
Chinese (zh)
Other versions
CN108268813A (en)
Inventor
陶海
刘倩
林宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Vion Intelligent Technology Co ltd
Original Assignee
Beijing Vion Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Vion Intelligent Technology Co ltd filed Critical Beijing Vion Intelligent Technology Co ltd
Priority to CN201611253350.4A
Publication of CN108268813A
Application granted
Publication of CN108268813B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding


Abstract

The invention relates to the technical field of pattern recognition, and discloses a lane departure early warning method, a lane departure early warning device and electronic equipment. The lane departure early warning method comprises the following steps: acquiring an image frame to be detected and a lane departure model detector; the lane departure model detector is obtained by training a marked lane departure image sample set through a convolutional neural network; predicting the lane position of the image frame to be detected to obtain a lane area image to be processed of the image frame to be detected; and inputting the to-be-processed lane area image to the lane departure model detector to obtain a lane departure result. By adopting the technical scheme of the invention, the anti-noise capability and the robustness are improved, so that the lane departure early warning precision is improved.

Description

Lane departure early warning method and device and electronic equipment
Technical Field
The invention relates to the technical field of pattern recognition, in particular to a lane departure early warning method and device and electronic equipment.
Background
With social and economic development, market demand for automobile driver-assistance systems keeps growing, as does the range of functions such systems provide. Solutions for warning of lane line deviation during driving likewise vary. In a related prior-art lane departure warning method, a lane line is first detected, whether the lane line deviates is then judged from the detection result, and a warning is finally issued according to the judgment. In a conventional lane line detection method, lane line features are extracted first, and lane line fitting is then performed with a geometric model such as the Hough transform. Common techniques in this process include boundary detection, binarization, and the Hough transform.
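To make the conventional pipeline concrete, the following is a minimal, illustrative sketch of the Hough-voting step used for lane line fitting; the function name and discretization parameters are assumptions for illustration, not taken from the patent. Each edge point votes for every (theta, rho) line that passes through it, and the peak of the accumulator is taken as the fitted line. This voting stage is exactly where image noise and roadside clutter inject spurious votes, which is the weakness the patent's learned detector is meant to avoid.

```python
import numpy as np

def hough_peak(points, img_diag, n_theta=180, n_rho=200):
    """Return the (theta, rho) of the strongest line through the edge points."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        # rho of this point for every candidate theta: rho = x*cos + y*sin
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(np.digitize(r, rhos) - 1, 0, n_rho - 1)
        acc[np.arange(n_theta), idx] += 1  # one vote per (theta, rho) cell
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t], rhos[r]
```

A vertical edge at x = 10 yields a clean peak near theta = 0, rho = 10; adding random noise points spreads votes and can shift the peak, illustrating the sensitivity described above.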
However, in the course of implementing lane departure warning, the inventor found that the prior art has at least the following technical problem:
the techniques commonly used in the lane line detection process, such as boundary detection, binarization and the Hough transform, are easily disturbed by image noise, lighting, the road surface environment, roadside scenery and the like, so the lane departure early warning precision is low.
Disclosure of Invention
The invention aims to provide a lane departure early warning method, a lane departure early warning device and electronic equipment, so that lane departure early warning precision is high.
In order to solve the above technical problem, an embodiment of the present invention provides a lane departure warning method, including the following steps:
acquiring an image frame to be detected and a lane departure model detector; the lane departure model detector is obtained by training a marked lane departure image sample set through a convolutional neural network;
predicting the lane position of the image frame to be detected to obtain a lane area image to be processed of the image frame to be detected;
and inputting the to-be-processed lane area image to the lane departure model detector to obtain a lane departure result.
The embodiment of the present invention also provides a lane departure warning device, including:
the information acquisition unit is used for acquiring an image frame to be detected and a lane departure model detector; the lane departure model detector is obtained by training a marked lane departure image sample set through a convolutional neural network;
the area acquisition unit is used for predicting the lane position of the image frame to be detected and acquiring a lane area image to be processed of the image frame to be detected;
and the lane early warning unit is used for inputting the to-be-processed lane area image to the lane departure model detector and acquiring a lane departure result.
An embodiment of the present invention also provides an electronic device, including: the device comprises a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; a power supply circuit for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program codes; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for executing the lane departure warning method described above.
Compared with the prior art, the embodiment of the invention obtains the image frame to be detected and the lane departure model detector; the lane departure model detector is obtained by training a marked lane departure image sample set through a convolutional neural network; predicting the lane position of the image frame to be detected to obtain a lane area image to be processed of the image frame to be detected; and inputting the to-be-processed lane area image to the lane departure model detector to obtain a lane departure result. By adopting the technical scheme of the invention, the lane departure early warning method can be simplified, and the anti-noise capability and robustness are improved, so that the lane departure early warning precision is improved.
Drawings
Fig. 1 is a flowchart of a lane departure warning method according to an embodiment of the present invention;
fig. 2 is a flowchart of another lane departure warning method provided in the embodiment of the present invention;
fig. 3 is a schematic structural diagram of a lane departure warning device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a lane line in an image with a labeled training sample in the lane departure warning method according to the embodiment of the present invention;
fig. 6 is a schematic diagram of foreground and background labels of a lane area image in a training sample image with labels in a lane departure warning method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of coordinates of a lane feature point in a training sample image with a label in the lane departure warning method provided by the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to give the reader a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments.
A first embodiment of the present invention relates to a lane departure warning method. The specific flow is shown in fig. 1. The method comprises the following steps:
101: acquiring an image frame to be detected and a lane departure model detector; the lane departure model detector is obtained by training a marked lane departure image sample set through a convolutional neural network;
102: predicting the lane position of the image frame to be detected to obtain a lane area image to be processed of the image frame to be detected;
103: and inputting the to-be-processed lane area image to the lane departure model detector to obtain a lane departure result.
Compared with the prior art, the embodiment of the invention adopts the convolutional neural network to train the lane departure image sample set with the labels to obtain the lane departure model detector, analyzes the lane area image to be processed of the image frame to be detected by the lane departure model detector, and obtains the lane departure result. By adopting the technical scheme of the invention, the lane departure early warning method flow is simplified, and the anti-noise capability and robustness are improved, so that the lane departure early warning precision is improved.
Based on the above embodiments, a second embodiment of the present invention relates to a lane departure warning method; as shown in fig. 2, the specific implementation steps are as follows:
201: acquiring an original training sample image set with labels; wherein the labeled training sample images in the labeled original training sample image set are labeled; the label includes: left lane departure, no lane departure, right lane departure.
It should be noted that the labeled training sample image in the labeled original training sample image set is labeled; the tag may further include: the coordinates of the lane feature points in the current sample image; the feature points include: one or any combination of boundary points, center points, corner points, n equal division points of the current lane.
It should be further noted that labeled training sample images in the labeled original training sample image set are labeled; the label includes: foreground and background in the current sample image; the foreground is a current lane area, and the background is an area outside the current lane area.
It should be noted that the original sample image set in step 201 may be obtained from forward-view video saved by an automobile tachograph (dash camera) during driving; the tachograph is not limited to being mounted at the windshield, and may be mounted at the windshield, on the roof, at one side of a window, or at any other viewing angle that covers the current lane. The labeled original training sample image set is obtained by labeling the original sample image set; the specific labeling methods for a single-frame image are as follows:
Labeling mode 1: the original sample image is labeled with one of three class labels, namely "left lane departure", "no lane departure" and "right lane departure", as shown in fig. 5.
Labeling mode 2: marking coordinates of relevant feature points of the current lane in the original sample image as shown in fig. 7, wherein the feature points include but are not limited to one or any combination of boundary points, center points, corner points, n equal division points of the current lane; specifically, the horizontal coordinate and the vertical coordinate are marked on the feature points.
Labeling mode 3: performing template marking on the current lane in the original sample image as shown in fig. 6, namely marking the current lane area as a foreground and other areas in the image as a background; on the basis of the template marking, coordinate values can be further marked according to the related characteristic points related in the marking mode 2.
The three labeling modes can be simultaneously used for training the lane departure model detector, and one or two of the three labeling modes can be selected for training the lane departure model detector.
The three labeling modes can generate two kinds of training labels. First, labeling mode 1 yields the three class labels "left lane departure", "no lane departure" and "right lane departure"; second, labeling mode 2 or labeling mode 3 yields the coordinate values of the relevant feature points of the current lane as the labels used in training. The two kinds of labels can be used for training the lane departure model detector at the same time, or one of them can be selected for the training.
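As an illustration of the two kinds of training labels described above, the following sketch encodes labeling mode 1 as a one-hot class vector and labeling mode 2 as a flattened vector of feature-point coordinates. The names and the [0, 1] normalization are assumptions made for this sketch, not details given in the patent.

```python
import numpy as np

CLASSES = ["left_departure", "no_departure", "right_departure"]  # labeling mode 1

def encode_class_label(name):
    # Training label type 1: one-hot vector over the three departure classes.
    vec = np.zeros(len(CLASSES), dtype=np.float32)
    vec[CLASSES.index(name)] = 1.0
    return vec

def encode_point_label(points, img_w, img_h):
    # Training label type 2: flattened feature-point coordinates, normalized
    # to [0, 1] so the regression target is independent of image resolution.
    pts = np.asarray(points, dtype=np.float32).copy()
    pts[:, 0] /= img_w
    pts[:, 1] /= img_h
    return pts.reshape(-1)
```

For example, `encode_class_label("no_departure")` gives `[0, 1, 0]`, and two feature points `(320, 240)` and `(640, 480)` in a 640x480 frame encode to `[0.5, 0.5, 1.0, 1.0]`.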
202: performing image processing according to the original training sample image set with the labels to obtain a diversified image sample set; the diverse set of image samples comprises: the image processing method comprises the steps of obtaining a scale diversified image sample set, a direction diversified image sample set, a brightness diversified image sample set, a color diversified image sample set, a stretching diversified image sample set, a translation diversified image sample set and a turning diversified image sample set;
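A minimal sketch of how such a diversified sample set might be generated from one labeled image follows; the concrete value ranges and the label-swap rule for horizontal flipping are assumptions for illustration, not specified by the patent. Note that a horizontal flip of a "left lane departure" sample depicts a right departure, so the class label must be swapped for the augmented pair to stay consistent.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, label):
    """Return brightness / translation / flip variants of one (image, label) pair.

    `label` is the 3-class index (0 left, 1 none, 2 right departure).
    """
    out = []
    # Brightness diversification: random gain, clipped to valid pixel range.
    out.append((np.clip(img * rng.uniform(0.6, 1.4), 0, 255), label))
    # Translation diversification: small horizontal shift, label unchanged.
    shift = int(rng.integers(-5, 6))
    out.append((np.roll(img, shift, axis=1), label))
    # Flip diversification: mirror the image and swap left/right classes.
    flipped_label = {0: 2, 1: 1, 2: 0}[label]
    out.append((img[:, ::-1], flipped_label))
    return out
```

Scale, stretching, direction and color variants listed in step 202 would be added in the same per-sample fashion.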
203: and carrying out convolutional neural network training according to the obtained diversified image sample set and the original training sample image set with the labels to obtain a lane departure model detector.
204: acquiring an image frame to be detected and a lane departure model detector; the lane departure model detector is obtained by training a marked lane departure image sample set through a convolutional neural network. In this step, the image frame to be detected can be obtained from forward-view video stored by an automobile tachograph during driving; the tachograph is not limited to being mounted at the windshield, and may be mounted at the windshield, on the roof, at one side of a window, or at any other viewing angle that covers the current lane.
205: predicting the lane position of the image frame to be detected to obtain a lane area image to be processed of the image frame to be detected;
206: inputting the to-be-processed lane area image to the lane departure model detector to obtain a lane departure result. For example: if labeling mode 1 was selected in step 201, i.e. the original sample images were labeled with the three class labels "left lane departure", "no lane departure" and "right lane departure", the lane departure result obtained in step 206 is output according to these three classes, that is, the lane departure result is "left lane departure", "no lane departure" or "right lane departure"; if labeling mode 2 was selected in step 201, the lane departure result obtained in step 206 will be the coordinates of the feature points.
It should be noted that the convolutional neural network adopted by the lane departure model detector has a conventional structure at the front, namely a combination of a plurality of convolution layers, pooling layers and nonlinear response units; in the middle, a combination of fully-connected layers and nonlinear response units; and finally a loss layer used for lane departure early warning. The loss layer takes one of the following two modes:
Loss layer mode one: calculating the loss by taking the three class labels "left lane departure", "no lane departure" and "right lane departure" as references;
Loss layer mode two: calculating the loss by taking the positions of the relevant feature points of the current lane as references.
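The two loss layer modes can be sketched as follows, assuming a softmax cross-entropy for the three-class mode and a mean squared error on coordinates for the feature-point mode. Both are common choices for these output types; the patent does not name the exact loss functions, so this is an illustrative assumption.

```python
import numpy as np

def classification_loss(logits, class_idx):
    # Loss mode one: softmax cross-entropy over the three departure classes.
    z = logits - logits.max()                 # stabilize the softmax numerically
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[class_idx])

def regression_loss(pred_coords, true_coords):
    # Loss mode two: mean squared error on lane feature-point coordinates.
    d = np.asarray(pred_coords, dtype=float) - np.asarray(true_coords, dtype=float)
    return float((d * d).mean())
```

With uniform logits the classification loss is log(3), and it approaches zero as the correct class dominates; the regression loss is zero only when every predicted coordinate matches its label.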
In this example, fig. 5 is an input sample of the convolutional neural network of the lane departure model detector; correspondingly, the coordinate values of the feature points in fig. 7 are the labels fed to the convolutional neural network.
In this example, after the image frame to be detected is fed to the trained lane departure model detector, the output of the detector is the coordinate values of the feature points of the current lane in the image frame to be detected. From these coordinate values, whether the vehicle deviates from the lane can be further judged. If labeling mode 1 was used for training instead, the detector directly outputs the lane departure result, i.e. "left lane departure", "no lane departure" or "right lane departure".
The invention can realize end-to-end early warning of current lane departure: after the image frame to be detected is input, the lane departure judgment result can be directly output (corresponding to loss layer mode one). It can also locate the feature point positions of the current lane end to end: after the image to be judged is input, the positions of feature points such as the lane line edges of the current lane can be directly output (corresponding to loss layer mode two). This end-to-end processing mode is convenient and simple in actual product use.
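For loss layer mode two, the patent leaves the final departure judgment from the output coordinates as a further step; one plausible rule, with hypothetical names and a hypothetical tolerance threshold, compares the image center (the camera's forward axis) with the center of the predicted lane boundaries at the bottom row of the image:

```python
def departure_from_boundaries(left_x, right_x, img_w, margin=0.15):
    """Map predicted lane-boundary x-coordinates to a departure decision.

    Assumes the camera is mounted roughly on the vehicle centerline, so the
    image center approximates the vehicle position; `margin` is a hypothetical
    tolerance expressed as a fraction of the lane width.
    """
    lane_center = (left_x + right_x) / 2.0
    lane_width = right_x - left_x
    offset = (img_w / 2.0 - lane_center) / lane_width  # signed, lane-relative
    if offset < -margin:
        return "left_departure"    # vehicle drifted toward the left boundary
    if offset > margin:
        return "right_departure"   # vehicle drifted toward the right boundary
    return "no_departure"
```

For a 640-pixel-wide frame, boundaries at (200, 440) center the vehicle in the lane, while boundaries at (300, 540) place the vehicle well left of the lane center and trigger a left-departure warning.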
It should be noted that the above image frame to be detected is obtained in real time, whereas the acquisition, labeling and diversification of the training samples and the training of the model can be performed offline: the training samples can be processed in advance, and the model, namely the lane departure model detector, can be trained in advance.
A third embodiment of the present invention relates to a lane departure warning apparatus, as shown in fig. 3, including:
an information obtaining unit 301, configured to obtain an image frame to be detected and a lane departure model detector; the lane departure model detector is obtained by training a marked lane departure image sample set through a convolutional neural network;
the area obtaining unit 302 is configured to predict a lane position of the image frame to be detected, and obtain a lane area image to be processed of the image frame to be detected;
and a lane early warning unit 303, configured to input the to-be-processed lane area image to the lane departure model detector, and obtain a lane departure result.
It should be noted that the apparatus further includes:
the system comprises a sample acquisition unit, a data processing unit and a data processing unit, wherein the sample acquisition unit is used for acquiring an original training sample image set with labels; wherein the labeled training sample images in the labeled original training sample image set are labeled; the label includes: left lane departure, right lane departure; and/or labeled training sample images in the labeled original training sample image set are labeled; the label includes: the coordinates of the lane feature points in the current sample image; the feature points include: one or any combination of boundary points, center points, corner points, n equal division points of the current lane.
It should be noted that the labeled training sample image in the labeled original training sample image set is labeled; the label includes: foreground and background in the current sample image; the foreground is a current lane area, and the background is an area outside the current lane area.
The sample processing unit is used for carrying out image processing according to the original training sample image set with the label to obtain a diversified image sample set; the diverse set of image samples comprises: the image processing method comprises the steps of obtaining a scale diversified image sample set, a direction diversified image sample set, a brightness diversified image sample set, a color diversified image sample set, a stretching diversified image sample set, a translation diversified image sample set and a turning diversified image sample set;
and the model acquisition unit is used for carrying out convolutional neural network training according to the acquired diversified image sample set and the original training sample image set with the labels to acquire the lane departure model detector.
Fig. 4 is a schematic structural diagram of an embodiment of an electronic device of the present invention, which can implement the processes of the embodiments shown in fig. 1-2 of the present invention, and as shown in fig. 4, the electronic device may include: a housing 41, a processor 42, a memory 43, a circuit board 44 and a power circuit 45, wherein the circuit board 44 is disposed inside a space enclosed by the housing 41, and the processor 42 and the memory 43 are disposed on the circuit board 44; the power circuit 45 is configured to supply power to each circuit or device of the electronic device; the memory 43 is used for storing executable program codes; the processor 42 reads the executable program code stored in the memory 43 to run a program corresponding to the executable program code, so as to execute the lane departure warning method according to any one of the embodiments.
For a specific execution process of the above steps by the processor 42 and further steps executed by the processor 42 by running an executable program code, reference may be made to the description of the embodiment shown in fig. 1-2 of the present invention, which is not described herein again.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
For convenience of description, the above devices are described separately in terms of functional division into various units/modules. Of course, the functionality of the units/modules may be implemented in one or more software and/or hardware implementations of the invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A lane departure warning method, comprising:
acquiring an image frame to be detected and a lane departure model detector; the lane departure model detector is obtained by training a marked lane departure image sample set through a convolutional neural network;
predicting the lane position of the image frame to be detected to obtain a lane area image to be processed of the image frame to be detected;
inputting the lane area image to be processed into the lane departure model detector to obtain a lane departure result;
acquiring an original training sample image set with labels;
performing image processing according to the original training sample image set with the labels to obtain a diversified image sample set; the diverse set of image samples comprises: the image processing method comprises the steps of obtaining a scale diversified image sample set, a direction diversified image sample set, a brightness diversified image sample set, a color diversified image sample set, a stretching diversified image sample set, a translation diversified image sample set and a turning diversified image sample set;
and carrying out convolutional neural network training according to the obtained diversified image sample set and the original training sample image set with the labels to obtain a lane departure model detector.
2. The method of claim 1, wherein labeled training sample images in the set of labeled original training sample images are labeled; the label includes: left lane departure, no lane departure, right lane departure.
3. The method of claim 1 or 2, wherein labeled training sample images in the set of labeled original training sample images are labeled; the label includes: the coordinates of the lane feature points in the current sample image; the feature points include: one or any combination of boundary points, center points, corner points, n equal division points of the current lane.
4. The method of claim 3, wherein labeled training sample images in the set of labeled original training sample images are labeled; the label includes: foreground and background in the current sample image; the foreground is a current lane area, and the background is an area outside the current lane area.
5. A lane departure warning apparatus, comprising:
the information acquisition unit is used for acquiring an image frame to be detected and a lane departure model detector; the lane departure model detector is obtained by training a marked lane departure image sample set through a convolutional neural network;
the area acquisition unit is used for predicting the lane position of the image frame to be detected and acquiring a lane area image to be processed of the image frame to be detected;
the lane early warning unit is used for inputting the lane area image to be processed into the lane departure model detector and acquiring a lane departure result;
a sample acquisition unit, configured to acquire an original training sample image set with labels;
the sample processing unit is used for carrying out image processing according to the original training sample image set with the label to obtain a diversified image sample set; the diverse set of image samples comprises: the image processing method comprises the steps of obtaining a scale diversified image sample set, a direction diversified image sample set, a brightness diversified image sample set, a color diversified image sample set, a stretching diversified image sample set, a translation diversified image sample set and a turning diversified image sample set;
and the model acquisition unit is used for carrying out convolutional neural network training according to the acquired diversified image sample set and the original training sample image set with the labels to acquire the lane departure model detector.
6. The apparatus of claim 5, wherein labeled training sample images in the set of labeled original training sample images are labeled; the label includes: left lane departure, no lane departure, right lane departure; and/or,
the marked training sample images in the marked original training sample image set are provided with labels; the label includes: the coordinates of the lane feature points in the current sample image; the feature points include: one or any combination of boundary points, center points, corner points, n equal division points of the current lane.
7. The apparatus of claim 6, wherein labeled training sample images in the set of labeled original training sample images are labeled; the label includes: foreground and background in the current sample image; the foreground is a current lane area, and the background is an area outside the current lane area.
8. An electronic device, comprising: the device comprises a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; a power supply circuit for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program codes; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for executing the lane departure warning method according to any one of claims 1 to 4.
CN201611253350.4A 2016-12-30 2016-12-30 Lane departure early warning method and device and electronic equipment Active CN108268813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611253350.4A CN108268813B (en) 2016-12-30 2016-12-30 Lane departure early warning method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN108268813A CN108268813A (en) 2018-07-10
CN108268813B true CN108268813B (en) 2021-05-07

Family

ID=62754163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611253350.4A Active CN108268813B (en) 2016-12-30 2016-12-30 Lane departure early warning method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN108268813B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784234B (en) * 2018-12-29 2022-01-07 阿波罗智能技术(北京)有限公司 Right-angled bend identification method based on forward fisheye lens and vehicle-mounted equipment
CN110232368B (en) * 2019-06-20 2021-08-24 百度在线网络技术(北京)有限公司 Lane line detection method, lane line detection device, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006014974A2 (en) * 2004-07-26 2006-02-09 Automotive Systems Laboratory, Inc. Vulnerable road user protection system
CN101016052A (en) * 2007-01-25 2007-08-15 吉林大学 Warning method and system for preventing deviation for vehicle on high standard highway
CN105046235A (en) * 2015-08-03 2015-11-11 百度在线网络技术(北京)有限公司 Lane line recognition modeling method and apparatus and recognition method and apparatus
CN105427641A (en) * 2015-11-24 2016-03-23 山东大学 Apparatus and method for accurate recording and analysis of safe driving behavior, based on Internet-of-Vehicles BeiDou positioning across different scenarios

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lane departure warning based on deep neural networks; Tan Dongkui et al.; Proceedings of the 17th Annual Meeting of the China Association for Science and Technology; 2015-07-15; Sections 1-3 *
Research on key technologies of a lane departure warning system based on driving behavior; Qin Hongmao; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; 2014-08-15 (No. 8); Chapters 2 and 6 *

Also Published As

Publication number Publication date
CN108268813A (en) 2018-07-10

Similar Documents

Publication Publication Date Title
JP6842520B2 (en) Object detection methods, devices, equipment, storage media and vehicles
CN108229386B (en) Method, apparatus, and medium for detecting lane line
CN107507167B (en) Cargo tray detection method and system based on point cloud plane contour matching
CN107992819B (en) Method and device for determining vehicle attribute structural features
CN109284674A (en) Method and device for determining lane lines
CN106709475B (en) Obstacle recognition method and device, computer equipment and readable storage medium
CN103716687A (en) Method and system for using fingerprints to track moving objects in video
CN111079638A (en) Target detection model training method, device and medium based on convolutional neural network
CN111311485B (en) Image processing method and related device
CN112927303B (en) Lane line-based automatic driving vehicle-mounted camera pose estimation method and system
CN111950523A (en) Ship detection optimization method and device based on aerial photography, electronic equipment and medium
CN111881832A (en) Lane target detection method, device, equipment and computer readable storage medium
CN110599516A (en) Moving target detection method and device, storage medium and terminal equipment
CN111382637A (en) Pedestrian detection tracking method, device, terminal equipment and medium
CN113793413A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN108268813B (en) Lane departure early warning method and device and electronic equipment
CN109523570B (en) Motion parameter calculation method and device
CN112767412B (en) Vehicle part classification method and device and electronic equipment
Diaz-Cabrera et al. Traffic light recognition during the night based on fuzzy logic clustering
US20230127338A1 (en) Road deterioration determination device, road deterioration determination method, and storage medium
CN110880003B (en) Image matching method and device, storage medium and automobile
CN111709377A (en) Feature extraction method, target re-identification method and device and electronic equipment
CN116543143A (en) Training method of target detection model, target detection method and device
CN111402185A (en) Image detection method and device
CN112101139B (en) Human shape detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 8th Floor, Block E, Building 2, Yard 9, Fenghao East Road, Haidian District, Beijing 100094

Applicant after: Beijing Vion Intelligent Technology Co., Ltd.

Address before: 4th Floor, Courtyard 1, Shangdi East Road, Haidian District, Beijing 100085

Applicant before: Beijing Vion Intelligent Technology Co., Ltd.

GR01 Patent grant