CN113128382A - Method and system for detecting lane line at traffic intersection - Google Patents

Method and system for detecting lane line at traffic intersection

Info

Publication number
CN113128382A
CN113128382A (application CN202110369183.4A)
Authority
CN
China
Prior art keywords
traffic intersection
lane line
line detection
image
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110369183.4A
Other languages
Chinese (zh)
Inventor
Fei Dong
Wang Kun
Wang Cheng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Yisa Data Technology Co Ltd
Original Assignee
Qingdao Yisa Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Yisa Data Technology Co Ltd filed Critical Qingdao Yisa Data Technology Co Ltd
Priority to CN202110369183.4A priority Critical patent/CN113128382A/en
Publication of CN113128382A publication Critical patent/CN113128382A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Abstract

The invention provides a method and a system for detecting lane lines at a traffic intersection. The method comprises: acquiring traffic intersection monitoring videos shot by a plurality of cameras whose lenses do not move, and respectively extracting the background images of the traffic intersection monitoring videos; respectively extracting one frame of image, after background extraction, from each traffic intersection monitoring video, and labeling the extracted images to obtain a lane line detection training data set; training on the lane line detection training data set to generate a traffic intersection lane line detection model; acquiring in real time a traffic intersection real-time video shot by a camera, and extracting the background image of the real-time video; and detecting, with the traffic intersection lane line detection model, each background-extracted frame of the real-time video to obtain the lane line detection result of the real-time video. The method can detect and identify lane lines quickly and efficiently.

Description

Method and system for detecting lane line at traffic intersection
Technical Field
The invention belongs to the technical field of intelligent traffic, and particularly relates to a method and a system for detecting lane lines at a traffic intersection.
Background
With the rapid development of intelligent transportation in recent years, monitoring devices have multiplied, computer processing power has grown, and the demand for real-time processing of traffic video has steadily increased. Traffic road conditions are complex, and detecting and extracting the lane lines of a traffic intersection enables effective monitoring of this complex environment. Two lane line detection methods are common:
1. Machine-vision methods extract image features and then decide, from the feature values, whether a region is a lane line. Lane markings, however, come in many varieties, and when traffic is congested the lane line region is occluded by vehicles. These algorithms usually first segment the lane line region by convolution filtering and then detect lane lines with the Hough transform or similar algorithms, so the filtering operator and its parameters must be tuned manually for each scene; otherwise, when the external environment changes noticeably, detection quality degrades.
2. Neural-network-based image recognition is trained on a large number of samples and therefore has advantages in adaptability, feature extraction and the like. With hardware acceleration, a neural network can process frames quickly: for the real-time video streams of traffic intersection monitoring, detection based on a deep convolutional neural network can rapidly detect and identify every frame of image. However, on congested road sections many lane lines are occluded by vehicles and detection quality suffers, so the method needs a large number of training samples; and because the occluded positions differ from frame to frame, thousands of images must each be labeled manually, which is laborious and costly.
In summary, because actual traffic road conditions are complex, detecting and identifying lane lines quickly and efficiently has become an urgent technical problem.
Disclosure of Invention
To address the above defects of the prior art, the invention provides a method and a system for detecting lane lines at a traffic intersection, which can detect and identify lane lines quickly and efficiently.
In a first aspect, a method for detecting a lane line at a traffic intersection comprises the following steps:
acquiring traffic intersection monitoring videos shot by a plurality of cameras whose lenses do not move, and respectively extracting background images of the traffic intersection monitoring videos;
respectively extracting a frame of image after extracting the background image from each traffic intersection monitoring video, and labeling the extracted image to obtain a lane line detection training data set;
training a lane line detection training data set to generate a traffic intersection lane line detection model;
acquiring a real-time video of the traffic intersection shot by a camera in real time, and extracting a background image of the real-time video of the traffic intersection;
and detecting each frame of image after extracting the background image from the real-time video of the traffic intersection by using a traffic intersection lane line detection model to obtain a lane line detection result of the real-time video of the traffic intersection.
Preferably, the extracting the background image of the traffic intersection monitoring video or the traffic intersection real-time video specifically includes:
and adding the continuous multi-frame images in the traffic intersection monitoring video or the traffic intersection real-time video, and calculating the average value of the continuous multi-frame images to obtain the background image.
Preferably, after extracting the background image of the traffic intersection monitoring video, the method further comprises:
and when the preset background update time arrives or the camera moves, re-extracting the background image of the corresponding traffic intersection monitoring video.
Preferably, the labeling of the extracted image specifically includes:
and manually labeling the extracted image by using a preset sample labeling tool.
Preferably, the training the lane line detection training data set to generate the traffic intersection lane line detection model specifically includes:
segmenting images in the lane line detection training data set;
and carrying out pixel-level classification on the segmented image by a convolution and deconvolution method, extracting lane lines in the image, and training to generate the lane line detection model.
In a second aspect, a traffic intersection lane line detection system includes:
a sample collection unit: used for acquiring traffic intersection monitoring videos shot by a plurality of cameras whose lenses do not move and respectively extracting the background images of the traffic intersection monitoring videos; and used for respectively extracting one frame of image, after extracting the background image, from each traffic intersection monitoring video and labeling the extracted image to obtain a lane line detection training data set;
a training unit: the system is used for training the lane line detection training data set to generate a traffic intersection lane line detection model;
an identification unit: the system is used for acquiring a real-time traffic intersection video shot by a camera in real time and extracting a background image of the real-time traffic intersection video; and detecting each frame of image after extracting the background image from the real-time video of the traffic intersection by using a traffic intersection lane line detection model to obtain a lane line detection result of the real-time video of the traffic intersection.
Preferably, the sample collection unit is specifically configured to:
and adding the continuous multi-frame images in the traffic intersection monitoring video or the traffic intersection real-time video, and calculating the average value of the continuous multi-frame images to obtain the background image.
Preferably, the sample collection unit is further configured to:
and when the preset background update time arrives or the camera moves, re-extracting the background image of the corresponding traffic intersection monitoring video.
Preferably, the sample collection unit is specifically configured to:
and manually labeling the extracted image by using a preset sample labeling tool.
Preferably, the training unit is specifically configured to:
segmenting images in the lane line detection training data set;
and carrying out pixel-level classification on the segmented image by a convolution and deconvolution method, extracting lane lines in the image, and training to generate the lane line detection model.
According to the technical scheme, the method and system for detecting lane lines at a traffic intersection adopt a background extraction method, which effectively solves lane line detection when traffic flow is heavy and the lane lines are occluded and greatly reduces the lane line labeling workload; combined with a neural network training model, it also improves the robustness and quality of lane line detection.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 is a flowchart of a method according to an embodiment of the present invention.
Fig. 2 is a block diagram of system modules provided in the second embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby. It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
The first embodiment is as follows:
a method for detecting a lane line at a traffic intersection, referring to fig. 1, comprises the following steps:
s1: the method includes the steps of obtaining traffic intersection monitoring videos shot by cameras with a plurality of lenses not moving, and respectively extracting background images of the traffic intersection monitoring videos, and specifically includes the following steps:
adding continuous multi-frame images in the traffic intersection monitoring video or the traffic intersection real-time video, and calculating the average value of the continuous multi-frame images to obtain the background image, namely calculating according to the following formula:
Figure BDA0003008573560000051
in the formula (I), the compound is shown in the specification,B nfor the background image when the N-th frame image is continuously extracted, N is the number of image frames, fn,fn-1,…,fn-N+1Are N consecutive frames of images.
Due to the influence of the angle, illumination change, camera movement and other factors in the scene, the background image B needs to be updated regularly (i.e. background update time)nThe update formula is:
Figure BDA0003008573560000052
in the formula, Bn-1The last background image.
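The frame-averaging and background-update steps above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented implementation: frames are simplified to 2-D grayscale arrays, and the sliding-window form of the update is an assumption consistent with the averaging formula.

```python
import numpy as np

def background_average(frames):
    """Estimate the background B_n by averaging N consecutive frames."""
    return np.mean(np.stack([np.asarray(f, dtype=float) for f in frames]), axis=0)

def background_update(prev_bg, new_frame, oldest_frame, N):
    """Sliding-window update: replace the oldest frame's contribution in the
    N-frame average with the newest frame's contribution."""
    return prev_bg + (np.asarray(new_frame, dtype=float)
                      - np.asarray(oldest_frame, dtype=float)) / N
```

Because moving vehicles appear at different pixels in different frames, the per-pixel average converges toward the static road surface, which is why the lane lines survive in the background image even under heavy traffic.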
S2: respectively extracting a frame of image after extracting the background image from each traffic intersection monitoring video, and labeling the extracted image to obtain a lane line detection training data set;
specifically, the method can utilize a sample labeling tool such as labelme and the like to manually label the extracted image to generate the lane line labeling image. For a camera with a lens not moving, because the shooting angle is not changed, each frame of image in the same traffic intersection monitoring video has the same background image, after the background image is extracted, one frame of image in each traffic intersection monitoring video is extracted for marking, and the marking information of the image is copied to other frame images of the traffic intersection monitoring video, so that the marking of the whole traffic intersection monitoring video can be completed. After the method is processed by using the background extraction technology, the annotation information of one frame of image can be used as the annotation information of all the frames of images in the video. The manual marking is mainly to mark out lane lines.
S3: training a lane line detection training data set to generate a traffic intersection lane line detection model, which specifically comprises the following steps:
segmenting images in the lane line detection training data set by using a fully convolutional network (FCN);
and carrying out pixel-level classification on the segmented image by a convolution and deconvolution method, extracting lane lines in the image, and training to generate the lane line detection model.
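A toy sketch of the convolve-then-upsample, pixel-level classification idea follows. NumPy stands in for a trained FCN: a hand-made kernel replaces learned filters, and nearest-neighbour upsampling replaces learned deconvolution, so this only illustrates the data flow, not the patent's trained model.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive valid-mode 2-D convolution (illustration only, O(hw*kh*kw))."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def upsample(feat, factor):
    """Nearest-neighbour upsampling, standing in for learned deconvolution."""
    return np.kron(feat, np.ones((factor, factor)))

def lane_mask(img, kernel, threshold):
    """Toy FCN-style pipeline: convolve, downsample by 2, upsample back,
    then classify each pixel as lane / not-lane by thresholding."""
    feat = conv2d(img, kernel)
    feat = feat[::2, ::2]        # crude stride-2 downsampling
    feat = upsample(feat, 2)     # "deconvolution" back toward input size
    return (feat > threshold).astype(np.uint8)
```

In the real model the downsampling path extracts features and the deconvolution path restores spatial resolution so that every pixel receives a class label, which is what "pixel-level classification" refers to.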
S4: connecting to the monitoring equipment (such as a camera) at the traffic intersection, acquiring in real time the traffic intersection real-time video shot by the camera, decoding it, and extracting the background image of the real-time video with the method of step S1;
S5: detecting, by using the traffic intersection lane line detection model, each frame of image after extracting the background image from the real-time video of the traffic intersection, to obtain the lane line detection result of the real-time video of the traffic intersection.
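As a rough end-to-end sketch, steps S4 and S5 reduce to a loop of the following shape. The `model` callable stands in for the trained traffic intersection lane line detection model, and the window length `N` is an assumed parameter; video decoding is abstracted away as an iterable of frames.

```python
import numpy as np

def detect_stream(frames, model, N=3):
    """Per-frame loop: maintain a running background over the last N frames
    and run the lane-line detector on the current background image."""
    window, results = [], []
    for f in frames:
        window.append(np.asarray(f, dtype=float))
        if len(window) > N:
            window.pop(0)                      # drop the oldest frame
        background = np.mean(window, axis=0)   # current background estimate
        results.append(model(background))      # detect lanes on the background
    return results
```

Running detection on the background image rather than the raw frame is what lets the method cope with vehicles occluding the lane lines.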
By adopting the background extraction method, the method effectively solves lane line detection when traffic flow is heavy and the lane lines are occluded, greatly reduces the lane line labeling workload, and, combined with a neural network training model, improves the robustness and quality of lane line detection.
The second embodiment is as follows:
a traffic intersection lane line detection system, see fig. 2, comprising:
a sample collection unit: used for acquiring traffic intersection monitoring videos shot by a plurality of cameras whose lenses do not move and respectively extracting the background images of the traffic intersection monitoring videos; and used for respectively extracting one frame of image, after extracting the background image, from each traffic intersection monitoring video and labeling the extracted image to obtain a lane line detection training data set;
a training unit: the system is used for training the lane line detection training data set to generate a traffic intersection lane line detection model;
an identification unit: the system is used for acquiring a real-time traffic intersection video shot by a camera in real time and extracting a background image of the real-time traffic intersection video; and detecting each frame of image after extracting the background image from the real-time video of the traffic intersection by using a traffic intersection lane line detection model to obtain a lane line detection result of the real-time video of the traffic intersection.
Preferably, the sample collection unit is specifically configured to:
and adding the continuous multi-frame images in the traffic intersection monitoring video or the traffic intersection real-time video, and calculating the average value of the continuous multi-frame images to obtain the background image.
Preferably, the sample collection unit is further configured to:
and when the preset background update time arrives or the camera moves, re-extracting the background image of the corresponding traffic intersection monitoring video.
Preferably, the sample collection unit is specifically configured to:
and manually labeling the extracted image by using a preset sample labeling tool.
Preferably, the training unit is specifically configured to:
segmenting images in the lane line detection training data set;
and carrying out pixel-level classification on the segmented image by a convolution and deconvolution method, extracting lane lines in the image, and training to generate the lane line detection model.
By adopting the background extraction method, the system effectively solves lane line detection when traffic flow is heavy and the lane lines are occluded, greatly reduces the lane line labeling workload, and, combined with a neural network training model, improves the robustness and quality of lane line detection.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; the components and steps of the examples have been described above in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality differently for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
For brevity, for the system provided by the embodiments of the present invention, reference may be made to the corresponding content of the foregoing method embodiment.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.

Claims (10)

1. A method for detecting a lane line at a traffic intersection is characterized by comprising the following steps:
acquiring traffic intersection monitoring videos shot by a plurality of cameras whose lenses do not move, and respectively extracting background images of the traffic intersection monitoring videos;
respectively extracting a frame of image after extracting the background image from each traffic intersection monitoring video, and labeling the extracted image to obtain a lane line detection training data set;
training a lane line detection training data set to generate a traffic intersection lane line detection model;
acquiring a real-time video of the traffic intersection shot by a camera in real time, and extracting a background image of the real-time video of the traffic intersection;
and detecting each frame of image after extracting the background image from the real-time video of the traffic intersection by using a traffic intersection lane line detection model to obtain a lane line detection result of the real-time video of the traffic intersection.
2. The method according to claim 1, wherein the extracting the background image of the traffic intersection surveillance video or the traffic intersection real-time video specifically comprises:
and adding the continuous multi-frame images in the traffic intersection monitoring video or the traffic intersection real-time video, and calculating the average value of the continuous multi-frame images to obtain the background image.
3. The method of detecting a lane line at a traffic intersection according to claim 2, further comprising, after extracting the background image of the traffic intersection surveillance video:
and when the preset background update time arrives or the camera moves, re-extracting the background image of the corresponding traffic intersection monitoring video.
4. The method for detecting the lane line at the traffic intersection according to claim 1, wherein the labeling the extracted image specifically comprises:
and manually labeling the extracted image by using a preset sample labeling tool.
5. The method according to claim 1, wherein the training of the lane line detection training dataset to generate the traffic intersection lane line detection model specifically comprises:
segmenting images in the lane line detection training data set;
and carrying out pixel-level classification on the segmented image by a convolution and deconvolution method, extracting lane lines in the image, and training to generate the lane line detection model.
6. A traffic intersection lane line detection system, comprising:
a sample collection unit: used for acquiring traffic intersection monitoring videos shot by a plurality of cameras whose lenses do not move and respectively extracting the background images of the traffic intersection monitoring videos; and used for respectively extracting one frame of image, after extracting the background image, from each traffic intersection monitoring video and labeling the extracted image to obtain a lane line detection training data set;
a training unit: the system is used for training the lane line detection training data set to generate a traffic intersection lane line detection model;
an identification unit: the system is used for acquiring a real-time traffic intersection video shot by a camera in real time and extracting a background image of the real-time traffic intersection video; and detecting each frame of image after extracting the background image from the real-time video of the traffic intersection by using a traffic intersection lane line detection model to obtain a lane line detection result of the real-time video of the traffic intersection.
7. The traffic intersection lane line detection system of claim 6, wherein the sample acquisition unit is specifically configured to:
and adding the continuous multi-frame images in the traffic intersection monitoring video or the traffic intersection real-time video, and calculating the average value of the continuous multi-frame images to obtain the background image.
8. The traffic intersection lane line detection system of claim 7, wherein the sample acquisition unit is further configured to:
and when the preset background update time arrives or the camera moves, re-extracting the background image of the corresponding traffic intersection monitoring video.
9. The traffic intersection lane line detection system of claim 6, wherein the sample acquisition unit is specifically configured to:
and manually labeling the extracted image by using a preset sample labeling tool.
10. The traffic intersection lane line detection system of claim 6, wherein the training unit is specifically configured to:
segmenting images in the lane line detection training data set;
and carrying out pixel-level classification on the segmented image by a convolution and deconvolution method, extracting lane lines in the image, and training to generate the lane line detection model.
CN202110369183.4A 2021-04-06 2021-04-06 Method and system for detecting lane line at traffic intersection Pending CN113128382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110369183.4A CN113128382A (en) 2021-04-06 2021-04-06 Method and system for detecting lane line at traffic intersection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110369183.4A CN113128382A (en) 2021-04-06 2021-04-06 Method and system for detecting lane line at traffic intersection

Publications (1)

Publication Number Publication Date
CN113128382A true CN113128382A (en) 2021-07-16

Family

ID=76774994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110369183.4A Pending CN113128382A (en) 2021-04-06 2021-04-06 Method and system for detecting lane line at traffic intersection

Country Status (1)

Country Link
CN (1) CN113128382A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022863A (en) * 2021-10-28 2022-02-08 广东工业大学 Deep learning-based lane line detection method, system, computer and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345562A (en) * 2018-09-26 2019-02-15 贵州优易合创大数据资产运营有限公司 A kind of traffic picture intelligent dimension system
CN109584295A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 The method, apparatus and system of automatic marking are carried out to target object in image
CN110619747A (en) * 2019-09-27 2019-12-27 山东奥邦交通设施工程有限公司 Intelligent monitoring method and system for highway road
CN111476157A (en) * 2020-04-07 2020-07-31 南京慧视领航信息技术有限公司 Lane guide arrow recognition method under intersection monitoring environment
CN111598069A (en) * 2020-07-27 2020-08-28 之江实验室 Highway vehicle lane change area analysis method based on deep learning

Similar Documents

Publication Publication Date Title
CN110136449B (en) Deep learning-based traffic video vehicle illegal parking automatic identification snapshot method
US10212397B2 (en) Abandoned object detection apparatus and method and system
CN107977639B (en) Face definition judgment method
CN110189333B (en) Semi-automatic marking method and device for semantic segmentation of picture
CN111967429A (en) Pedestrian re-recognition model training method and device based on active learning
CN111179302B (en) Moving target detection method and device, storage medium and terminal equipment
CN107945523A (en) A kind of road vehicle detection method, DETECTION OF TRAFFIC PARAMETERS method and device
CN112633255B (en) Target detection method, device and equipment
CN107346547A (en) Real-time foreground extracting method and device based on monocular platform
CN111488808A (en) Lane line detection method based on traffic violation image data
CN114049499A (en) Target object detection method, apparatus and storage medium for continuous contour
CN113658131A (en) Tour type ring spinning broken yarn detection method based on machine vision
CN110781853A (en) Crowd abnormality detection method and related device
CN110288629B (en) Target detection automatic labeling method and device based on moving object detection
CN113128382A (en) Method and system for detecting lane line at traffic intersection
CN116704490B (en) License plate recognition method, license plate recognition device and computer equipment
CN114040094A (en) Method and equipment for adjusting preset position based on pan-tilt camera
CN111797832B (en) Automatic generation method and system for image region of interest and image processing method
CN116824135A (en) Atmospheric natural environment test industrial product identification and segmentation method based on machine vision
CN111178244A (en) Method for identifying abnormal production scene
CN115909219A (en) Scene change detection method and system based on video analysis
CN114937248A (en) Vehicle tracking method and device for cross-camera, electronic equipment and storage medium
CN113919393A (en) Parking space identification method, device and equipment
CN114005060A (en) Image data determining method and device
CN113486856A (en) Driver irregular behavior detection method based on semantic segmentation and convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266000 Room 302, building 3, Office No. 77, Lingyan Road, Huangdao District, Qingdao, Shandong Province

Applicant after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Address before: 266000 3rd floor, building 3, optical valley software park, 396 Emeishan Road, Huangdao District, Qingdao City, Shandong Province

Applicant before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210716
