CN111881823A - Ground target detection and evaluation method - Google Patents

Ground target detection and evaluation method

Info

Publication number
CN111881823A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN202010734693.2A
Other languages
Chinese (zh)
Other versions
CN111881823B (en)
Inventor
Xie Dongmei (谢冬梅)
Ye Chunlan (叶春兰)
Current Assignee
Shanghai Maxieye Automobile Technology Co ltd
Original Assignee
Shanghai Maxieye Automobile Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Maxieye Automobile Technology Co ltd
Priority to CN202010734693.2A
Publication of CN111881823A
Application granted
Publication of CN111881823B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a ground target detection and evaluation method comprising the following steps: labeling the detection image and counting the total number of positive samples; taking each labeled pixel point as a reference, expanding the pixel points within its 8-neighborhood to carry the same label; obtaining the total number of detection samples; taking the expanded labeled pixel points as reference pixel points and judging whether, in the detection result, at least one pixel point with the same label exists in the region corresponding to the 8-neighborhood of each reference pixel point; if so, increasing the number of correct detection samples by 1; if not, recording the pixel point as a false-detection target; dividing the number of correct detection samples by the total number of positive samples to obtain a recall rate, dividing the number of false-detection targets by the total number of detection samples to obtain a false-detection rate, and optimizing the preset convolutional neural network based on the recall rate and the false-detection rate. By calculating the recall rate and the false-detection rate to optimize the detection network, the invention solves the lack of a method for evaluating the detection effect.

Description

Ground target detection and evaluation method
Technical Field
The invention relates to the field of intelligent transportation, in particular to a ground target detection and evaluation method.
Background
With the growing number of automobiles, which have brought convenience to human life but have also led to adverse consequences such as frequent traffic accidents and congestion, intelligent transportation systems have emerged as the times require. The most important goal of an intelligent transportation system is to realize unmanned driving. Lane line detection, one of the key technologies in the field of intelligent transportation, is of great significance for intelligent vehicle driving, lane departure warning systems, and vehicle anti-collision systems.
At present, more and more deep-learning-based target detection methods are used to detect ground targets, particularly ground identification lines, but a reasonable method for evaluating the detection effect is still lacking, which to some extent hinders the development of unmanned driving.
Disclosure of Invention
In order to find an effective implementation scheme for evaluating ground target detection, the invention provides a ground target detection and evaluation method, which comprises the following steps:
labeling the acquired detection image, counting all labeled pixel points, and recording the count as the total number of positive samples, wherein the labels fall into four types: lane line, road edge, zebra crossing, and stop line;
taking the labeled pixel points on the detection image as a reference, expanding the pixel points within the 8-neighborhood of each labeled pixel point to carry the same label;
performing grouped detection on the acquired detection image in a preset convolutional neural network according to preset groups, obtaining a detection result, and counting the total number of detection samples, wherein the total number of detection samples is the total number of all pixel points detected by the preset convolutional neural network;
taking the labeled pixel points in the expanded detection image as reference pixel points, and judging whether, in the detection result, at least one pixel point with the same label exists in the region corresponding to the 8-neighborhood of each reference pixel point; if so, increasing the number of correct detection samples by 1; if not, recording the pixel point as a false-detection target;
and dividing the number of correct detection samples by the total number of positive samples to obtain a recall rate, dividing the counted number of false-detection targets by the total number of detection samples to obtain a false-detection rate, and optimizing the preset convolutional neural network based on the recall rate and the false-detection rate.
Preferably, the preset groups comprise a lane line detection group, a road edge detection group and a zebra crossing stop line detection group; the method for grouping and detecting the acquired detection images according to the preset groups in the preset convolutional neural network comprises the following steps:
and carrying out grouping detection on the obtained detection images in a preset convolutional neural network according to the lane line detection group, the road edge detection group and the zebra crossing stop line detection group.
Preferably, the step of obtaining the detection result and counting the total number of the detected samples comprises the following steps:
acquiring three detection results of a lane line detection group, a road edge detection group and a zebra crossing stop line detection group;
processing the three groups of detection results by using a preset lane line group confidence level, a preset road edge group confidence level and a preset zebra crossing stop line confidence level to obtain detection results of the detection images;
and counting the detection result to obtain the total number of the detection samples.
Preferably, the optimizing the preset convolutional neural network based on the recall rate and the false detection rate includes the following steps:
and adjusting the network control information in the preset convolutional neural network based on the recall rate and the false detection rate so as to optimize the preset convolutional neural network.
Preferably, the acquiring three detection results of the lane line detection group, the road edge detection group and the zebra crossing stop line detection group includes the following steps:
acquiring pixels to be detected on the detection image, wherein the pixels to be detected are pixels on the central line of the ground identification line on the detection image or pixels on the road edge;
and acquiring three detection results of a lane line detection group, a road edge detection group and a zebra crossing stop line detection group based on the pixel points to be detected.
Preferably, the ground identification line is the center line of a solid line, or the center line of a painted segment of a dashed line.
Preferably, before labeling the label on the acquired detection image, the method includes the following steps:
and constructing a plane rectangular coordinate system.
Preferably, before counting the number of all pixel points of the label and counting the total number of positive samples, the method includes the following steps:
extracting a region of interest on the detection image;
scaling the region of interest to the resolution used for detection;
and mapping the coordinates of the label of the detected image to the zoomed region of interest.
Preferably, the grouping detection of the acquired detection images in the preset convolutional neural network according to the preset group includes the following steps:
and performing grouping detection on the zoomed region of interest according to a preset group in a preset convolutional neural network.
Preferably, the judging whether at least one pixel point with the same label exists in the region of the detection result corresponding to the 8-neighborhood of each reference pixel point includes the following steps:
confirming an 8-neighborhood range of each reference pixel point based on the coordinates of each reference pixel point;
finding a corresponding area in the detection result;
and judging whether at least one pixel point with the same label exists in the corresponding area of the detection result.
Compared with the prior art, the method for detecting and evaluating the ground targets has the following beneficial effects:
According to the ground target detection and evaluation method, the detection network is optimized by grouped detection of lane lines, road edges, zebra crossings, and stop lines and by calculating the recall rate and the false-detection rate, which solves the lack of an evaluation method for the detection effect and to some extent promotes the development of unmanned driving technology.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a method for detecting and evaluating a ground target according to an embodiment of the present invention;
fig. 2 is a schematic diagram, for the ground target detection and evaluation method according to an embodiment of the present invention, of a ground mark line that is the center line of a painted segment of a dashed line.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In some of the flows described in this specification, the claims, and the figures above, a number of operations appear in a particular order, but it should be clearly understood that these operations may be executed out of the order in which they appear here, or in parallel. Operation numbers such as 202 and 204 merely distinguish the operations and do not themselves imply any execution order. In addition, the flows may include more or fewer operations, and these operations may be executed sequentially or in parallel. It should be noted that descriptions such as "first" and "second" herein are used to distinguish different messages, devices, modules, and the like; they do not imply a sequence, nor do they require "first" and "second" to be of different types.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a method for detecting and evaluating a ground target according to an embodiment of the present invention, and as shown in fig. 1, the method for detecting and evaluating a ground target according to an embodiment of the present invention includes steps S101 to S109, which are as follows:
step S101: labeling a label on the obtained detection image, counting the number of all pixel points of the labeled label and counting the number of the pixel points as the total number of positive samples, wherein the label is divided into four types, namely a lane line, a road edge, a zebra crossing and a stop line.
Step S103: taking the labeled pixel points on the detection image as a reference, expand the pixel points within the 8-neighborhood of each labeled pixel point to carry the same label.
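The 8-neighborhood expansion of step S103 can be sketched as a per-label 3 x 3 dilation (a minimal illustration, assuming labels are stored as an integer mask with 0 for background and 1 to 4 for the four classes; the function name and tie-breaking rule, where an already labeled pixel is never overwritten, are assumptions, since the patent does not specify overlap handling):

```python
import numpy as np

def dilate_labels(label_mask: np.ndarray) -> np.ndarray:
    """Expand each labeled pixel into its 8-neighborhood.

    label_mask: 2-D integer array, 0 = background, 1..4 = the four
    label classes (lane line, road edge, zebra crossing, stop line).
    Returns a new mask where every pixel 8-adjacent to a labeled
    pixel carries the same label.
    """
    h, w = label_mask.shape
    out = label_mask.copy()
    ys, xs = np.nonzero(label_mask)          # coordinates of labeled pixels
    for y, x in zip(ys, xs):
        lab = label_mask[y, x]
        # clip the 3 x 3 window to the image borders
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        region = out[y0:y1, x0:x1]
        region[region == 0] = lab            # only fill background pixels
    return out
```

The same effect could be obtained with a morphological dilation per class; the explicit loop above keeps the 8-neighborhood semantics visible.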
In some embodiments, in order to facilitate subsequently issuing control commands to vehicle components such as the steering wheel, the following step is performed before labeling the acquired detection image:
and constructing a plane rectangular coordinate system.
Referring to fig. 2, fig. 2 shows one way to construct the rectangular plane coordinate system: a corner of the detection image a serves as the origin O, the width direction of the image as the X-axis, and the height direction as the Y-axis. It should be noted that, in an actual implementation, the construction of the coordinate system may be designed for the project scenario, which is not limited by the embodiment of the present invention.
In some embodiments, to save computation, before counting the number of all pixel points of the label and counting the total number of positive samples, the following steps are included:
extracting a region of interest on the detection image;
scaling the region of interest to the resolution used for detection;
and mapping the coordinates of the label of the detected image to the zoomed region of interest.
For example, the detection image captured by a typical image acquisition device is 1920 x 1080 or 1280 x 720. The labels are marked on the original detection image, and the extracted region of interest is usually smaller than the original image, so to avoid distortion the region of interest is scaled to the resolution used for detection, such as 256 x 512. Note that the coordinates of the labels marked on the original image are likewise mapped to the corresponding positions on the 256 x 512 image.
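The ROI extraction and label-coordinate mapping described above can be sketched as follows (a minimal illustration; the box coordinates, output size, and dropping of labels outside the ROI are assumptions for the sketch):

```python
def map_labels_to_roi(labels, roi_box, out_size):
    """Map label coordinates from the original image into a scaled ROI.

    labels:   list of (x, y) label coordinates on the original image
    roi_box:  (x0, y0, x1, y1) region of interest on the original image
    out_size: (out_w, out_h) resolution the ROI is scaled to, e.g. (256, 512)
    Returns coordinates inside the scaled ROI; points outside the ROI
    are dropped.
    """
    x0, y0, x1, y1 = roi_box
    out_w, out_h = out_size
    sx = out_w / (x1 - x0)   # horizontal scale factor
    sy = out_h / (y1 - y0)   # vertical scale factor
    mapped = []
    for x, y in labels:
        if x0 <= x < x1 and y0 <= y < y1:
            mapped.append((int((x - x0) * sx), int((y - y0) * sy)))
    return mapped
```

For a 1280 x 720 image with the lower half as ROI, a label at (100, 500) lands at (20, 199) on the 256 x 512 ROI.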
Step S105: performing grouped detection on the acquired detection image in a preset convolutional neural network according to preset groups, obtaining a detection result, and counting the total number of detection samples, wherein the total number of detection samples is the total number of all pixel points detected by the preset convolutional neural network.
It is worth mentioning that, in an actual road scene, a lane line may coincide with a road edge or with a zebra crossing. Unlike the usual methods that detect only lane lines, the embodiment of the present invention detects four types of targets, i.e. lane lines, road edges, zebra crossings, and stop lines, in a grouped manner. The preset groups therefore preferably include a lane line detection group, a road edge detection group, and a zebra crossing stop line detection group.
Specifically, the grouping detection of the acquired detection images in a preset convolutional neural network according to a preset group comprises the following steps:
and carrying out grouping detection on the obtained detection images in a preset convolutional neural network according to the lane line detection group, the road edge detection group and the zebra crossing stop line detection group.
Corresponding to the processing of the region of interest, the embodiment of the present invention performs group detection on the acquired detection images according to a preset group in a preset convolutional neural network, and further includes the following steps:
and carrying out grouping detection on the zoomed region of interest according to a preset group in a preset convolutional neural network.
In some embodiments, obtaining the test results and counting the total number of test samples comprises the following steps:
acquiring three detection results of a lane line detection group, a road edge detection group and a zebra crossing stop line detection group;
processing the three groups of detection results by using the preset lane line group confidence level, the preset road edge group confidence level and the preset zebra crossing stop line confidence level to obtain the detection results of the detection images;
and counting the detection result to obtain the total number of the detection samples.
Specifically, the step of obtaining three detection results of the lane line detection group, the road edge detection group and the zebra crossing stop line detection group includes the following steps:
acquiring pixels to be detected on the detection image, wherein the pixels to be detected are pixels on the central line of the ground identification line on the detection image or pixels on the road edge;
and acquiring three detection results of the lane line detection group, the road edge detection group and the zebra crossing stop line detection group based on the pixel points to be detected.
Preferably, the ground identification line is the center line of a solid line, or the center line of a painted segment of a dashed line. Exemplarily, fig. 2 shows the case where the ground identification line is the center line of a painted segment of a dashed line; as shown in fig. 2, the pixel point to be detected is a pixel point a(x, y) on that center line.
In some embodiments, the preset convolutional neural network is preferably a VGG network.
It should be noted that other conventional convolutional neural network models, such as MobileNet, ResNet, and DenseNet, can also serve as the preset convolutional neural network. Since the structures of these models are essentially unchanged and the embodiments of the present invention merely apply them to this field, their detailed network structures are not described here.
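Whichever backbone is chosen, the grouped detection described above amounts to normalizing each detection head independently, so that the probabilities within a group sum to 1. A minimal sketch of such a per-group softmax (the head names and class layouts are assumptions for illustration, not the patent's implementation):

```python
import numpy as np

def grouped_softmax(logits):
    """Per-group softmax over the raw scores of the three detection heads.

    logits: dict mapping head name to a 1-D array of raw scores for one pixel:
      'lane'       -> [background, lane line]
      'edge'       -> [background, road edge]
      'zebra_stop' -> [background, zebra crossing, stop line]
    Each group is normalized independently, so its probabilities sum to 1.
    """
    out = {}
    for name, z in logits.items():
        e = np.exp(z - np.max(z))   # subtract max for numerical stability
        out[name] = e / e.sum()
    return out
```

This is why, as the worked example below notes, a two-class group can never give the foreground class a probability below 0 or the pair a sum other than 1.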
Step S107: taking the labeled pixel points in the expanded detection image as reference pixel points, judge whether, in the detection result, at least one pixel point with the same label exists in the region corresponding to the 8-neighborhood of each reference pixel point; if so, increase the number of correct detection samples by 1; if not, record the pixel point as a false-detection target.
In some embodiments, judging whether at least one pixel point with the same label exists in the region of the detection result corresponding to the 8-neighborhood of each reference pixel point includes the following steps:
confirming an 8-neighborhood range of each reference pixel point based on the coordinates of each reference pixel point;
finding a corresponding area in the detection result;
and judging whether at least one pixel point with the same label exists in the corresponding area of the detection result.
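Steps S107 and S109 can be sketched together as follows (a minimal illustration with numpy masks; it follows the patent's convention that a reference pixel with no matching prediction in its 8-neighborhood is counted as a false-detection target, and the function name is hypothetical):

```python
import numpy as np

def evaluate_detection(gt_dilated, pred, total_positive):
    """Count correct and false detections, then derive the recall rate
    and false-detection rate of steps S107/S109.

    gt_dilated:     2-D int mask of expanded (dilated) ground-truth labels
    pred:           2-D int mask of the network's per-pixel predictions
    total_positive: total number of originally labeled (positive) pixels
    Returns (recall_rate, false_detection_rate).
    """
    h, w = gt_dilated.shape
    correct = 0
    false_det = 0
    ys, xs = np.nonzero(gt_dilated)
    for y, x in zip(ys, xs):
        lab = gt_dilated[y, x]
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        if (pred[y0:y1, x0:x1] == lab).any():   # match in the 8-neighborhood
            correct += 1
        else:
            false_det += 1
    total_detected = int((pred != 0).sum())     # total detection samples
    recall = correct / total_positive if total_positive else 0.0
    false_rate = false_det / total_detected if total_detected else 0.0
    return recall, false_rate
```

A prediction anywhere in the 3 x 3 window around a reference pixel counts as correct, which is exactly the tolerance the 8-neighborhood expansion is meant to provide.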
It should be noted that, in the embodiment of the present invention, the number of correct detection samples is accumulated by natural counting, but other cumulative statistical methods are not excluded.
Step S109: and dividing the number of the correct detection samples by the total number of the positive samples to obtain a recall rate, dividing the counted number of the false detection targets by the total number of the detection samples to obtain a false detection rate, and optimizing the preset convolutional neural network based on the recall rate and the false detection rate.
In some embodiments, optimizing the preset convolutional neural network based on the recall rate and the false-detection rate includes the following step:
and adjusting the network control information in the preset convolutional neural network based on the recall rate and the false detection rate so as to optimize the preset convolutional neural network.
In some embodiments, the network control information may be the size of a convolution kernel, the number of channels of convolution output, a computation mode of loss, or the like, or may be a preset convolution neural network structure, a preset convolution neural network depth, an added deconvolution layer, or the like, and the specific adjustment may be according to a use environment, which is not limited in this embodiment of the present invention.
To further explain the principle of grouped detection and of optimizing the preset convolutional neural network based on the recall rate and the false-detection rate in the embodiments of the present invention, an example is given below. It should be understood that this example does not limit the embodiments of the invention.
For example, suppose the ungrouped detection result for one pixel of the detection image is as follows, where the background is everything other than the detection targets, such as the asphalt road surface:

Background   Lane line   Road edge   Zebra crossing   Stop line
0.05         0.45        0.35        0.10             0.05
Then, with a confidence threshold of 0.4, the detection result is a lane line; with a threshold of 0.3, the detection results are a road edge and a lane line. That is, every category is judged against a single confidence value (detected or not). Meanwhile, assume the grouped detection results of the lane line detection group, the road edge detection group, and the zebra crossing stop line detection group are as follows:
(1) Lane line detection group

Background   Lane line
0.25         0.75
(2) Road edge detection group
Background   Road edge
0.45         0.55
(3) Zebra crossing stop line detection group

Background   Zebra crossing   Stop line
0.75         0.15             0.10
At this point, three confidences can be set: a lane group confidence, a road edge group confidence, and a zebra crossing stop line confidence. It should be noted that road edges are generally more complex, harder to detect, and prone to false detection, so their confidence can be set relatively high; lane lines are more important and comparatively simple, and their detection rate must be guaranteed, so a relatively low confidence is set for them. The zebra crossing and stop line confidences are set according to actual needs. In addition, the confidences are relative: the probabilities within each group sum to 1, so when a group contains many classes the probability mass is dispersed and the confidence cannot be set too high, whereas a group with only 2 classes has a theoretical minimum of 0.5.
Suppose the lane group confidence is 0.5, the road edge group confidence is 0.6, and the zebra crossing stop line confidence is 0.4; the final detection result is then a lane line. Here the probabilities of the road edge and the background in the road edge detection group are close, which suggests a possible road-edge false detection. If the road edge confidence were also set to 0.5, the detection result would be both a road edge and a lane line, with the road edge likely a false detection; setting the road edge group confidence to 0.6 filters out the false road edge, leaving only the lane line.
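The per-group thresholding in the worked example above can be sketched as follows (a minimal illustration; the function name and default thresholds of 0.5, 0.6, and 0.4 are taken from the example, not prescribed by the patent):

```python
def fuse_group_results(lane_probs, edge_probs, zebra_stop_probs,
                       lane_conf=0.5, edge_conf=0.6, zs_conf=0.4):
    """Apply per-group confidence thresholds to one pixel's group outputs.

    Each *_probs argument lists (background, class...) probabilities:
      lane_probs       = (background, lane line)
      edge_probs       = (background, road edge)
      zebra_stop_probs = (background, zebra crossing, stop line)
    Returns the list of accepted labels for this pixel.
    """
    accepted = []
    if lane_probs[1] >= lane_conf:
        accepted.append('lane line')
    if edge_probs[1] >= edge_conf:
        accepted.append('road edge')
    if zebra_stop_probs[1] >= zs_conf:
        accepted.append('zebra crossing')
    if zebra_stop_probs[2] >= zs_conf:
        accepted.append('stop line')
    return accepted
```

With the example's grouped outputs, the road edge (0.55 < 0.6) and zebra crossing/stop line (0.15 and 0.10 < 0.4) are filtered, leaving only the lane line; lowering the road edge threshold to 0.5 would let the likely false road edge through.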
In practical implementations, the confidence of each group is read from the vector output as the detection result, and a reliable result is selected through a confidence threshold. The higher the confidence threshold, the fewer targets are detected and the fewer false detections occur; the lower the threshold, the more targets are detected and the more false detections occur. In general, a good model must keep the number of correct detections large (high recall rate) and the number of false detections small (low false-detection rate), so the threshold setting is important: the best confidence can be selected by calculating the recall rate and the false-detection rate under different confidence thresholds. In special cases where detection must be guaranteed, the confidence threshold is lowered appropriately while keeping false detections within an allowable range; conversely, where fewer false detections are required, the threshold is raised appropriately while keeping the detection rate from dropping too low. The embodiment of the present invention is not limited in this respect.
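The threshold selection described here can be sketched as a simple sweep (an illustrative sketch; the combined criterion of recall minus false-detection rate is one possible choice, since the text leaves the exact trade-off open):

```python
def sweep_thresholds(scores, labels, thresholds):
    """Pick the confidence threshold with the best recall/false-detection
    trade-off, as suggested in the text.

    scores:     per-pixel confidence scores for one class
    labels:     parallel booleans, True if the pixel is a true target
    thresholds: candidate confidence thresholds to try
    Returns (best_threshold, recall, false_rate), maximizing
    recall - false_rate.
    """
    total_pos = sum(labels)
    best = None
    for t in thresholds:
        flags = [(s >= t, is_pos) for s, is_pos in zip(scores, labels)]
        n_det = sum(d for d, _ in flags)                 # detected targets
        correct = sum(1 for d, p in flags if d and p)    # correct detections
        false = n_det - correct                          # false detections
        recall = correct / total_pos if total_pos else 0.0
        false_rate = false / n_det if n_det else 0.0
        crit = recall - false_rate
        if best is None or crit > best[0]:
            best = (crit, t, recall, false_rate)
    _, t, recall, false_rate = best
    return t, recall, false_rate
```

In a scenario that must guarantee detection, the criterion could instead maximize recall subject to an allowed false-detection rate, mirroring the special cases described above.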
Compared with the prior art, the method for detecting and evaluating the ground targets has the following beneficial effects:
According to the ground target detection and evaluation method, the detection network is optimized by grouped detection of lane lines, road edges, zebra crossings, and stop lines and by calculating the recall rate and the false-detection rate, which solves the lack of an evaluation method for the detection effect and to some extent promotes the development of unmanned driving technology.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements shall also fall within the protection scope of the present invention.

Claims (10)

1. A ground target detection and evaluation method is characterized by comprising the following steps:
labeling the acquired detection image, counting all labeled pixel points, and recording the count as the total number of positive samples, wherein the labels fall into four types: lane line, road edge, zebra crossing, and stop line;
taking the labeled pixel points on the detection image as a reference, expanding the pixel points within the 8-neighborhood of each labeled pixel point to carry the same label;
performing grouped detection on the acquired detection image in a preset convolutional neural network according to preset groups, obtaining a detection result, and counting the total number of detection samples, wherein the total number of detection samples is the total number of all pixel points detected by the preset convolutional neural network;
taking the labeled pixel points in the expanded detection image as reference pixel points, and judging whether, in the detection result, at least one pixel point with the same label exists in the region corresponding to the 8-neighborhood of each reference pixel point; if so, increasing the number of correct detection samples by 1; if not, recording the pixel point as a false-detection target;
and dividing the number of correct detection samples by the total number of positive samples to obtain a recall rate, dividing the counted number of false-detection targets by the total number of detection samples to obtain a false-detection rate, and optimizing the preset convolutional neural network based on the recall rate and the false-detection rate.
2. The ground target detection and evaluation method according to claim 1, wherein the preset groups comprise a lane line detection group, a road edge detection group and a zebra crossing stop line detection group; the method for grouping and detecting the acquired detection images according to the preset groups in the preset convolutional neural network comprises the following steps:
performing grouped detection on the acquired detection image in the preset convolutional neural network according to the lane line detection group, the road edge detection group and the zebra crossing/stop line detection group.
3. The ground target detection and evaluation method according to claim 2, wherein obtaining the detection result and counting the total number of detection samples comprises the following steps:
acquiring the three detection results of the lane line detection group, the road edge detection group and the zebra crossing/stop line detection group;
processing the three groups of detection results with the preset confidence levels of the lane line group, the road edge group and the zebra crossing/stop line group to obtain the detection result of the detection image;
and counting the detection result to obtain the total number of detection samples.
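The confidence processing of claim 3 could be realized as per-group thresholding. The patent discloses no concrete confidence values; the thresholds, group names and function below are hypothetical:

```python
import numpy as np

# Hypothetical per-group confidence levels; the patent does not
# disclose concrete values.
THRESHOLDS = {"lane_line": 0.5, "road_edge": 0.6, "zebra_stop": 0.4}

def merge_group_results(score_maps):
    """score_maps: dict mapping group name -> 2-D score array in [0, 1].

    Keep a pixel for a group only if its score exceeds that group's
    preset confidence level; return the per-group binary masks plus
    the total number of detected pixels (the 'total detection samples').
    """
    masks = {g: s > THRESHOLDS[g] for g, s in score_maps.items()}
    total_detected = int(sum(m.sum() for m in masks.values()))
    return masks, total_detected
```

Separate confidence levels per group let the thin, sparse lane-line class be tuned independently of the wide zebra-crossing/stop-line class.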
4. The ground target detection and evaluation method according to claim 3, wherein optimizing the preset convolutional neural network based on the recall rate and the false detection rate comprises the following step:
adjusting network control information in the preset convolutional neural network based on the recall rate and the false detection rate, so as to optimize the preset convolutional neural network.
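The claim does not specify what the "network control information" is. One plausible instance, offered purely as a hypothetical sketch, is adjusting per-class loss weights from the two metrics before retraining; every name and constant below is an assumption:

```python
def adjust_class_weights(weights, recall, false_rate,
                         recall_target=0.95, false_target=0.05, step=0.1):
    """Hypothetical adjustment rule based on the two evaluation metrics.

    weights: (positive_class_weight, background_weight) used by the
    training loss.  Nudge the positive-class weight up when recall is
    below target (penalize misses) and the background weight up when
    the false detection rate is above target (penalize false alarms).
    """
    pos_w, bg_w = weights
    if recall < recall_target:
        pos_w *= 1.0 + step   # push the network toward higher recall
    if false_rate > false_target:
        bg_w *= 1.0 + step    # push the network toward fewer false detections
    return pos_w, bg_w
```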
5. The ground target detection and evaluation method according to claim 3, wherein acquiring the three detection results of the lane line detection group, the road edge detection group and the zebra crossing/stop line detection group comprises the following steps:
acquiring pixel points to be detected on the detection image, wherein the pixel points to be detected are pixel points on the center line of a ground marking line or pixel points on a road edge;
and acquiring the three detection results of the lane line detection group, the road edge detection group and the zebra crossing/stop line detection group based on the pixel points to be detected.
6. The ground target detection and evaluation method according to claim 5, wherein the center line of the ground marking line is the center line of a solid line or the center line of a dash segment of a dashed line.
7. The ground target detection and evaluation method according to claim 1, wherein before labeling the acquired detection image, the method comprises the following step:
constructing a planar rectangular coordinate system.
8. The ground target detection and evaluation method according to claim 7, wherein before counting all labeled pixel points as the total number of positive samples, the method comprises the following steps:
extracting a region of interest from the detection image;
scaling the region of interest to the same resolution as the detection image;
and mapping the label coordinates of the detection image onto the scaled region of interest.
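The coordinate mapping of claim 8 is a translation into the ROI frame followed by the ROI's scale factors. A minimal sketch, with a hypothetical function name and (x, y)/(w, h) conventions assumed:

```python
def map_label_to_scaled_roi(label_xy, roi_origin, roi_size, out_size):
    """Map a label coordinate from the full detection image into a
    region of interest that has been scaled to out_size.

    label_xy   : (x, y) of the labeled pixel in the full image
    roi_origin : (x0, y0) top-left corner of the ROI in the full image
    roi_size   : (w, h) of the ROI before scaling
    out_size   : (w, h) after scaling
    """
    x, y = label_xy
    x0, y0 = roi_origin
    sx = out_size[0] / roi_size[0]   # horizontal scale factor
    sy = out_size[1] / roi_size[1]   # vertical scale factor
    return ((x - x0) * sx, (y - y0) * sy)
```

Mapping the labels instead of re-annotating means a single ground-truth annotation of the full image can evaluate detections run on any cropped and rescaled ROI.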
9. The ground target detection and evaluation method according to claim 8, wherein performing grouped detection on the acquired detection image in the preset convolutional neural network according to the preset groups comprises the following step:
performing grouped detection on the scaled region of interest in the preset convolutional neural network according to the preset groups.
10. The method according to claim 8, wherein determining whether at least one pixel point with the same label exists in the region of the detection result corresponding to the 8-neighborhood position of each reference pixel point comprises the following steps:
confirming the 8-neighborhood of each reference pixel point based on the coordinates of that reference pixel point;
finding the corresponding region in the detection result;
and determining whether at least one pixel point with the same label exists in the corresponding region of the detection result.
CN202010734693.2A 2020-07-27 2020-07-27 Ground target detection and evaluation method Active CN111881823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010734693.2A CN111881823B (en) 2020-07-27 2020-07-27 Ground target detection and evaluation method

Publications (2)

Publication Number Publication Date
CN111881823A true CN111881823A (en) 2020-11-03
CN111881823B CN111881823B (en) 2024-07-02

Family

ID=73200708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010734693.2A Active CN111881823B (en) 2020-07-27 2020-07-27 Ground target detection and evaluation method

Country Status (1)

Country Link
CN (1) CN111881823B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308055A (en) * 2020-12-30 2021-02-02 北京沃东天骏信息技术有限公司 Evaluation method and device of face retrieval system, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921040A (en) * 2018-06-08 2018-11-30 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
CN109670376A (en) * 2017-10-13 2019-04-23 神州优车股份有限公司 Lane detection method and system
CN110222591A (en) * 2019-05-16 2019-09-10 天津大学 A kind of method for detecting lane lines based on deep neural network
WO2020048027A1 (en) * 2018-09-06 2020-03-12 惠州市德赛西威汽车电子股份有限公司 Robust lane line detection method based on dynamic region of interest
WO2020103893A1 (en) * 2018-11-21 2020-05-28 北京市商汤科技开发有限公司 Lane line property detection method, device, electronic apparatus, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Room 156, 17th Floor, Science and Technology Innovation Building, No. 777 Zhongguan West Road, Zhuangshi Street, Zhenhai District, Ningbo City, Zhejiang Province, 315201

Applicant after: Zhijia Automotive Technology (Ningbo) Co.,Ltd.

Address before: Room 303-304, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Applicant before: SHANGHAI MAXIEYE AUTOMOBILE TECHNOLOGY CO.,LTD.

Country or region before: China

GR01 Patent grant