CN113158976A - Ground arrow recognition method, system, terminal and computer readable storage medium - Google Patents

Ground arrow recognition method, system, terminal and computer readable storage medium

Info

Publication number
CN113158976A
CN113158976A (application CN202110523788.4A)
Authority
CN
China
Prior art keywords
arrow
ground
turning
template
arrows
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110523788.4A
Other languages
Chinese (zh)
Other versions
CN113158976B (en)
Inventor
宋京
向卫星
吴子章
王凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zongmu Anchi Intelligent Technology Co ltd
Zongmu Technology Shanghai Co Ltd
Original Assignee
Beijing Zongmu Anchi Intelligent Technology Co ltd
Zongmu Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zongmu Anchi Intelligent Technology Co ltd, Zongmu Technology Shanghai Co Ltd filed Critical Beijing Zongmu Anchi Intelligent Technology Co ltd
Priority to CN202110523788.4A priority Critical patent/CN113158976B/en
Publication of CN113158976A publication Critical patent/CN113158976A/en
Application granted granted Critical
Publication of CN113158976B publication Critical patent/CN113158976B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a ground arrow identification method, system, terminal and computer readable storage medium, wherein the identification method comprises the following steps: extracting the ground arrows and lane lines, and taking a turning arrow among the ground arrows as an initial turning arrow template; performing a pixel-level row traversal on the picture from which the ground arrows and lane lines have been extracted, and detecting the straight arrows among the ground arrows; rotating the initial turning arrow template by a fixed angle to form a turning arrow matching template; performing a pixel-level column traversal on the picture, scaling around the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification against the turning arrow matching template to identify the turning arrow; and merging the identified straight arrows and turning arrows to output the ground arrows. The invention requires neither a large training set nor pixel-level labels, which removes a large amount of labeling work, improves the arrow recognition rate and greatly reduces the CPU occupancy.

Description

Ground arrow recognition method, system, terminal and computer readable storage medium
Technical Field
The invention belongs to the technical field of image processing, relates to an identification method and an identification system, and particularly relates to a method, a system, a terminal and a computer-readable storage medium for identifying a ground arrow.
Background
Road markings are among the traffic rules that must be obeyed while driving; they provide key information to road users, help drivers drive correctly and safely, and keep road traffic flowing smoothly. However, while driving, the markings on the road surface are often overlooked for various reasons, or the driver may not know the specific meaning of a particular marking, which disturbs normal traffic order and easily leads to traffic accidents. Automatically extracting and recognizing road markings with existing technology can better assist the driver in driving correctly.
Urban traffic accidents mostly occur near intersections, and road traffic sign recognition, as an important research branch of advanced driver assistance systems, is mainly used to provide road information and plays an irreplaceable role in driving safety. Traditional automatic segmentation of road traffic signs is mostly based on a series of image preprocessing techniques; for example, Foucher P et al. proposed a method for recognizing pedestrian crosswalks, arrow markings and other signs, which mainly comprises two steps: extracting road traffic sign elements, and connecting the sign components based on a single or repeated rectangular pattern. Neural network models are also used, but most are trained, validated and optimized on large volumes of data. Supervised training of such a model requires a large amount of ground arrow data labeled at the pixel level, and the labeling process requires a great deal of manpower. When the network model is optimized, the neural network behaves as a black box: the optimization may improve results on the current test data while interfering with detection on earlier data, i.e., it achieves only a local optimum rather than a global one. In terms of CPU occupancy, identifying ground arrows with a neural network model occupies 3 to 7 times more CPU than a traditional feature matching algorithm.
Therefore, how to provide a ground arrow identification method, system, terminal and computer readable storage medium has become an urgent problem for those skilled in the art, in order to solve the technical problems in the prior art that a large amount of ground arrow data must be labeled at the pixel level, that the labeling process requires a great deal of manpower, that the identification rate of ground arrows is low, and that the CPU occupancy is too high.
Disclosure of Invention
In view of the above shortcomings of the prior art, an object of the present invention is to provide a method, a system, a terminal and a computer-readable storage medium for identifying a ground arrow, which are used to solve the problems in the prior art that a large amount of ground arrow data must be labeled at the pixel level, that the labeling process requires a great deal of manpower, that the arrow identification rate is low during ground arrow identification, and that the CPU occupancy is too high.
To achieve the above and other related objects, one aspect of the present invention provides a method for identifying a ground arrow, comprising: acquiring a ground ring view; extracting the ground arrows and lane lines from the ground ring view, and taking a turning arrow among the ground arrows as an initial turning arrow template; performing a pixel-level row traversal on the picture from which the ground arrows and lane lines have been extracted, and detecting all straight arrows among the ground arrows; rotating the initial turning arrow template by a fixed angle to form a turning arrow matching template; performing a pixel-level column traversal on the picture, scaling around the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification against the turning arrow matching template to identify the turning arrow; and merging the identified straight arrows and turning arrows to output the ground arrows in the ground ring view.
In an embodiment of the present invention, the step of performing a pixel-level row traversal on the picture from which the ground arrows and lane lines have been extracted and detecting all straight arrows among the ground arrows includes: performing the pixel-level row traversal and saving the center position and row width of each row of the arrow; if the row width of the arrow is judged to change gradually and continuously, determining whether the center points of the rows of the arrow can be fitted by a straight line; if yes, determining that the arrow is a straight arrow; if not, determining that the arrow is not a straight arrow.
In an embodiment of the present invention, the step of determining whether the center points of the rows of the arrow can be fitted by a straight line includes: for rows whose line widths change gradually and continuously, calculating the included angle between the straight lines formed by every two consecutive center points; judging whether the included angle is larger than an angle threshold; if yes, the consecutive center points cannot be fitted by a straight line and the arrow is a non-straight arrow; if not, the consecutive center points can be fitted by a straight line and the arrow is a straight arrow.
In an embodiment of the present invention, before rotating the initial turning arrow template by a fixed angle to form the turning arrow matching template, the method for identifying a ground arrow further includes: performing a pixel-level row traversal on the picture from which the ground arrows and lane lines have been extracted, and calculating the slope angle of the lane line between the center points of every two consecutive rows; removing the maximum and minimum of all the slope angles, and calculating the average included angle of the remaining slope angles; calculating the vehicle body heading angle from the average included angle; and defining the vehicle body heading angle as the fixed angle by which the initial turning arrow template is rotated.
In an embodiment of the present invention, the vehicle body heading angle is calculated as θ' = π/2 - θ, where θ' is the vehicle body heading angle and θ is the average included angle.
In an embodiment of the present invention, the step of performing a pixel-level column traversal on the picture from which the ground arrows and lane lines have been extracted, scaling around the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification against the turning arrow matching template so as to identify the turning arrow includes: performing the pixel-level column traversal and saving the center point position of each column of the arrow; scaling at the center point position to generate a plurality of candidate frames of different sizes; interpolating the different candidate frames so that their size is the same as that of the initial turning arrow template; calculating the template matching degree between each interpolated candidate frame and the initial turning arrow template, eliminating candidate frames whose matching degree is smaller than the matching degree threshold, and saving the candidate frame with the maximum matching degree, its arrow type and its current position.
In an embodiment of the present invention, the step of performing a pixel-level column traversal on the picture from which the ground arrows and lane lines have been extracted, scaling around the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification against the turning arrow matching template so as to identify the turning arrow further includes: classifying the positions of the turning arrows of all template matching results in each row by the distance between center point positions; traversing the center point positions of all template matching results and calculating the distance between every two of them; if the distance is smaller than the distance threshold, regarding the two matching results as matching results of the same turning arrow and grouping them into one tuple; if the distance is larger than or equal to the distance threshold, regarding the two matching results as matching results of another turning arrow and grouping them into another tuple; and selecting the arrow type corresponding to the maximum matching degree within the tuple as the type of the turning arrow at that position.
Another aspect of the present invention provides a system for identifying a ground arrow, including: the acquisition module is used for acquiring a ground annular view; the extraction module is used for extracting a ground arrow and a lane line from the ground annular view, and taking a turning arrow in the ground arrow as a turning arrow initial template; the detection module is used for traversing pixel-level lines of the picture after the ground arrows and the lane lines are extracted, and detecting all straight arrows in the ground arrows; the rotating module is used for rotating the initial turning arrow template by a fixed angle to form a turning arrow matching template; the identification module is used for traversing pixel-level columns of the pictures after the ground arrows and the lane lines are extracted, scaling the turning arrows to generate candidate frames with different sizes, and then performing template matching, position clustering and turning arrow type identification on the candidate frames and a turning arrow matching template to identify the turning arrows; and the output module is used for combining the identified straight arrow and the identified turning arrow so as to output the ground arrow in the ground ring view.
Yet another aspect of the invention provides a computer readable storage medium on which a computer program is stored, and the program, when executed by a processor, implements the above method for identifying a ground arrow.
In a final aspect of the present invention, an identification terminal for a ground arrow is provided, including: a processor and a memory; the memory is used for storing computer programs, and the processor is used for executing the computer programs stored by the memory so as to enable the identification terminal to execute the identification method of the ground arrow.
In an embodiment of the invention, the identification terminal of the ground arrow includes a vehicle-mounted terminal.
As described above, the method, the system, the terminal and the computer-readable storage medium for identifying a ground arrow according to the present invention have the following advantages:
Firstly, the invention identifies the arrows using triangle feature detection and template matching instead of a neural network model, so a large training set and pixel-level labels are not needed, which removes a large amount of work.
Secondly, the invention improves the arrow recognition rate; the algorithm is linear in space-time complexity, and the CPU occupancy is greatly reduced.
Thirdly, the method maintains a good recognition rate when processing multiple arrows and when the arrows are worn or deformed.
Drawings
Fig. 1 is a flowchart illustrating a method for identifying a ground arrow according to an embodiment of the present invention.
FIG. 2 shows an exemplary diagram of the initial template of the turning arrow of the present invention.
FIG. 3 is a diagram illustrating the semantically segmented ground ring view according to the present invention.
FIG. 4 is a schematic diagram of the ground arrows and lane lines extracted by the present invention.
FIG. 5 is an exemplary illustration of rotating the initial turning arrow template by a fixed angle according to the present invention.
Fig. 6 is a flow chart illustrating an implementation of S15 according to the present invention.
Fig. 7A is a schematic diagram illustrating the recognition effect of the final ground arrow according to the present invention.
Fig. 7B is a schematic diagram illustrating the recognition effect of the final ground arrow according to the present invention.
Fig. 7C is a schematic diagram illustrating the recognition effect of the final ground arrow according to the present invention.
Fig. 8 is a schematic structural diagram of a ground arrow recognition system according to an embodiment of the present invention.
Description of the element reference numerals
8 Recognition system of ground arrow
81 Acquisition module
82 Extraction module
83 Detection module
84 Rotary module
85 Identification module
86 Output module
S11~S16 Steps
S151~S159 Steps
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
Example one
The embodiment provides a method for identifying a ground arrow, which is characterized by comprising the following steps:
acquiring a ground ring view;
extracting a ground arrow and a lane line from the ground annular view, and taking a turning arrow in the ground arrow as an initial template of the turning arrow;
performing pixel-level row traversal on the picture after the ground arrows and the lane lines are extracted, and detecting all straight arrows in the ground arrows; rotating the initial turning arrow template by a fixed angle to form a turning arrow matching template;
performing a pixel-level column traversal on the picture from which the ground arrows and lane lines have been extracted, scaling around the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification against the turning arrow matching template to identify the turning arrow;
merging the identified straight arrows and turn arrows to output the ground arrows in the ground ring view.
The method for identifying the ground arrow provided in the present embodiment will be described in detail with reference to the drawings. Please refer to fig. 1, which is a flowchart illustrating an identification method of a ground arrow according to an embodiment. As shown in fig. 1, the method for identifying a ground arrow specifically includes the following steps:
and S11, acquiring a ground ring view.
Specifically, the step S11 includes acquiring four ground pictures acquired by four fisheye cameras (front, back, left and right), and splicing the four ground pictures to form a ground ring view.
And S12, extracting the ground arrows and lane lines from the ground ring view, and extracting a turning arrow from the ground arrows to serve as the initial turning arrow template. Referring to FIG. 2, which shows an exemplary diagram of the initial turning arrow templates; as shown in fig. 2, a plurality of initial turning arrow templates are included.
In this embodiment, the ground ring view is semantically segmented through the FCN network (see fig. 3, which is shown as an example of the ground ring view after the semantic segmentation), and ground arrows and lane lines in the ground ring view after the semantic segmentation are extracted (see fig. 4, which is shown as an example of the extracted ground arrows and lane lines). In this embodiment, any segmentation method that can segment the ground ring view to generate the segmentation result shown in fig. 3 is suitable for the present invention.
Specifically, the RGB value of the extracted ground arrows is (255, 255, 255), and the RGB value of the lane lines is (0, 0, 255).
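By way of illustration only, the following is a minimal Python sketch of this extraction step; it is not code from the patent. It assumes the segmented ring view is available as an H x W x 3 uint8 NumPy array in RGB channel order, and the function name extract_masks is introduced here for the example.

```python
import numpy as np

def extract_masks(segmented_rgb: np.ndarray):
    """Return boolean masks for the ground arrows and the lane lines."""
    # Arrow pixels are pure white (255, 255, 255); lane-line pixels are (0, 0, 255).
    arrow_mask = np.all(segmented_rgb == (255, 255, 255), axis=-1)
    lane_mask = np.all(segmented_rgb == (0, 0, 255), axis=-1)
    return arrow_mask, lane_mask
```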
And S13, performing pixel-level row traversal on the picture after the ground arrows and the lane lines are extracted, and detecting all straight arrows in the ground arrows.
In this embodiment, the S13 includes:
and (4) performing pixel-level row traversal (for example, performing row traversal with 3 as a step length) on the picture after the ground arrows and the lane lines are extracted, and storing the central position and the row width of each row of arrows. Specifically, this step is to save the coordinates of all points in each row with RGB values (255 ), calculate the average value of these points, and count the pixel line width of each row arrow.
It is then judged whether the line width of the arrow changes gradually and continuously. If so, it is determined whether the center points of the rows of the arrow can be fitted by a straight line: if a straight line can be fitted, the arrow is determined to be a straight arrow; if not, the arrow is determined not to be a straight arrow. If the line width does not change gradually and continuously, the arrow is likewise determined not to be a straight arrow.
In the present embodiment, a continuous gradual change of the line width means that the line width continuously increases, or continuously decreases, over t consecutive rows.
Theoretically, the center point of every traversed row of a straight arrow lies on one straight line, but in practice there is some deviation. Therefore, in this embodiment, whether a straight line can be fitted is determined by taking three consecutive center points from the gradually changing rows and calculating the difference between the angles of the straight lines formed by each pair of consecutive points. If the angle difference Δθ = |θ(Li) - θ(Li+1)| is greater than an angle threshold, the center points are considered not fittable by a straight line and the arrow is determined to be a non-straight arrow; if Δθ is less than or equal to the threshold, the center points are considered fittable by a straight line and the arrow is determined to be a straight arrow.
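As an illustrative sketch of S13 (not the patent's implementation), the following Python function traverses rows of an arrow mask, records the center and width of each row, and accepts the blob as a straight arrow only when the row widths change monotonically and consecutive center-to-center angles stay within a threshold. The row step of 3 follows the embodiment; the 5-degree angle threshold and the whole-run monotonicity test are simplifying assumptions introduced here.

```python
import numpy as np

def is_straight_arrow(arrow_mask: np.ndarray, step: int = 3,
                      angle_thresh_deg: float = 5.0) -> bool:
    """Row-wise straight-arrow test on a single-arrow boolean mask."""
    centers, widths = [], []
    for row in range(0, arrow_mask.shape[0], step):
        cols = np.flatnonzero(arrow_mask[row])
        if cols.size == 0:
            continue
        centers.append((float(cols.mean()), float(row)))  # (x, y) center of this row
        widths.append(int(cols.size))                      # pixel line width of this row

    if len(centers) < 3:
        return False

    # Continuous gradual change: widths continuously increase or continuously decrease.
    diffs = np.diff(widths)
    if not (np.all(diffs >= 0) or np.all(diffs <= 0)):
        return False

    # The center points should be fittable by one straight line: the angle between
    # successive center-to-center segments must stay below the threshold.
    angles = [np.arctan2(y2 - y1, x2 - x1)
              for (x1, y1), (x2, y2) in zip(centers, centers[1:])]
    deltas = np.abs(np.degrees(np.diff(angles)))
    return bool(np.all(deltas <= angle_thresh_deg))
```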
And S14, calculating the fixed angle of the initial turning arrow template, and rotating the initial turning arrow template according to the fixed angle to form a turning arrow matching template.
In this embodiment, the step of calculating the fixed angle of the initial template of the turning arrow to be rotated includes:
the picture from which the ground arrows and the lane lines are extracted is subjected to pixel-level line traversal (for example, line traversal with a step size of 17), coordinates of which RGB values are (0, 255) for each line of the lane lines are saved, and the center positions of the points, i.e., p shown in fig. 5, are calculated1(x,y)…pn(x,y)。
The slope angle of the lane line is calculated between the center points of every two consecutive rows;
the maximum and minimum of all slope angles are removed, and the average included angle of the remaining slope angles is calculated;
the vehicle body heading angle is calculated from the average included angle and is defined as the fixed angle by which the initial turning arrow template is rotated.
In this embodiment, the vehicle body heading angle is calculated as θ' = π/2 - θ, where θ' is the vehicle body heading angle and θ is the average included angle.
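The following minimal Python sketch (an illustration, not the patent's code) computes the vehicle body heading angle from the lane-line center points p1(x, y) … pn(x, y) collected above and rotates an initial turning arrow template accordingly with OpenCV. It assumes at least four center points are available and keeps the rotated template at its original size.

```python
import cv2
import numpy as np

def heading_angle(centers):
    """centers: list of (x, y) lane-line center points from the row traversal."""
    angles = [np.arctan2(y2 - y1, x2 - x1)
              for (x1, y1), (x2, y2) in zip(centers, centers[1:])]
    if len(angles) > 2:
        angles = sorted(angles)[1:-1]      # remove the maximum and minimum slope angles
    theta = float(np.mean(angles))         # average included angle theta
    return np.pi / 2.0 - theta             # vehicle body heading angle theta'

def rotate_template(template: np.ndarray, theta_prime: float) -> np.ndarray:
    """Rotate an initial turning arrow template by the fixed angle theta' (radians)."""
    h, w = template.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), np.degrees(theta_prime), 1.0)
    return cv2.warpAffine(template, m, (w, h))
```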
In the present embodiment, S13 and S14 are parallel operations as shown in fig. 1. In practical applications, S13 and S14 may be executed in parallel or sequentially.
And S15, performing a pixel-level column traversal on the picture from which the ground arrows and lane lines have been extracted, scaling around the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification against the turning arrow matching template to identify the turning arrow. Please refer to fig. 6, which shows an implementation flowchart of S15. As shown in fig. 6, S15 specifically includes the following steps:
s151, performing pixel-level column traversal (for example, performing column traversal with 3 as a step length) on the picture after the ground arrow and the lane line are extracted, and storing the center point position of each column of the arrow.
Specifically, the coordinates of all points in the column whose RGB value is (255, 255, 255) are saved, and the center position of these points is calculated.
S152, zooming at the central point position to generate a plurality of candidate frames with different sizes.
In this embodiment, since the size of the segmented turning arrow may vary, and considering the influence of the distance between the arrow and the vehicle in different environments, 5 candidate frames of different sizes are generated.
And S153, interpolating different candidate frames to enable the size of the candidate frames to be the same as that of the initial template of the turning arrow.
In this embodiment, bilinear interpolation is used for different candidate frames.
S154, calculating the template matching degree between each interpolated candidate frame and the initial turning arrow templates, eliminating candidate frames whose matching degree is smaller than the matching degree threshold, and saving, among the 5 candidate frames at this column position, the candidate frame with the maximum matching degree, its arrow type and its current position, i.e., (confidence, class, (x', y')).
In practical applications, any matching algorithm capable of calculating a matching degree can be applied to the invention. For example, the present embodiment uses a correlation coefficient template matching algorithm to calculate the template matching degree between each interpolated candidate frame and the initial turning arrow templates (e.g., each initial template in fig. 2).
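The following minimal Python sketch (an illustration, not the patent's code) covers S152 to S154 for one column center: candidate frames of several sizes are cropped around the center, resized to the template size with bilinear interpolation, and scored against each rotated turning arrow template with OpenCV's normalized correlation-coefficient matching. The frame sizes, the 0.6 matching-degree threshold and the dictionary of templates keyed by arrow type are assumptions introduced for the example.

```python
import cv2
import numpy as np

def match_at_center(arrow_mask: np.ndarray, cx: int, cy: int,
                    templates: dict,            # arrow type -> rotated template (uint8)
                    frame_sizes=(40, 60, 80, 100, 120),
                    match_thresh: float = 0.6):
    """Return (confidence, arrow_type, (x, y)) for the best candidate frame, or None."""
    img = arrow_mask.astype(np.uint8) * 255
    best = None
    for size in frame_sizes:
        half = size // 2
        y0, y1 = max(cy - half, 0), min(cy + half, img.shape[0])
        x0, x1 = max(cx - half, 0), min(cx + half, img.shape[1])
        crop = img[y0:y1, x0:x1]
        if crop.size == 0:
            continue
        for arrow_type, tpl in templates.items():
            th, tw = tpl.shape[:2]
            # Bilinear interpolation to the size of the initial turning arrow template.
            resized = cv2.resize(crop, (tw, th), interpolation=cv2.INTER_LINEAR)
            score = float(cv2.matchTemplate(resized, tpl, cv2.TM_CCOEFF_NORMED)[0, 0])
            if score >= match_thresh and (best is None or score > best[0]):
                best = (score, arrow_type, (cx, cy))
    return best
```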
And S155, classifying the positions of the turning arrows of the matched results of all the templates in each row by using the distance between the positions of the central points.
S156, traversing the positions of the center points of the results matched with all the templates, and calculating the distance between every two templates;
The distance between two center point positions is calculated according to the following formula:
d = (xᵢ - xᵢ₊₁)² + (yᵢ - yᵢ₊₁)²
and S157, if the distance is smaller than the distance threshold, considering the matching result of the two templates as the matching result under the same turning arrow, and attributing the matching result of the two templates to a tuple.
S158, if the distance is larger than or equal to the distance threshold, the matching result of the two templates is regarded as the matching result under another turning arrow, and the matching result of the two templates is classified into another tuple;
and S159, selecting the arrow type corresponding to the maximum matching degree from the tuple as the type of the turning arrow at the position.
And S16, merging the identified straight arrow and the identified turning arrow to output the ground arrow in the ground ring view. Referring to fig. 7A, fig. 7B and fig. 7C are schematic diagrams illustrating the recognition effect of the final ground arrow, respectively.
The method for identifying the ground arrow has the following beneficial effects:
first, the method for recognizing a ground arrow in this embodiment uses triangle feature detection and template matching to recognize the arrow, and does not use a neural network model, so that a large amount of training sets and pixel-level labels are not required, and a large amount of workload is reduced.
Secondly, the method for identifying the ground arrow improves the identification rate of the arrow, the algorithms are linear in the aspect of space-time complexity, and the CPU occupancy rate is greatly reduced.
Thirdly, the method for recognizing the ground arrow according to this embodiment maintains a good recognition rate when processing multiple arrows and when the arrows are worn or deformed.
The present embodiment also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the identification method as described in fig. 1.
The present application may be embodied as systems, methods, and/or computer program products, in any combination of technical details. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable programs described herein may be downloaded from a computer-readable storage medium to a variety of computing/processing devices, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device. The computer program instructions for carrying out operations of the present application may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine related instructions, microcode, firmware instructions, state setting data, integrated circuit configuration data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C + + or the like and procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, the electronic circuitry can execute computer-readable program instructions to implement aspects of the present application by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA).
Example two
The present embodiment provides a system for identifying a ground arrow, including:
the acquisition module is used for acquiring a ground annular view;
the extraction module is used for extracting a ground arrow and a lane line from the ground annular view, and taking a turning arrow in the ground arrow as a turning arrow initial template;
the detection module is used for traversing pixel-level lines of the picture after the ground arrows and the lane lines are extracted, and detecting all straight arrows in the ground arrows;
the rotating module is used for rotating the initial turning arrow template by a fixed angle to form a turning arrow matching template;
the identification module is used for traversing pixel-level columns of the pictures after the ground arrows and the lane lines are extracted, scaling the turning arrows to generate candidate frames with different sizes, and then performing template matching, position clustering and turning arrow type identification on the candidate frames and a turning arrow matching template to identify the turning arrows;
and the output module is used for combining the identified straight arrow and the identified turning arrow so as to output the ground arrow in the ground ring view.
The ground arrow recognition system provided in the present embodiment will be described in detail with reference to the drawings. Please refer to fig. 8, which is a schematic structural diagram of a ground arrow recognition system in an embodiment. As shown in fig. 8, the ground arrow recognition system 8 includes an obtaining module 81, an extracting module 82, a detecting module 83, a rotating module 84, a recognition module 85, and an output module 86.
The acquiring module 81 is used for acquiring a ground ring view.
Specifically, the obtaining module 81 obtains four ground pictures collected by four fisheye cameras (front, back, left and right), and splices the four ground pictures to form a ground ring view.
The extraction module 82 is configured to extract a ground arrow and a lane line from the ground ring view, and extract a turn arrow in the ground arrow as an initial template of the turn arrow.
In this embodiment, the extracting module 82 performs semantic segmentation on the ground annular view through the FCN network, and extracts a ground arrow and a lane line in the ground annular view after the semantic segmentation.
Specifically, the extraction module 82 extracts the RGB values of the ground arrows as (255,255,255) and the RGB values of the lane lines as (0,0, 255).
The detection module 83 is configured to perform pixel-level row traversal on the picture after the ground arrows and the lane lines are extracted, and detect all straight arrows in the ground arrows.
In this embodiment, the detection process of the detection module 83 is as follows:
and (4) performing pixel-level row traversal (for example, performing row traversal with 3 as a step length) on the picture after the ground arrows and the lane lines are extracted, and storing the central position and the row width of each row of arrows. Specifically, this step is to save the coordinates of all points in each row with RGB values (255 ), calculate the average value of these points, and count the pixel line width of each row arrow.
Judging whether the line width of the arrow continuously and progressively changes, if so, determining whether the central point of each line of the arrow can be subjected to straight line fitting; if the straight line fitting can be carried out, determining that the arrow is a straight arrow; if the straight line fitting cannot be carried out, the arrow is determined to be not a straight arrow. If not, determining that the arrow is not a straight arrow.
In the present embodiment, the continuous gradual change of the line width of the arrow includes continuously increasing t values or continuously decreasing t values.
Theoretically, the center point of each row traversed by the straight arrow is on a straight line, but actually, there is a deviation. Therefore, in this embodiment, whether straight line fitting is possible is determined by calculating a difference between included angles of straight lines formed by two continuous points from three continuous and tapered center points. If the included angle difference is delta theta (| theta (L)i)-θ(Li+1) I) is larger than an angle threshold value theta, the arrow is considered not to be fitted into a straight line, and the arrow is determined to be a straight arrow; and if the included angle difference delta theta is smaller than or equal to an angle threshold theta, the included angle difference delta theta is considered to be capable of being fitted into a straight line, and the arrow is determined to be a non-straight arrow.
The rotation module 84 is configured to calculate a fixed angle at which the initial turning arrow template needs to rotate, and rotate the initial turning arrow template according to the fixed angle to form a matching turning arrow template.
In this embodiment, the process of the rotation module 84 calculating the fixed angle of the initial template of the turning arrow to be rotated is as follows:
the picture from which the ground arrows and the lane lines are extracted is subjected to pixel-level line traversal (for example, line traversal with a step size of 17), coordinates of which RGB values are (0, 255) for each line of the lane lines are saved, and the center positions of the points, i.e., p shown in fig. 5, are calculated1(x,y)…pn(x,y)。
The slope angle of the lane line is calculated between the center points of every two consecutive rows;
the maximum and minimum of all slope angles are removed, and the average included angle of the remaining slope angles is calculated;
the vehicle body heading angle is calculated from the average included angle and is defined as the fixed angle by which the initial turning arrow template is rotated.
In this embodiment, the vehicle body heading angle is calculated as θ' = π/2 - θ, where θ' is the vehicle body heading angle and θ is the average included angle.
The identification module 85 is configured to perform a pixel-level column traversal on the picture from which the ground arrows and lane lines have been extracted, scale around the turning arrow to generate candidate frames of different sizes, and then perform template matching, position clustering and turning arrow type identification against the turning arrow matching template, so as to identify the turning arrow.
Specifically, the identification process of the identification module 85 is as follows:
A pixel-level column traversal is performed on the picture from which the ground arrows and lane lines have been extracted (for example, column traversal with a step of 3), and the center point position of each column of the arrow is saved; a plurality of candidate frames of different sizes are generated by scaling at the center point position; the different candidate frames are interpolated so that their size is the same as that of the initial turning arrow template; the template matching degree between each interpolated candidate frame and the initial turning arrow templates is calculated, candidate frames whose matching degree is smaller than the matching degree threshold are eliminated, and, among the 5 candidate frames in the column, the candidate frame with the maximum matching degree, its arrow type and its current position, i.e., (confidence, class, (x', y')), are saved. The positions of the turning arrows of all template matching results in each row are then classified using the distance between center point positions: the center point positions of all template matching results are traversed and the distance between every two of them is calculated; if the distance is smaller than the distance threshold, the two matching results are regarded as matching results of the same turning arrow and are grouped into one tuple; if the distance is larger than or equal to the distance threshold, the two matching results are regarded as matching results of another turning arrow and are grouped into another tuple. Finally, the arrow type corresponding to the maximum matching degree within the tuple is selected as the type of the turning arrow at that position.
The output module 86 is configured to combine the identified straight arrow and the identified turning arrow to output the ground arrow in the ground ring view.
It should be noted that the division of the modules of the above system is only a logical division; in actual implementation they may be wholly or partially integrated into one physical entity, or may be physically separated. The modules may all be implemented as software called by a processing element, may all be implemented as hardware, or some modules may be implemented as software called by a processing element while others are implemented as hardware. For example, the x module may be a separately established processing element, or may be integrated into a chip of the system. In addition, the x module may be stored in the memory of the system in the form of program code and called by one of the processing elements of the system to execute the functions of the x module. The other modules are implemented similarly. All or some of the modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software. The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more digital signal processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), and the like. When a module is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. These modules may also be integrated together and implemented in the form of a System-on-a-Chip (SoC).
EXAMPLE III
This embodiment provides a ground arrow identification terminal, which includes: a processor, a memory, a transceiver, a communication interface and/or a system bus. The memory is used for storing the computer program, and the communication interface is used for communicating with other devices; they are connected with the processor and the transceiver through the system bus and communicate with each other, and the processor and the transceiver are used for running the computer program so that the identification terminal executes the steps of the above method for identifying a ground arrow. In this embodiment, the ground arrow identification terminal includes a vehicle-mounted terminal.
The above-mentioned system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The communication interface is used for realizing communication between the database access device and other equipment (such as a client, a read-write library and a read-only library). The Memory may include a Random Access Memory (RAM), and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the Integrated Circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, or discrete hardware components.
The protection scope of the method for identifying a ground arrow according to the present invention is not limited to the execution sequence of the steps illustrated in the embodiment, and all the solutions obtained by adding, subtracting, and replacing the steps according to the prior art according to the principles of the present invention are included in the protection scope of the present invention.
The invention also provides a system for identifying a ground arrow, which can implement the method for identifying a ground arrow of the invention, but the implementation device of the method for identifying a ground arrow of the invention includes, but is not limited to, the structure of the system for identifying a ground arrow recited in the embodiment, and all structural modifications and substitutions in the prior art made according to the principle of the invention are included in the protection scope of the invention.
In summary, the method, the system, the terminal and the computer-readable storage medium for identifying a ground arrow according to the present invention have the following advantages:
Firstly, the invention identifies the arrows using triangle feature detection and template matching instead of a neural network model, so a large training set and pixel-level labels are not needed, which removes a large amount of work.
Secondly, the invention improves the arrow recognition rate; the algorithm is linear in space-time complexity, and the CPU occupancy is greatly reduced.
Thirdly, the method maintains a good recognition rate when processing multiple arrows and when the arrows are worn or deformed. The invention effectively overcomes various defects in the prior art and has high industrial utilization value.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (11)

1. A method for identifying a ground arrow, comprising:
acquiring a ground ring view;
extracting a ground arrow and a lane line from the ground annular view, and taking a turning arrow in the ground arrow as an initial template of the turning arrow;
performing pixel-level row traversal on the picture after the ground arrows and the lane lines are extracted, and detecting all straight arrows in the ground arrows; rotating the initial turning arrow template by a fixed angle to form a turning arrow matching template;
performing a pixel-level column traversal on the picture from which the ground arrows and lane lines have been extracted, scaling around the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification against the turning arrow matching template to identify the turning arrow;
merging the identified straight arrows and turn arrows to output the ground arrows in the ground ring view.
2. The method for identifying a ground arrow according to claim 1, wherein the step of performing pixel-level row traversal on the picture after the ground arrow and the lane line are extracted, and detecting all straight arrows in the ground arrow comprises:
performing pixel-level row traversal on the picture after the ground arrows and the lane lines are extracted, and storing the central position and the row width of each row of arrows;
if the continuous gradual change of the line width of the arrow is judged to exist, whether the central point of each line of the arrow can be subjected to straight line fitting is determined; if yes, determining the arrow as a straight arrow; if not, determining that the arrow is not a straight arrow.
3. The method for identifying ground arrows according to claim 2, wherein the step of determining whether the center point of each row of arrows can be fitted with a straight line comprises:
calculating an included angle between straight lines formed by every two points in the continuous central points on the basis of continuous gradual change of the line width of each line arrow;
judging whether the included angle is larger than an included angle threshold value; if yes, the continuous central points cannot be fitted into a straight line, and the arrow is a non-straight arrow; if not, the continuous central points can be fitted into a straight line, and the arrow is a straight arrow.
4. The method for identifying a ground arrow according to claim 3, wherein before rotating the initial turning arrow template by a fixed angle to form a turning arrow matching template, the method further comprises:
traversing the picture with the ground arrow and the lane line extracted at a pixel level, and calculating the slope included angle of the lane line for the central points of two continuous lines;
eliminating the maximum included angles and the minimum included angles of all slope included angles, and calculating the average included angle of the remaining slope included angles;
calculating a vehicle body course angle according to the average included angle;
the body heading angle is defined as the fixed angle of rotation of the initial template of the turning arrow.
5. The method for identifying a ground arrow according to claim 3, wherein the heading angle of the vehicle body is calculated in the following manner:
θ'=π/2-θ;
wherein, theta' is the vehicle body course angle, and theta is the average included angle.
6. The method for identifying a ground arrow according to claim 4, wherein the step of performing a pixel-level column traversal on the picture from which the ground arrows and lane lines have been extracted, scaling around the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification against the turning arrow matching template so as to identify the turning arrow comprises:
traversing pixel-level rows of the picture after the ground arrows and the lane lines are extracted, and storing the position of the central point of each row of the arrows;
generating a plurality of candidate frames with different sizes by zooming at the central point position;
interpolating different candidate frames to make the size of the candidate frames the same as that of the initial template of the turning arrow;
calculating the template matching degree between the candidate frame after interpolation and the initial template of the turning arrow, eliminating the candidate frame with the matching degree smaller than the threshold value of the matching degree, and storing the candidate frame with the maximum matching degree, the arrow type and the current position of the candidate frame.
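A sketch of the per-center-point candidate generation and matching using OpenCV; the mapping from arrow type to template, the scale factors and the 0.6 matching-degree threshold are assumptions made for illustration, not values from the patent.

```python
import cv2
import numpy as np

def match_candidates_at(picture, center, templates,
                        scales=(0.75, 1.0, 1.25), match_thresh=0.6):
    """Generate multi-scale candidate frames around one center point and
    score them against the templates.

    picture   -- binary picture with ground arrows and lane lines extracted
    center    -- integer (row, col) center point position
    templates -- {arrow_type: template image} (assumed structure)
    Returns (matching degree, arrow type, (row, col)) or None.
    """
    cy, cx = center
    best = None
    for arrow_type, tmpl in templates.items():
        th, tw = tmpl.shape[:2]
        for s in scales:
            h, w = max(int(round(th * s)), 2), max(int(round(tw * s)), 2)
            y0, x0 = max(cy - h // 2, 0), max(cx - w // 2, 0)
            crop = picture[y0:y0 + h, x0:x0 + w]
            if crop.shape[0] < 2 or crop.shape[1] < 2:
                continue
            # Interpolate the candidate frame to the template size, then
            # compute the template matching degree (normalized correlation).
            crop = cv2.resize(crop.astype(np.float32), (tw, th),
                              interpolation=cv2.INTER_LINEAR)
            score = float(cv2.matchTemplate(crop, tmpl.astype(np.float32),
                                            cv2.TM_CCOEFF_NORMED)[0, 0])
            if score >= match_thresh and (best is None or score > best[0]):
                best = (score, arrow_type, (cy, cx))
    return best
```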
7. The method for identifying a ground arrow according to claim 6, wherein the step of performing pixel-level column traversal on the picture from which the ground arrows and the lane lines have been extracted, scaling the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification with the turning arrow matching template to identify the turning arrows further comprises:
classifying the turning arrow positions of all the template matching results according to the distance between the center point positions;
traversing the center point positions of all the template matching results, and calculating the distance between every two of them;
if the distance is smaller than a distance threshold, regarding the two template matching results as matching results of the same turning arrow and classifying them into one tuple; if the distance is larger than or equal to the distance threshold, regarding the two template matching results as matching results of different turning arrows and classifying them into different tuples;
and selecting, from each tuple, the arrow type corresponding to the maximum matching degree as the type of the turning arrow at that position.
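The position clustering and type selection of claim 7 might look as follows, assuming each template matching result is a (matching degree, arrow type, (row, col)) tuple such as the one returned by the previous sketch; the 30-pixel distance threshold is illustrative.

```python
def cluster_turning_arrows(matches, dist_thresh=30.0):
    """Group matching results whose center points lie within dist_thresh
    of each other, then keep one result per group: the arrow type with
    the maximum matching degree.

    matches -- list of (score, arrow_type, (row, col)) tuples
    """
    groups = []                               # each group is a list of matches
    for m in matches:
        _, _, (cy, cx) = m
        for g in groups:
            gy, gx = g[0][2]
            if ((cy - gy) ** 2 + (cx - gx) ** 2) ** 0.5 < dist_thresh:
                g.append(m)                   # same turning arrow: same tuple
                break
        else:
            groups.append([m])                # another turning arrow: new tuple
    # One recognised turning arrow per group: the best-scoring entry.
    return [max(g, key=lambda m: m[0]) for g in groups]
```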
8. A system for identifying a ground arrow, comprising:
the acquisition module is used for acquiring a ground surround view;
the extraction module is used for extracting the ground arrows and the lane lines from the ground surround view, and taking a turning arrow among the ground arrows as a turning arrow initial template;
the detection module is used for performing pixel-level row traversal on the picture from which the ground arrows and the lane lines have been extracted, and detecting all straight arrows among the ground arrows;
the rotating module is used for rotating the turning arrow initial template by a fixed angle to form a turning arrow matching template;
the identification module is used for performing pixel-level column traversal on the picture from which the ground arrows and the lane lines have been extracted, scaling the turning arrow to generate candidate frames of different sizes, and then performing template matching, position clustering and turning arrow type identification with the turning arrow matching template to identify the turning arrows;
and the output module is used for merging the identified straight arrows and turning arrows to output the ground arrows in the ground surround view.
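Purely as an illustrative skeleton (not the patentee's implementation), the module layout of claim 8 could be mirrored by a class such as the following, with every name chosen here as a placeholder:

```python
class GroundArrowRecognitionSystem:
    """Skeleton mirroring the modules of claim 8; all names are placeholders."""

    def acquire(self):
        """Acquisition module: obtain the ground surround view."""
        raise NotImplementedError

    def extract(self, view):
        """Extraction module: return the picture with ground arrows and lane
        lines extracted, plus the turning arrow initial template."""
        raise NotImplementedError

    def detect_straight(self, picture):
        """Detection module: pixel-level row traversal for straight arrows."""
        raise NotImplementedError

    def rotate(self, template, fixed_angle):
        """Rotating module: form the turning arrow matching template."""
        raise NotImplementedError

    def identify_turning(self, picture, matching_template):
        """Identification module: column traversal, multi-scale candidate
        frames, template matching, position clustering, type identification."""
        raise NotImplementedError

    def output(self, straight, turning):
        """Output module: merge straight and turning arrows."""
        return list(straight) + list(turning)
```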
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for identifying a ground arrow according to any one of claims 1 to 7.
10. A ground arrow identification terminal, comprising: a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the identification terminal performs the method for identifying a ground arrow according to any one of claims 1 to 7.
11. The ground arrow identification terminal according to claim 10, wherein the ground arrow identification terminal comprises a vehicle-mounted terminal.
CN202110523788.4A 2021-05-13 2021-05-13 Ground arrow identification method, system, terminal and computer readable storage medium Active CN113158976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110523788.4A CN113158976B (en) 2021-05-13 2021-05-13 Ground arrow identification method, system, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110523788.4A CN113158976B (en) 2021-05-13 2021-05-13 Ground arrow identification method, system, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113158976A true CN113158976A (en) 2021-07-23
CN113158976B CN113158976B (en) 2024-04-02

Family

ID=76875258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110523788.4A Active CN113158976B (en) 2021-05-13 2021-05-13 Ground arrow identification method, system, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113158976B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013113011A1 (en) * 2012-01-26 2013-08-01 Telecommunication Systems, Inc. Natural navigational guidance
CN105825203A (en) * 2016-03-30 2016-08-03 大连理工大学 Ground arrowhead sign detection and identification method based on dotted pair matching and geometric structure matching
CN105913041A (en) * 2016-04-27 2016-08-31 浙江工业大学 Pre-marked signal lights based identification method
CN109948630A (en) * 2019-03-19 2019-06-28 深圳初影科技有限公司 Recognition methods, device, system and the storage medium of target sheet image
CN111414826A (en) * 2020-03-13 2020-07-14 腾讯科技(深圳)有限公司 Method, device and storage medium for identifying landmark arrow
CN111476157A (en) * 2020-04-07 2020-07-31 南京慧视领航信息技术有限公司 Lane guide arrow recognition method under intersection monitoring environment
CN112131963A (en) * 2020-08-31 2020-12-25 青岛秀山移动测量有限公司 Road marking line extraction method based on driving direction structural feature constraint
CN112183427A (en) * 2020-10-10 2021-01-05 厦门理工学院 Rapid extraction method for arrow-shaped traffic signal lamp candidate image area

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEI Jinyu (魏瑾瑜): "Research on Road Arrow Marking Detection and Recognition Algorithms", Chinese Master's Theses Full-text Database, Engineering Science and Technology II, pages 034-967 *

Also Published As

Publication number Publication date
CN113158976B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
Ghanem et al. Lane detection under artificial colored light in tunnels and on highways: an IoT-based framework for smart city infrastructure
Marzougui et al. A lane tracking method based on progressive probabilistic Hough transform
WO2019228211A1 (en) Lane-line-based intelligent driving control method and apparatus, and electronic device
US11017244B2 (en) Obstacle type recognizing method and apparatus, device and storage medium
Yuan et al. Robust lane detection for complicated road environment based on normal map
WO2018205467A1 (en) Automobile damage part recognition method, system and electronic device and storage medium
KR101596299B1 (en) Apparatus and Method for recognizing traffic sign board
An et al. Real-time lane departure warning system based on a single FPGA
Danescu et al. Detection and classification of painted road objects for intersection assistance applications
US20220207889A1 (en) Method for recognizing vehicle license plate, electronic device and computer readable storage medium
Ye et al. A two-stage real-time YOLOv2-based road marking detector with lightweight spatial transformation-invariant classification
CN112418216A (en) Method for detecting characters in complex natural scene image
KR20190053355A (en) Method and Apparatus for Recognizing Road Symbols and Lanes
CN113299073B (en) Method, device, equipment and storage medium for identifying illegal parking of vehicle
WO2023155581A1 (en) Image detection method and apparatus
CN109960959B (en) Method and apparatus for processing image
Wang et al. Fast vanishing point detection method based on road border region estimation
CN117218622A (en) Road condition detection method, electronic equipment and storage medium
Liu et al. Vision-based environmental perception for autonomous driving
Annamalai et al. An optimized computer vision and image processing algorithm for unmarked road edge detection
Al Mamun et al. Efficient lane marking detection using deep learning technique with differential and cross-entropy loss.
Zhu et al. Moment-based multi-lane detection and tracking
Hwang et al. Optimized clustering scheme-based robust vanishing point detection
CN113158976B (en) Ground arrow identification method, system, terminal and computer readable storage medium
Heidarizadeh Preprocessing Methods of Lane Detection and Tracking for Autonomous Driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Song Jing, Wu Zizhang, Wang Fan
Inventor before: Song Jing, Xiang Weixing, Wu Zizhang, Wang Fan
GR01 Patent grant