CN110956081B - Method and device for identifying position relationship between vehicle and traffic marking and storage medium - Google Patents


Info

Publication number
CN110956081B
CN110956081B (application CN201910976859.9A)
Authority
CN
China
Prior art keywords
line segment
vehicle
line
pixel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910976859.9A
Other languages
Chinese (zh)
Other versions
CN110956081A (en)
Inventor
李永敬
王明真
刘尚武
古明辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Starcart Technology Co ltd
Original Assignee
Guangdong Starcart Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Starcart Technology Co ltd filed Critical Guangdong Starcart Technology Co ltd
Priority to CN201910976859.9A priority Critical patent/CN110956081B/en
Publication of CN110956081A publication Critical patent/CN110956081A/en
Application granted granted Critical
Publication of CN110956081B publication Critical patent/CN110956081B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; scene-specific elements
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 - Detecting or categorising vehicles
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of vehicle state identification based on visual image processing, and discloses a method for identifying the positional relationship between a vehicle and a traffic marking, comprising the following steps: performing semantic segmentation on an image according to preset categories to obtain pixel sets of a plurality of categories; performing line extraction on the pixel sets; screening, from the resulting line segments and according to the position information of the vehicle in the image, a first line segment belonging to the vehicle and representing the position of a first pixel group; and identifying the positional relationship between the vehicle and the traffic marking according to the positional relationship between the first line segment and a third line segment, and outputting the identification result. Some technical effects of the present disclosure: a first line segment representing the wheel connecting line is obtained by processing the image, and its position is compared with that of a third line segment representing the traffic marking, so that the positional relationship between the vehicle and the traffic marking is obtained without processing the whole vehicle body.

Description

Method and device for identifying position relationship between vehicle and traffic marking and storage medium
Technical Field
The disclosure relates to the technical field of vehicle state identification based on visual image processing, and in particular to a method for identifying the positional relationship between a vehicle and a traffic marking, and to a method for determining illegal line pressing.
Background
In the technical field of vehicle state recognition based on visual image processing, there are many research results on judging, by processing visual images, whether a vehicle presses a line while running (generally understood as at least one wheel of the vehicle crossing a specific traffic marking, such as a white solid line or a yellow solid line). For example:
The patent document with publication number CN109598943A, entitled "Method, device and system for monitoring vehicle violation", proposes identifying and classifying vehicle attributes and traffic-marking attributes by means of a CNN (Convolutional Neural Network), and judging whether the vehicle violates the rules according to the relevance and positional relationship between the two sets of attributes.
The patent document with publication number CN107358170A, entitled "A method for identifying vehicle line-pressing violations based on mobile machine vision", proposes identifying the lane lines in view through a Hough transform, and then analysing the relationship between a lane line and the rectangular shadow area at the bottom of the vehicle to judge whether line-pressing behaviour exists.
A master's thesis on machine-vision-based line-pressing detection from a national university of science and technology (author Hu Peng) proposes a comprehensive judgment based first on whether the distance between the centroid of the line-pressing object and the line meets a standard, and then on the overlapping region with the pressed line.
The foregoing approaches basically determine whether a vehicle presses a line from the positional relationship between the vehicle and the traffic marking. A drawback of the prior art is that most techniques rely on accurate recognition of the edge profile of the entire vehicle body and require large-area image processing of the pixels related to the body, which is inefficient.
Disclosure of Invention
In order to solve at least one of the foregoing technical problems, in one aspect the disclosure provides a method for identifying the positional relationship between a vehicle and a traffic marking, comprising the following steps: acquiring an image containing the vehicle and the traffic marking; performing semantic segmentation on the image according to preset categories to obtain pixel sets of a plurality of categories, wherein the pixel sets comprise a first class set comprising a first pixel group connecting the two wheels on one side of the vehicle, and a third class set comprising a third pixel group corresponding to the traffic marking; performing line extraction on the pixel sets to obtain a plurality of line segments representing the positions of different pixel groups, the line segments including a third line segment representing the position of the third pixel group; screening, from the plurality of line segments and according to the position information of the vehicle in the image, a first line segment belonging to the vehicle and representing the position of the first pixel group; and identifying the positional relationship between the vehicle and the traffic marking according to the positional relationship between the first line segment and the third line segment, and outputting an identification result.
Preferably, the pixel sets further comprise a second class set comprising a second pixel group connecting the two wheels at one end of the vehicle; a second line segment belonging to the vehicle and representing the position of the second pixel group is screened from the plurality of line segments according to the position information of the vehicle in the image; and the positional relationship between the vehicle and the traffic marking is identified according to the positional relationship between the second line segment and the third line segment, and an identification result is output.
Preferably, the image is processed by a deep learning target detection method to obtain information on the minimum bounding rectangle of the vehicle in the image, and this information is used as the position information of the vehicle in the image in subsequent operations.
Preferably, "performing line extraction on the pixel sets" comprises the steps of: performing pixel separation on the pixel set of each category to obtain a corresponding binary image; performing a closing operation and a region-connecting operation on the binary image to obtain a plurality of connected regions; filtering the connected regions by area according to a first set threshold to obtain processed connected blocks; and performing a straight-line fitting operation on the edge of each connected block, and obtaining the information of the line segment corresponding to each connected block from the position information of the minimum bounding rectangle of the connected block.
Preferably, the information of the line segment includes the coordinates of both endpoints of the line segment and an equation representing the straight line on which it lies.
Preferably, the first center point coordinates of the line segments extracted from the first class set are obtained, and it is judged whether each first center point lies within the minimum bounding rectangle; if so, the line segment whose first center point is closest to the center of the minimum bounding rectangle is selected and confirmed as the first line segment belonging to the vehicle; the line segments extracted from the second class set are processed similarly to confirm the second line segment belonging to the vehicle.
In yet another aspect, the present disclosure provides a method for determining illegal line pressing by applying the identification method, comprising the following steps: acquiring the coordinates of a first intersection point of the first line segment and the third line segment; judging whether the distance from the first intersection point to the first center point of the first line segment is smaller than a second set threshold; if so, judging whether the distance from the first intersection point to the nearest endpoint of the third line segment is smaller than a third set threshold; and if so, generating a line-pressing prompt message.
Preferably, the coordinates of a second intersection point of the second line segment and the third line segment are acquired; it is judged whether the distance from the second intersection point to the second center point of the second line segment is smaller than a fourth set threshold; if so, it is judged whether the distance from the second intersection point to the nearest endpoint of the third line segment is smaller than a fifth set threshold; and if so, a line-pressing prompt message is generated.
In yet another aspect, the present disclosure proposes a violation alert device that performs the determination method, comprising a camera, a processor and a display; the camera captures, in real time, images containing the vehicle ahead and the traffic marking; the processor executes the determination method and sends the line-pressing prompt message to the display; and the display shows the line-pressing prompt message.
In yet another aspect, the present disclosure provides an identification device for the positional relationship between a vehicle and a traffic marking, comprising: an image acquisition module for acquiring images containing the vehicle and the traffic marking; a semantic segmentation module for dividing the pixels of the image into pixel sets of a plurality of categories by semantic segmentation, wherein the pixel sets comprise a first class set comprising a first pixel group connecting the two wheels on one side of the vehicle, a second class set comprising a second pixel group connecting the two wheels at one end of the vehicle, and a third class set comprising a third pixel group corresponding to the traffic marking; a line extraction module for performing line extraction on the pixel sets to obtain a plurality of line segments representing the positions of different pixel groups, the line segments including a third line segment representing the position of the third pixel group; a line attribution judging module for screening, from the plurality of line segments and according to the position information of the vehicle in the image, a first line segment and a second line segment belonging to the vehicle and representing the positions of the first and second pixel groups respectively; and an identification module for identifying the positional relationship between the vehicle and the traffic marking according to the positional relationships among the first line segment, the second line segment and the third line segment, and outputting an identification result.
In yet another aspect, the present disclosure presents a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the identification method.
Some technical effects of the present disclosure: a first line segment representing the position of the wheel connecting line is obtained by processing the image, and its position is compared with that of a third line segment representing the traffic marking, thereby yielding the positional relationship between the vehicle and the traffic marking.
Drawings
For a better understanding of the technical solutions of the present disclosure, reference may be made to the following drawings, which aid the description of the prior art and of the embodiments. The drawings selectively illustrate products or methods involved in the prior art or in some embodiments of the present disclosure. Their basic information is as follows:
FIG. 1 is a schematic diagram illustrating positions of a first pixel group, a second pixel group, and a third pixel group according to an embodiment;
FIG. 2 is a schematic diagram of a minimum bounding rectangle in one embodiment;
FIG. 3 is a schematic diagram of positions of a first line segment, a second line segment, and a third line segment in an embodiment.
In the above figures, the reference numerals and the corresponding technical features are as follows:
10-vehicle, 11-minimum bounding rectangle, 20-first pixel group, 21-first line segment, 30-second pixel group, 31-second line segment, 40-third pixel group, 41-third line segment.
Detailed Description
The technical means and technical effects of the present disclosure are further described below. Evidently, the examples (or embodiments) provided are only some, not all, of the embodiments covered by the present disclosure. All other embodiments obtained by those skilled in the art without inventive effort, based on the embodiments in this disclosure and the explicit or implicit teaching of the drawings, fall within the scope of this disclosure.
In one aspect, the present disclosure proposes a method of identifying the positional relationship between a vehicle and a traffic marking, comprising the steps of: acquiring an image containing the vehicle and the traffic marking; performing semantic segmentation on the image according to preset categories to obtain pixel sets of a plurality of categories, wherein the pixel sets comprise a first class set comprising a first pixel group connecting the two wheels on one side of the vehicle, and a third class set comprising a third pixel group corresponding to the traffic marking; performing line extraction on the pixel sets to obtain a plurality of line segments representing the positions of different pixel groups, the line segments including a third line segment representing the position of the third pixel group; screening, from the plurality of line segments and according to the position information of the vehicle in the image, a first line segment belonging to the vehicle and representing the position of the first pixel group; and identifying the positional relationship between the vehicle and the traffic marking according to the positional relationship between the first line segment and the third line segment, and outputting an identification result.
It should be noted that the identification method is suitable for determining vehicle line-pressing behaviour, but is not limited thereto; it is also suitable, for example, for determining the relative position of the vehicle and the road. Accordingly, the traffic markings are not limited to lane lines and may be other types of markings on the road, including broken lines and solid lines.
Generally, to determine the positional relationship between a vehicle and a traffic marking, the conventional approach is to obtain the contour of a three-dimensional model of the vehicle by image processing and compute the relationship from the position coordinates of the contour and of the marking, or to compute the projection of the vehicle on the road by image processing. Such methods involve a large amount of computation, and differences in vehicle body form make it difficult to maintain consistently high accuracy. The present scheme instead focuses on the positions of the wheels: a first line segment representing the wheel connecting line is obtained by processing the image and compared with a third line segment representing the traffic marking, thereby obtaining the positional relationship between the vehicle and the traffic marking.
In the identification method, the image may be acquired in various ways: for example, video of the road and vehicles may be recorded by a fixed camera and then processed to obtain the required image; vehicles on the road may be photographed by a fixed camera to obtain the required image directly; or, of course, the image may be obtained from video or pictures captured by a moving camera.
When the identification method is executed, a pre-trained deep learning model needs to be called; the model identifies the pixel connecting line between the two wheels on one side of the vehicle (such as the left front and left rear wheels, or the right front and right rear wheels) in the image as one class of pixels. The connecting line is a line with a certain pixel width, which a person skilled in the art can customise, for example to 5-10 pixels. The deep learning model may adopt architectures such as DBN (Deep Belief Network), CNN (Convolutional Neural Network), RNN (Recurrent Neural Network) or LSTM (Long Short-Term Memory network). When training these models, the pixels in the image first need to be classified in a custom way: generally, the pixels of the "pixel connecting line between the two wheels on one side of the vehicle" (the first pixel group) may form a first class set, the pixels of the "pixel connecting line between the two wheels at one end of the vehicle" may form a second class set, the pixels corresponding to at least one traffic marking (such as a lane line) may form a third class set, and the pixels corresponding to the image background may form a fourth class set. After the different categories in the image are classified and labelled, sample data are obtained and used as the basis for training the deep learning model. The remaining training steps can follow a variety of existing techniques and, since they are not the focus of the present application, are not described here.
In one embodiment, the semantic segmentation method employs FCN (Fully Convolutional Networks for semantic segmentation). Those skilled in the art may of course employ other semantic segmentation methods, such as region-based approaches (e.g. R-CNN variants) or DeepLab (a CNN-based semantic segmentation model developed by Google on TensorFlow).
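Whichever segmentation model is used, its output can be treated as a per-pixel label map that is then split into the per-class pixel sets described above. A minimal sketch in plain Python (the class ids 1 and 3 for the first and third class sets, and the toy label map, are illustrative assumptions, not values from the patent):

```python
def split_class_masks(label_map, class_ids):
    """Split a 2-D map of per-pixel class ids into one binary mask per class id."""
    masks = {}
    for cid in class_ids:
        masks[cid] = [[1 if px == cid else 0 for px in row] for row in label_map]
    return masks

# Toy 4x6 label map: 1 = wheel connecting line (first class set),
# 3 = traffic marking (third class set), 0 = background
label_map = [
    [0, 0, 0, 3, 0, 0],
    [1, 1, 1, 3, 0, 0],
    [0, 0, 0, 3, 0, 0],
    [0, 0, 0, 3, 0, 0],
]
masks = split_class_masks(label_map, [1, 3])
```

Each resulting mask is the binary image on which the line extraction of the later steps operates.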
It should be noted that the pixel sets comprise sets of several categories, such as the first class set and the second class set. Each class set may in turn comprise several pixel groups; for example, the first class set comprises the first pixel group, meaning that in some embodiments the first class set may also contain pixel groups at other locations in the image, which the deep learning model interprets as having some of the same properties as the first pixel group. This does not affect the result of the identification method described in this application, and those skilled in the art can also largely avoid the situation by improving the accuracy of target matching when training the deep learning model.
In one embodiment, the pixels of the first, third and fourth class sets constitute the aforementioned pixel sets. In another embodiment, the pixels of the first, second, third and fourth class sets constitute them. Of course, the pixel sets may also include more classes of pixels.
Line extraction is performed on the pixel sets in order to obtain, for each pixel group, line segment information representing its position; this information generally includes the position information of the segment, which reflects the position of the pixel group. With the line segment information available, the positional relationship between different pixel groups can be judged conveniently.
Since in practice there may be several vehicles, and hence several first pixel groups, in the image, "screening the first line segment belonging to the vehicle from the plurality of line segments" finds, for a specific vehicle, the corresponding first pixel group and its first line segment. The corresponding first pixel group and first line segment can of course also be found for all vehicles in the image at the same time. The position information of the vehicle in the image may be obtained by a variety of existing target detection techniques, which are not expanded on here as they are not the focus of the present application. With the position information of the vehicle, the first line segment corresponding to the vehicle can be screened out, and the positional relationship between the vehicle and the traffic marking can be identified through the positional relationship between the first line segment and the third line segment.
The recognition result can be output in various ways: for example, it can be presented by voice, text or graphics, prompting that the vehicle is above a certain traffic marking, that the vehicle is close to a certain traffic marking, whether the vehicle violates traffic regulations, and so on. The result may of course also be output to memory merely as a number or code, without being presented on a display interface.
In one embodiment, the pixel sets further comprise a second class set comprising a second pixel group connecting the two wheels at one end of the vehicle; a second line segment belonging to the vehicle and representing the position of the second pixel group is screened from the plurality of line segments according to the position information of the vehicle in the image; and the positional relationship between the vehicle and the traffic marking is identified according to the positional relationship between the second line segment and the third line segment, and an identification result is output.
The one side of the vehicle refers to the left side or the right side of the vehicle, and the one end of the vehicle refers to the front end or the rear end of the vehicle.
As shown in fig. 1, fig. 1 is an image containing the vehicle 10; after semantic segmentation of the image, a first class set including the first pixel group 20, a second class set including the second pixel group 30, and a third class set including the third pixel group 40 are obtained. Fig. 2 shows the result of deep learning target detection on the image: the minimum bounding rectangle 11 of the vehicle. Fig. 3 shows the main elements obtained after line extraction of the pixel sets, namely the first line segment 21, the second line segment 31 and the third line segment 41. The traffic markings illustrated here are lane lines. It follows that the positional relationship between the vehicle and the traffic marking (the lane line in the drawing) can be determined from one aspect by the positional relationship between the first line segment 21 and the third line segment 41; similarly, it can be determined from another aspect by the positional relationship between the second line segment 31 and the third line segment 41.
In one embodiment, the image is processed by a deep learning target detection method to obtain information on the minimum bounding rectangle of the vehicle, which serves as the position information of the vehicle in subsequent operations. For example, an anchor-free deep learning target detection method may be employed, whose processing mainly comprises an image encoding part and an image decoding part. The encoding part mainly adopts VGG16 as the backbone, formed by repeatedly stacking 3x3 small convolution kernels and 2x2 max-pooling layers; it is a classical, commonly used classification network and helps obtain semantic information over a large receptive field. The decoding part upsamples the encoded features, implemented by successive 2x deconvolution operations, and performs regression prediction on the decoded features with a 1x1 convolution kernel. The detection result, shown in fig. 2, is the minimum bounding rectangle 11 of the vehicle in the image. When there are several vehicles in the image, several minimum bounding rectangles are obtained.
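For the later steps, the detector's output is simply an axis-aligned rectangle per vehicle. As a hypothetical illustration of that representation (the embodiment above regresses the rectangle directly from decoded features; deriving it from a set of vehicle pixels, as below, is just an equivalent way to show the data structure):

```python
def min_bounding_rect(pixels):
    """Axis-aligned minimum bounding rectangle (x0, y0, x1, y1) of a set of (x, y) pixels."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return (min(xs), min(ys), max(xs), max(ys))

# Toy set of (x, y) pixels belonging to one detected vehicle
rect = min_bounding_rect([(3, 7), (9, 2), (5, 5)])
```

The rectangle `(x0, y0, x1, y1)` is the form used for the segment screening described below.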
After semantic segmentation, a plurality of pixel sets are obtained, but the segmentation result alone is insufficient for analysing the positions of the different pixel groups; a line extraction operation must be performed so that the pixel groups are represented as line segments. In one embodiment, "performing line extraction on the pixel sets" comprises the steps of: performing pixel separation on the pixel set of each category to obtain a corresponding binary image; performing a closing operation and a region-connecting operation on the binary image to obtain a plurality of connected regions; filtering the connected regions by area according to a first set threshold to obtain processed connected blocks; and performing a straight-line fitting operation on the edge of each connected block, and obtaining the information of the line segment corresponding to each connected block from the position information of the minimum bounding rectangle of the connected block. In this embodiment, the first set threshold may be 5% or 8% of the image area occupied by the vehicle; those skilled in the art may use other values as needed. A connected block here is a connected region that remains after the area-filtering step. The purpose of the straight-line fitting is to regularise the edges of the connected blocks and, in addition, to recover some occluded edges.
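The connect-filter-fit part of this pipeline can be sketched in plain Python (an assumption-laden simplification: 4-connectivity, the morphological closing step omitted for brevity, the area threshold given as an absolute pixel count, and a least-squares fit over component pixels rather than over edge pixels):

```python
from collections import deque

def connected_components(mask):
    """4-connected components of a binary mask; returns lists of (row, col) pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def fit_segment(comp):
    """Least-squares line through a component's pixels, clipped to its x-extent;
    returns two (y, x) endpoints."""
    n = len(comp)
    mx = sum(x for _, x in comp) / n
    my = sum(y for y, _ in comp) / n
    sxx = sum((x - mx) ** 2 for _, x in comp)
    sxy = sum((x - mx) * (y - my) for y, x in comp)
    if sxx == 0:  # vertical component
        ys = [y for y, _ in comp]
        return (min(ys), mx), (max(ys), mx)
    k = sxy / sxx
    xs = [x for _, x in comp]
    x0, x1 = min(xs), max(xs)
    return (my + k * (x0 - mx), x0), (my + k * (x1 - mx), x1)

def extract_segments(mask, min_area):
    """Area-filter the connected components, then fit one segment per surviving block."""
    return [fit_segment(c) for c in connected_components(mask) if len(c) >= min_area]

# Toy mask: a horizontal 5-pixel run in row 2, plus one isolated noise pixel
mask = [[0] * 8 for _ in range(5)]
for c in range(1, 6):
    mask[2][c] = 1
mask[4][7] = 1  # removed by the area filter
segs = extract_segments(mask, 3)
```

The noise pixel is dropped by the area filter, and the remaining block yields a single segment along row 2.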
In one embodiment, the information of the line segment includes the coordinates of both endpoints and an equation representing the straight line on which it lies. In other embodiments, the information may include the midpoint coordinates or an equation representing the central axis of the segment.
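The endpoint-plus-equation representation can be computed directly from the two endpoints. A small sketch (the coefficient convention a*x + b*y + c = 0 is an assumption; the patent does not fix a particular form):

```python
def segment_info(p0, p1):
    """Midpoint and line coefficients (a, b, c), with a*x + b*y + c = 0, from two endpoints."""
    (x0, y0), (x1, y1) = p0, p1
    midpoint = ((x0 + x1) / 2, (y0 + y1) / 2)
    a, b = y1 - y0, x0 - x1        # normal vector of the segment direction
    c = -(a * x0 + b * y0)         # make the line pass through p0
    return midpoint, (a, b, c)

mid, (a, b, c) = segment_info((0, 0), (4, 2))
```

Both endpoints satisfy the returned equation, so the triple can be used interchangeably with the endpoint pair in later intersection tests.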
Considering that there may be several vehicles in the image, and hence several first and second line segments, when identifying the positional relationship for a particular vehicle it is necessary to find the first and second line segments belonging to that vehicle. In one embodiment, the first center point coordinates of the line segments extracted from the first class set are obtained, and it is judged whether each first center point lies within the minimum bounding rectangle; if so, the line segment whose first center point is closest to the center of the minimum bounding rectangle is selected and confirmed as the first line segment belonging to the vehicle; the line segments extracted from the second class set are processed similarly to confirm the second line segment belonging to the vehicle. As shown in fig. 3, the first center point is the midpoint of the first line segment 21, and whether the first line segment 21 belongs to the vehicle is determined by checking whether this point lies within the minimum bounding rectangle 11; the first line segment 21 in the figure evidently belongs to the vehicle. Similarly, the second center point of the second line segment 31, i.e. its midpoint, can be obtained, and it can be determined that the second line segment 31 also belongs to the vehicle.
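The midpoint-in-rectangle screening can be sketched as follows (segments given as (x, y) endpoint pairs, the rectangle as (x0, y0, x1, y1); the function name and data layout are illustrative):

```python
def pick_vehicle_segment(segments, rect):
    """Keep segments whose midpoint lies inside the rectangle, then pick the one
    whose midpoint is closest to the rectangle centre."""
    rx0, ry0, rx1, ry1 = rect
    cx, cy = (rx0 + rx1) / 2, (ry0 + ry1) / 2
    best, best_d = None, None
    for p0, p1 in segments:
        mx, my = (p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2
        if rx0 <= mx <= rx1 and ry0 <= my <= ry1:
            d = (mx - cx) ** 2 + (my - cy) ** 2   # squared distance suffices for ranking
            if best_d is None or d < best_d:
                best, best_d = (p0, p1), d
    return best

inside = ((2, 2), (4, 4))       # midpoint (3, 3) lies in the rectangle
outside = ((20, 20), (30, 30))  # midpoint (25, 25) lies outside
best = pick_vehicle_segment([inside, outside], (0, 0, 10, 10))
```

The same routine is applied once per class set to pick the first and second line segments of a given vehicle.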
In yet another aspect, the present disclosure provides a method for judging violation line-pressing by applying the identification method, including the following steps: acquiring the coordinate of the first intersection point of the first line segment and the third line segment; judging whether the distance from the first intersection point to the first center point coordinate of the first line segment is smaller than a second set threshold; if so, judging whether the distance from the first intersection point to the nearest endpoint of the third line segment is smaller than a third set threshold; if so, generating a line-pressing prompt message. In one case, as can be seen from figs. 1, 2 and 3, the first line segment 21 and the third line segment 41 have no intersection point, but the second line segment 31 and the third line segment 41 do intersect, so it can be determined that line-pressing occurred while the vehicle was running. The significance of the second and third set thresholds is that they allow the line-pressing condition to be identified more accurately (because in some cases the first line segment cannot perfectly reflect the position of the wheels on one side, a certain error exists). Specific values of the second and third set thresholds can be set according to actual needs; for example, they may take values of 40% and 45% of the length of the first line segment, respectively.
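A minimal sketch of this judgment, under our own naming and with the 40%/45% figures from the example as defaults: the crossing point of the wheel-line (first) segment and the marking (third) segment is computed first, then the two distance tests are applied. The intersection routine is the standard parametric segment–segment test, not taken from the patent:

```python
import math

def segment_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1p2 and p3p4, or None if they do not
    cross (standard parametric line-line test restricted to the segments)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:
        return None                                   # parallel or collinear
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def pressing_line(first_seg, third_seg, k_mid=0.40, k_end=0.45):
    """Line-pressing test as sketched from the description: the segments must
    cross, the crossing must lie within k_mid * len(first_seg) of the first
    segment's midpoint, and within k_end * len(first_seg) of the nearest
    endpoint of the third segment."""
    hit = segment_intersection(*first_seg, *third_seg)
    if hit is None:
        return False
    (a, b), (c, d) = first_seg
    length = math.hypot(c - a, d - b)
    mid = ((a + c) / 2, (b + d) / 2)
    d_mid = math.hypot(hit[0] - mid[0], hit[1] - mid[1])
    d_end = min(math.hypot(hit[0] - e[0], hit[1] - e[1]) for e in third_seg)
    return d_mid < k_mid * length and d_end < k_end * length
```

For instance, a horizontal wheel-line segment crossing a vertical marking segment near both midpoints passes the test, while a wheel-line segment ending short of the marking does not.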
Similarly, in one embodiment, the coordinate of the second intersection point of the second line segment and the third line segment is acquired; it is judged whether the distance from the second intersection point to the second center point coordinate of the second line segment is smaller than a third set threshold; if so, it is judged whether the distance from the second intersection point to the nearest endpoint of the third line segment is smaller than a fifth set threshold; if so, a line-pressing prompt message is generated. This embodiment determines whether the vehicle is pressing the line on the basis of another principle.
In yet another aspect, the present disclosure proposes a violation alert device that performs the judging method, comprising a camera, a processor, and a display; the camera is used for capturing, in real time, images containing the vehicle ahead; the processor is used for executing the judging method and sending the line-pressing prompt message to the display; and the display displays the line-pressing prompt message.
In yet another aspect, the present disclosure provides a device for identifying the positional relationship between a vehicle and a traffic marking, including: an image acquisition module, used for acquiring an image containing the vehicle and the traffic marking; a semantic segmentation module, used for dividing the pixels of the image into pixel sets of a plurality of classes by means of semantic segmentation, wherein the pixel sets comprise a first class set comprising a first pixel group connecting the two wheels on one side of the vehicle, a second class set comprising a second pixel group connecting the two wheels at one end of the vehicle, and a third class set comprising a third pixel group corresponding to the traffic marking; a line extraction module, used for performing line extraction on the pixel sets to obtain a plurality of line segments representing the positions of the different pixel groups, the line segments including a third line segment representing the position of the third pixel group; a line attribution judging module, used for screening, from the plurality of line segments and according to the position information of the vehicle in the image, the first line segment and the second line segment belonging to the vehicle, which represent the positions of the first pixel group and the second pixel group respectively; and an identification module, used for identifying the positional relationship between the vehicle and the traffic marking according to the positional relationships among the first, second and third line segments, and outputting the identification result.
In yet another aspect, the present disclosure presents a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the identification method. It will be appreciated by those skilled in the art that all or part of the steps in the embodiments may be implemented by a computer program instructing the related hardware, and the program may be stored in a computer-readable medium, which may include various media capable of storing program code, such as a flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
Combining the various embodiments or features mentioned herein with one another, where no conflict arises, to form additional alternative embodiments is within the knowledge and ability of one skilled in the art; such alternative embodiments, formed by a finite number of feature combinations and not listed one by one, still fall within the scope of the present disclosure, as would be understood or inferred by one skilled in the art in view of the drawings and the foregoing.
In addition, each embodiment is described with its own emphasis; where a detail is not explicitly described, it may be understood with reference to the prior art or to other related description herein.
It is emphasized that the embodiments described above are merely exemplary and preferred embodiments of the present disclosure, and are merely used to describe and explain the technical solutions of the present disclosure for the convenience of the reader to understand and not to limit the scope or application of the present disclosure. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present disclosure, are intended to be encompassed within the scope of the present disclosure.

Claims (8)

1. A method for identifying the positional relationship between a vehicle and a traffic marking, characterized by comprising the following steps:
acquiring an image containing the vehicle and the traffic marking;
performing semantic segmentation on the image according to preset categories to obtain pixel sets of a plurality of categories;
the pixel sets comprise a first class set comprising a first pixel group connecting the two wheels on one side of the vehicle, and a third class set comprising a third pixel group corresponding to the traffic marking; performing pixel separation on the pixel set of each category to obtain a corresponding binary image; performing a morphological closing operation and a region-connecting operation on the binary image to obtain a plurality of connected regions; performing area filtering on the connected regions according to a first set threshold to obtain processed connected blocks; performing a straight-line fitting operation on the edge of each connected block, and obtaining the information of the line segment corresponding to each connected block according to the position information of the minimum circumscribed rectangle of the connected block;
acquiring the first center point coordinates of the line segments extracted from the first class set, and judging whether a first center point coordinate is located within the minimum circumscribed rectangle; if so, selecting the line segment whose first center point coordinate has the smallest distance to the center of the minimum circumscribed rectangle, and confirming it as the first line segment belonging to the vehicle;
performing line extraction on the pixel sets to obtain a plurality of line segments representing the positions of the different pixel groups; the line segments include a third line segment representing the position of the third pixel group;
screening, from the plurality of line segments and according to the position information of the vehicle in the image, the first line segment belonging to the vehicle, used for representing the position of the first pixel group;
and identifying the positional relationship between the vehicle and the traffic marking according to the positional relationship between the first line segment and the third line segment, and outputting the identification result.
2. The identification method according to claim 1, characterized in that:
the pixel sets further comprise a second class set comprising a second pixel group connecting the two wheels at one end of the vehicle;
a second line segment belonging to the vehicle is screened from the plurality of line segments according to the position information of the vehicle in the image, and is used for representing the position of the second pixel group;
and the positional relationship between the vehicle and the traffic marking is identified according to the positional relationship between the second line segment and the third line segment, and the identification result is output.
3. The identification method according to claim 2, characterized in that:
the image is processed by a deep-learning object detection method to obtain information of the minimum circumscribed rectangle of the vehicle in the image, and the information of the minimum circumscribed rectangle is used as the position information of the vehicle in the image for the subsequent operations.
4. A method for judging violation line-pressing using the identification method according to any one of claims 1 to 3, characterized by comprising the following steps:
acquiring the coordinate of the first intersection point of the first line segment and the third line segment; judging whether the distance from the first intersection point to the first center point coordinate of the first line segment is smaller than a second set threshold; if so, judging whether the distance from the first intersection point to the nearest endpoint of the third line segment is smaller than a third set threshold; if so, generating a line-pressing prompt message.
5. The judging method according to claim 4, characterized by further comprising the following steps:
acquiring the coordinate of the second intersection point of the second line segment and the third line segment; judging whether the distance from the second intersection point to the second center point coordinate of the second line segment is smaller than a third set threshold; if so, judging whether the distance from the second intersection point to the nearest endpoint of the third line segment is smaller than a fifth set threshold; if so, generating a line-pressing prompt message.
6. A violation alert device that performs the judging method according to claim 4, characterized in that:
it comprises a camera, a processor and a display;
the camera is used for capturing, in real time, images containing the vehicle ahead and the traffic marking;
the processor is used for executing the judging method according to claim 4 and sending the line-pressing prompt message to the display;
and the display is used for displaying the line-pressing prompt message.
7. A device for identifying the positional relationship between a vehicle and a traffic marking, characterized by comprising:
an image acquisition module, used for acquiring an image containing the vehicle and the traffic marking;
a semantic segmentation module, used for dividing the pixels of the image into pixel sets of a plurality of classes by means of semantic segmentation; the pixel sets comprise a first class set comprising a first pixel group connecting the two wheels on one side of the vehicle, a second class set comprising a second pixel group connecting the two wheels at one end of the vehicle, and a third class set comprising a third pixel group corresponding to the traffic marking; performing pixel separation on the pixel set of each category to obtain a corresponding binary image; performing a morphological closing operation and a region-connecting operation on the binary image to obtain a plurality of connected regions; performing area filtering on the connected regions according to a first set threshold to obtain processed connected blocks; performing a straight-line fitting operation on the edge of each connected block, and obtaining the information of the line segment corresponding to each connected block according to the position information of the minimum circumscribed rectangle of the connected block; acquiring the first center point coordinates of the line segments extracted from the first class set, and judging whether a first center point coordinate is located within the minimum circumscribed rectangle; if so, selecting the line segment whose first center point coordinate has the smallest distance to the center of the minimum circumscribed rectangle, and confirming it as the first line segment belonging to the vehicle;
a line extraction module, used for performing line extraction on the pixel sets to obtain a plurality of line segments representing the positions of the different pixel groups; the line segments include a third line segment representing the position of the third pixel group;
a line attribution judging module, used for screening, from the plurality of line segments and according to the position information of the vehicle in the image, the first line segment and the second line segment belonging to the vehicle, which represent the positions of the first pixel group and the second pixel group respectively;
an identification module, used for identifying the positional relationship between the vehicle and the traffic marking according to the positional relationships among the first line segment, the second line segment and the third line segment, and outputting the identification result.
8. A computer-readable storage medium having a computer program stored thereon, characterized in that: the computer program, when executed by a processor, implements the steps of the identification method according to any one of claims 1 to 3.
CN201910976859.9A 2019-10-14 2019-10-14 Method and device for identifying position relationship between vehicle and traffic marking and storage medium Active CN110956081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910976859.9A CN110956081B (en) 2019-10-14 2019-10-14 Method and device for identifying position relationship between vehicle and traffic marking and storage medium


Publications (2)

Publication Number Publication Date
CN110956081A CN110956081A (en) 2020-04-03
CN110956081B true CN110956081B (en) 2023-05-23

Family

ID=69975643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910976859.9A Active CN110956081B (en) 2019-10-14 2019-10-14 Method and device for identifying position relationship between vehicle and traffic marking and storage medium

Country Status (1)

Country Link
CN (1) CN110956081B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257539B * 2020-10-16 2024-06-14 Guangzhou University Method, system and storage medium for detecting position relationship between vehicle and lane line
CN112418183A * 2020-12-15 2021-02-26 Guangzhou Xiaopeng Autopilot Technology Co., Ltd. Parking lot element extraction method and device, electronic equipment and storage medium
CN113238560A * 2021-05-24 2021-08-10 Zhuhai Amicro Semiconductor Co., Ltd. Robot map rotating method based on line segment information
CN115830353A * 2021-09-17 2023-03-21 Beijing Geekplus Technology Co., Ltd. Line segment matching method and device, computer equipment and storage medium
CN114842430B * 2022-07-04 2022-09-09 Jiangsu Zilang Automobile Group Co., Ltd. Vehicle information identification method and system for road monitoring
CN115953418B * 2023-02-01 2023-11-07 First Research Institute of the Ministry of Public Security Notebook area stripping method, storage medium and device in security inspection CT three-dimensional image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110077399A * 2019-04-09 2019-08-02 Motovis Intelligent Technology (Shanghai) Co., Ltd. Vehicle collision avoidance method based on fused road-marking and wheel detection
CN110299028A * 2019-07-31 2019-10-01 Shenzhen Jieshun Science and Technology Industry Co., Ltd. Method, apparatus, device and readable storage medium for parking line-pressing detection

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108572183B * 2017-03-08 2021-11-30 Tsinghua University Inspection apparatus and method of segmenting vehicle image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deng Chao. Research on Digital Image Processing and Pattern Recognition. Geological Publishing House, 2018, p. 122, paragraph 2. *

Also Published As

Publication number Publication date
CN110956081A (en) 2020-04-03

Similar Documents

Publication Publication Date Title
CN110956081B (en) Method and device for identifying position relationship between vehicle and traffic marking and storage medium
CN112528878B (en) Method and device for detecting lane line, terminal equipment and readable storage medium
WO2022126377A1 (en) Traffic lane line detection method and apparatus, and terminal device and readable storage medium
CN112132156A (en) Multi-depth feature fusion image saliency target detection method and system
CN111627057B (en) Distance measurement method, device and server
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN111191482B (en) Brake lamp identification method and device and electronic equipment
CN115131634A (en) Image recognition method, device, equipment, storage medium and computer program product
CN112613434A (en) Road target detection method, device and storage medium
CN109635701B (en) Lane passing attribute acquisition method, lane passing attribute acquisition device and computer readable storage medium
CN111062347A (en) Traffic element segmentation method in automatic driving, electronic device and storage medium
CN112784675A (en) Target detection method and device, storage medium and terminal
CN116071557A (en) Long tail target detection method, computer readable storage medium and driving device
CN111144361A (en) Road lane detection method based on binaryzation CGAN network
CN116129380A (en) Method and system for judging driving lane of automatic driving vehicle, vehicle and storage medium
CN112733864A (en) Model training method, target detection method, device, equipment and storage medium
CN114092818B (en) Semantic segmentation method and device, electronic equipment and storage medium
CN113160217B (en) Method, device, equipment and storage medium for detecting circuit foreign matters
CN110781863A (en) Method and device for identifying position relation between vehicle and local area and storage medium
JP2021152826A (en) Information processing device, subject classification method, and subject classification program
CN117576416B (en) Workpiece edge area detection method, device and storage medium
CN118247495B (en) Target identification method and device for high-resolution video spliced by multiple cameras
CN116434151B (en) Pavement foreign matter identification method, device, computer equipment and storage medium
CN115063594B (en) Feature extraction method and device based on automatic driving
CN111738012B (en) Method, device, computer equipment and storage medium for extracting semantic alignment features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant