CN110956081A - Method and device for identifying position relation between vehicle and traffic marking and storage medium


Info

Publication number
CN110956081A
CN110956081A (application CN201910976859.9A)
Authority
CN
China
Prior art keywords
line segment
vehicle
line
image
pixels
Prior art date
Legal status
Granted
Application number
CN201910976859.9A
Other languages
Chinese (zh)
Other versions
CN110956081B (en)
Inventor
李永敬
王明真
刘尚武
古明辉
Current Assignee
Guangdong Starcart Technology Co ltd
Original Assignee
Guangdong Starcart Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Starcart Technology Co ltd
Priority claimed from CN201910976859.9A
Publication of CN110956081A
Application granted
Publication of CN110956081B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08: Detecting or categorising vehicles
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention relates to the technical field of vehicle state recognition based on visual image processing, and discloses a method for recognizing the position relation between a vehicle and a traffic marking, comprising the following steps: performing semantic segmentation on an image according to preset categories to obtain a pixel set of multiple categories; performing line extraction on the pixel set to obtain a plurality of line segments; screening out, from the plurality of line segments and according to the position information of the vehicle in the image, a first line segment belonging to the vehicle, the first line segment being used for representing the position of a first pixel group; and identifying the position relation between the vehicle and the traffic marking according to the position relation between the first line segment and a third line segment, and outputting an identification result. Some technical effects of this disclosure: the image is processed to obtain a first line segment representing the position of the line connecting the wheels, and the position of the first line segment is compared with that of a third line segment representing the traffic marking, so as to obtain the position relation between the vehicle and the traffic marking.

Description

Method and device for identifying position relation between vehicle and traffic marking and storage medium
Technical Field
The disclosure relates to the technical field of vehicle state identification based on visual image processing, in particular to a method for identifying the position relation between a vehicle and a traffic marking and a method for judging illegal line pressing.
Background
In the field of vehicle state recognition based on visual image processing, there are many research results on determining, through processing of a visual image, whether a vehicle presses a line during driving (generally understood as at least one wheel of the vehicle crossing a specific traffic marking, such as a white solid line or a yellow solid line). For example:
Patent document CN109598943A, "Method, device and system for monitoring vehicle violation", proposes identifying and classifying vehicle attributes and traffic marking attributes by a CNN (Convolutional Neural Network) method, and determining whether a vehicle violates a rule according to the correlation and positional relationship between the two.
Patent document CN107358170A, "A vehicle illegal line pressing identification method based on mobile machine vision", proposes identifying the lane lines in view through the Hough transform, and then analysing the relation between a lane line and the rectangular shadow region under the vehicle to judge whether illegal line pressing behavior exists.
The thesis "Traffic line pressing discrimination method based on machine vision" (author: Hu Peng, Xi'an University of Science and Technology) proposes a comprehensive judgment based first on whether the distance between the centroid of the object and the line meets a standard, and then on the overlapping region of the pressed line.
The foregoing judgments of whether a vehicle presses a line essentially determine the position relation between the vehicle and the traffic marking. A disadvantage of the prior art is that most of these technologies rely on accurate recognition of the edge profile of the whole vehicle body and require large-area image processing of the pixels related to the vehicle body, which is inefficient.
Disclosure of Invention
To solve at least one of the foregoing technical problems, in one aspect, the present disclosure provides a method for identifying a position relationship between a vehicle and a traffic marking, including: acquiring an image containing the vehicle and the traffic marking; according to a preset category, performing semantic segmentation on the image to obtain a pixel set of multiple categories; the set of pixels comprises a first class set comprising a first group of pixels connecting two wheels on one side of the vehicle; the set of pixels comprises a third class set comprising a corresponding third group of pixels of the traffic marking; performing line extraction on the pixel set to obtain a plurality of line segments for representing different pixel group positions in the pixel set; the line segment includes a third line segment representing a position of the third pixel group; respectively screening out a first line segment belonging to the vehicle from the line segments according to the position information of the vehicle in the image, wherein the first line segment is used for representing the position of the first pixel group; and identifying the position relation between the vehicle and the traffic marking according to the position relation between the first line segment and the third line segment, and outputting an identification result.
Preferably, the set of pixels further comprises a second-class set comprising a second group of pixels connecting two wheels at one end of the vehicle; according to the position information of the vehicle in the image, respectively screening out a second line segment which belongs to the vehicle from the line segments and is used for representing the position of the second pixel group; and identifying the position relation between the vehicle and the traffic marking according to the position relation between the second line segment and the third line segment, and outputting an identification result.
Preferably, the image is processed by a deep learning target detection method, information of a minimum circumscribed rectangle of the vehicle in the image is obtained, and the information of the minimum circumscribed rectangle is used as position information of the vehicle in the image for subsequent operation.
Preferably, the "line extracting the set of pixels" comprises the steps of: performing pixel separation on the pixel set of each category to obtain a corresponding binary image; performing a closing operation and a region connection operation on the binary image to obtain a plurality of connected regions; performing area filtering on the connected regions according to a first set threshold to obtain processed connected blocks; and performing a straight-line fitting operation on the edge of each connected block, and obtaining information of the line segment corresponding to each connected block according to the position information of the minimum circumscribed rectangle of the connected block.
Preferably, the information of the line segment includes coordinates of both ends of the line segment and a straight line representation equation of the line segment.
Preferably, a first central point coordinate of the line segment extracted from the first class set is obtained, and whether the first central point coordinate is located in the minimum circumscribed rectangle is judged; if so, selecting a line segment corresponding to the first center point coordinate with the minimum distance from the center of the minimum circumscribed rectangle, and determining the line segment as a first line segment belonging to the vehicle; similarly, the line segments extracted from the second class set are processed, and a second line segment belonging to the vehicle is confirmed.
In another aspect, the present disclosure provides a method for determining illegal line pressing using the identification method, comprising the following steps: acquiring a first intersection point coordinate of the first line segment and the third line segment; judging whether the distance from the first intersection point to the first center point coordinate of the first line segment is smaller than a second set threshold; if so, judging whether the distance between the first intersection point and the nearest endpoint of the third line segment is smaller than a third set threshold; and if so, generating line pressing prompt information.
Preferably, a second intersection point coordinate of the second line segment and the third line segment is acquired; whether the distance from the second intersection point to the second center point coordinate of the second line segment is smaller than a fourth set threshold is judged; if so, whether the distance between the second intersection point and the nearest endpoint of the third line segment is smaller than a fifth set threshold is judged; and if so, line pressing prompt information is generated.
In yet another aspect, the present disclosure provides a violation prompting device for performing the determination method, comprising a camera, a processor, and a display; the camera is used for capturing, in real time, images containing the vehicle ahead and the traffic marking; the processor is used for executing the determination method and sending the line pressing prompt information to the display; and the display displays the line pressing prompt information.
In yet another aspect, the present disclosure provides an apparatus for identifying a positional relationship between a vehicle and a traffic marking, including: the image acquisition module is used for acquiring an image containing the vehicle and the traffic marking; the semantic segmentation module is used for dividing the pixels of the image into a plurality of classes of pixel sets in a semantic segmentation mode; the set of pixels comprises a first class set comprising a first group of pixels connected to a wheel on one side of the vehicle; the set of pixels comprises a second class set comprising a second set of pixels connected to a wheel at one end of the vehicle; the set of pixels comprises a third class set comprising a corresponding third group of pixels of the traffic marking; the line extraction module is used for performing line extraction on the pixel set to obtain a plurality of line segments used for representing different pixel group positions in the pixel set; the line segment includes a third line segment representing a position of the third pixel group; the line attribution judging module is used for respectively screening a first line segment and a second line segment which belong to the vehicle from the line segments according to the position information of the vehicle in the image, and the first line segment and the second line segment are respectively used for representing the positions of the first pixel group and the second pixel group; and the identification module is used for identifying the position relation between the vehicle and the traffic marking according to the position relation of the first line segment, the second line segment and the third line segment and outputting an identification result.
In a further aspect, the present disclosure proposes a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the identification method.
Some technical effects of this disclosure are: the image is processed to obtain a first line segment representing the connecting line position of the wheels, and the position of the first line segment is compared with a third line segment representing the traffic marking line to obtain the position relation between the vehicle and the traffic marking line.
Drawings
For a better understanding of the technical aspects of the present disclosure, reference may be made to the following drawings, which are included to provide an additional description of the prior art or embodiments. These drawings selectively illustrate articles or methods related to the prior art or some embodiments of the present disclosure. The basic information for these figures is as follows:
FIG. 1 is a schematic diagram illustrating positions of a first pixel group, a second pixel group, and a third pixel group according to an embodiment;
FIG. 2 is a schematic diagram of a minimum bounding rectangle in one embodiment;
fig. 3 is a schematic position diagram of the first line segment, the second line segment, and the third line segment in one embodiment.
In the above drawings, the reference numbers and their corresponding technical features are as follows:
10-vehicle, 11-minimum circumscribed rectangle, 20-first pixel group, 21-first line segment, 30-second pixel group, 31-second line segment, 40-third pixel group, 41-third line segment.
Detailed Description
The technical means or technical effects referred to by the present disclosure will be further described below, and it is apparent that the examples (or embodiments) provided are only some embodiments intended to be covered by the present disclosure, and not all embodiments. All other embodiments, which can be made by those skilled in the art without any inventive step, will be within the scope of the present disclosure, either explicitly or implicitly based on the embodiments and the text of the present disclosure.
In one aspect, the present disclosure provides a method for identifying a position relationship between a vehicle and a traffic marking, which includes the following steps: acquiring an image containing the vehicle and the traffic marking; according to a preset category, performing semantic segmentation on the image to obtain a pixel set of multiple categories; the set of pixels comprises a first class set comprising a first group of pixels connecting two wheels on one side of the vehicle; the set of pixels comprises a third class set comprising a corresponding third group of pixels of the traffic marking; performing line extraction on the pixel set to obtain a plurality of line segments for representing different pixel group positions in the pixel set; the line segment includes a third line segment representing a position of the third pixel group; respectively screening out a first line segment belonging to the vehicle from the line segments according to the position information of the vehicle in the image, wherein the first line segment is used for representing the position of the first pixel group; and identifying the position relation between the vehicle and the traffic marking according to the position relation between the first line segment and the third line segment, and outputting an identification result.
It should be noted that the identification method is suitable for determining line pressing behavior of a vehicle, but is not limited thereto; it is also suitable, for example, for determining the relative position of the vehicle and the road. Therefore, the traffic markings are not limited to lane markings, but may be other types of markings on the road, including dashed lines and solid lines.
In general, to determine the positional relationship between a vehicle and a traffic marking, a conventional processing method obtains the outline of a three-dimensional model of the vehicle by image processing and calculates the positional relationship using the position coordinates of the outline and those of the traffic marking, or calculates the positional relationship by obtaining the projection of the vehicle on the road by image processing. Such methods involve a large amount of calculation, and it is difficult to always maintain high accuracy in image processing because of the variety of vehicle body forms. The present scheme instead focuses on the positions of the wheels: the image is processed to obtain a first line segment representing the position of the line connecting the wheels, and the position of the first line segment is compared with that of a third line segment representing the traffic marking, so as to obtain the position relation between the vehicle and the traffic marking.
In the identification method, the image can be acquired in various ways: for example, videos of the road and vehicles can be recorded by a fixed camera device and then processed to obtain the required images; vehicles on the road can be photographed by a fixed camera to obtain the required images; of course, the images may also be obtained by capturing videos or pictures with a mobile camera device or camera.
When the recognition method is executed, a deep learning model trained in advance needs to be called. The deep learning model can recognize the pixel connecting line between the two wheels on one side of a vehicle in an image (such as the left front wheel and the left rear wheel, or the right front wheel and the right rear wheel) as pixels of one category; the connecting line is a line with a certain pixel width, and a person skilled in the art can customize this width (the number of pixels), for example 5-10 pixels. The deep learning model may adopt models such as a DBN (Deep Belief Network), CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), or LSTM (Long Short-Term Memory network). When training these models, the pixels in an image need to be classified in a user-defined manner; generally, the pixels of the "pixel connecting line between the two wheels on one side of the vehicle" (the first pixel group) may be taken as the first class set, the pixels of the "pixel connecting line between the two wheels at one end of the vehicle" as the second class set, the pixels corresponding to at least one traffic marking (such as a lane line) as the third class set, and the pixels corresponding to the image background as the fourth class set. After the different classes in the image are classified and labeled, sample data are obtained and used as the basis of deep learning model training. The remaining training steps of the deep learning model may adopt many existing techniques and, not being the means of the present application, are not described here again.
In one embodiment, the semantic segmentation method is FCN (Fully Convolutional Network), but those skilled in the art may use other semantic segmentation methods, such as R-CNN-based region semantic segmentation or DeepLab (a CNN-based semantic segmentation model developed by Google on TensorFlow).
It should be noted that the pixel set includes sets of multiple categories, such as a first class set, a second class set, and so on. Each class set may include a plurality of pixel groups; for example, the first class set includes the first pixel group, which means that in some embodiments the first class set may also include pixel groups at other positions in the image, namely pixel groups that the deep learning model interprets as sharing some attributes with the first pixel group. However, this does not affect the recognition result of the recognition method described in the present application, and those skilled in the art can also avoid this situation to some extent by improving the accuracy of target matching when training the deep learning model.
In one embodiment, the pixels of the first class set, the third class set and the fourth class set constitute the aforementioned pixel set. In one embodiment, the pixels of the first class set, the second class set, the third class set and the fourth class set constitute the aforementioned pixel set. Of course, the set of pixels may also include more classes of pixels.
The line extraction is performed on the pixel set in order to obtain line segment information representing the position of each pixel group, and the line segment information generally includes position information of the line segment, that is, position information reflecting the pixel group. With the line segment information, the position relation among different pixel groups can be conveniently judged.
Since in practical situations there may be a plurality of vehicles in the image, and hence a plurality of first pixel groups, "screening out a first line segment belonging to the vehicle from the plurality of line segments" means that, for a specific vehicle, the corresponding first pixel group and its first line segment can be found. Of course, the corresponding first pixel group and first line segment can also be found for all vehicles in the image at the same time. The position information of the vehicle in the image can be obtained by various existing target detection technologies, which are not the focus here. With the position information of the vehicle, the first line segment corresponding to the vehicle can be screened out; with the first line segment, the position relation between the vehicle and the traffic marking can be identified through the position relation between the first line segment and the third line segment.
The recognition result can be output in various ways; for example, it can be prompted by voice, text, or graphics, indicating that the vehicle is located above a certain traffic marking, that the vehicle is close to a certain traffic marking, or whether the vehicle violates traffic regulations. Of course, it is also possible to output the recognition result only as a number, a code, or the like to a memory, without displaying it on a display interface.
In one embodiment, the set of pixels further comprises a second class set comprising a second set of pixels connecting two wheels at one end of the vehicle; according to the position information of the vehicle in the image, respectively screening out a second line segment which belongs to the vehicle from the line segments and is used for representing the position of the second pixel group; and identifying the position relation between the vehicle and the traffic marking according to the position relation between the second line segment and the third line segment, and outputting an identification result.
The one side of the vehicle refers to a left side or a right side of the vehicle, and the one end of the vehicle refers to a front end or a rear end of the vehicle.
As shown in fig. 1, fig. 1 is an image containing a vehicle 10; after semantic segmentation is performed on the image, a first class set including a first pixel group 20, a second class set including a second pixel group 30, and a third class set including a third pixel group 40 can be obtained. Fig. 2 shows the result of performing deep learning target detection on the image, yielding the minimum bounding rectangle 11 of the vehicle. Fig. 3 shows the main elements obtained after line extraction of the pixel set, namely the first line segment 21, the second line segment 31, and the third line segment 41. The traffic markings illustrated here are lane markings. It can thus be seen that the positional relationship between the vehicle and the traffic markings (lane markings in the figure) can be determined, from one aspect, based on the positional relationship between the first line segment 21 and the third line segment 41; similarly, it may be determined from another aspect based on the positional relationship between the second line segment 31 and the third line segment 41.
In one embodiment, the image is processed by a deep learning target detection method, information of the minimum circumscribed rectangle of the vehicle in the image is obtained, and this information is used as the position information of the vehicle in the image for subsequent operation. For example, an anchor-free deep learning target detection method can be adopted, whose process mainly comprises two parts: image encoding and image decoding. The image encoding part mainly adopts VGG16 as the backbone, which is a classical classification network of repeatedly stacked 3x3 small convolution kernels and 2x2 max-pooling layers, and is beneficial for acquiring semantic information in the receptive field. The image decoding part enlarges the encoded features by up-sampling, realized by repeated 2x deconvolution operations, and performs regression prediction with a 1x1 convolution kernel on the decoded features. The detection result is shown in fig. 2; the obtained result is the minimum bounding rectangle 11 of the vehicle in the image. When there are a plurality of vehicles in the image, a plurality of minimum bounding rectangles are obtained.
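The spatial bookkeeping of this encoder/decoder structure can be sketched as follows (an illustration only, not the patented implementation; the counts of five pooling and five deconvolution stages are assumed for symmetry): each 2x2 max-pool halves the feature-map size, each stride-2 deconvolution doubles it back, and the final 1x1 regression convolution leaves the size unchanged.

```python
def encoder_size(h, w, num_pools=5):
    # VGG16-style encoder: 3x3 convolutions with padding 1 preserve the
    # spatial size; each 2x2 max-pooling layer halves height and width.
    for _ in range(num_pools):
        h, w = h // 2, w // 2
    return h, w

def decoder_size(h, w, num_deconvs=5):
    # Decoder: each stride-2 deconvolution doubles height and width; the
    # final 1x1 regression convolution leaves the size unchanged.
    for _ in range(num_deconvs):
        h, w = h * 2, w * 2
    return h, w
```

For a 512x512 input, five pooling stages give a 16x16 encoding, and five deconvolutions restore the original 512x512 resolution for dense regression.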
After semantic segmentation, a plurality of pixel sets can be obtained, but the result of semantic segmentation alone is not enough for position analysis of the different pixel groups; a line extraction operation is needed to express the pixel groups in the form of line segments. In one embodiment, "line extracting the set of pixels" comprises the steps of: performing pixel separation on the pixel set of each category to obtain a corresponding binary image; performing a closing operation and a region connection operation on the binary image to obtain a plurality of connected regions; performing area filtering on the connected regions according to a first set threshold to obtain processed connected blocks; and performing a straight-line fitting operation on the edge of each connected block, and obtaining the information of the line segment corresponding to each connected block according to the position information of the minimum circumscribed rectangle of the connected block. In this embodiment, the first set threshold may be 5% or 8% of the image area occupied by the vehicle; of course, those skilled in the art may also choose other values as needed. In this embodiment, the connected blocks refer to the connected regions remaining after the area filtering step. The straight-line fitting operation regularizes the edges of the connected blocks and recovers some occluded edges by means of straight-line fitting.
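The connected-region and line-fitting steps above can be sketched in plain Python (a minimal illustration under assumptions not stated in the patent: the closing operation is omitted, 4-connectivity is used, the area filter is a pixel-count threshold, and the least-squares fit assumes a non-vertical line):

```python
def extract_segments(mask, min_area):
    # Line extraction sketch for one binary class mask (list of rows of 0/1):
    # 1) find 4-connected regions, 2) drop regions smaller than min_area,
    # 3) fit a least-squares line to each surviving region and clip it to
    #    the region's bounding-rectangle x-range, returning endpoints.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    segments = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # flood-fill one connected region
                stack, pts = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    pts.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pts) < min_area:
                    continue  # area filtering
                xs = [p[1] for p in pts]
                ys = [p[0] for p in pts]
                x0, x1 = min(xs), max(xs)  # bounding rectangle in x
                n = len(pts)
                mx, my = sum(xs) / n, sum(ys) / n
                sxx = sum((x - mx) ** 2 for x in xs)
                sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                k = sxy / sxx if sxx else 0.0  # slope (non-vertical assumed)
                b = my - k * mx
                segments.append(((x0, k * x0 + b), (x1, k * x1 + b)))
    return segments
```

In practice this pipeline would typically be built from library morphology and fitting routines rather than hand-rolled loops; the sketch only shows the data flow from binary mask to segment endpoints.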
In one embodiment, the information of the line segment includes coordinates of both ends of the line segment and a straight line representation equation of the line segment. In other embodiments, the information for the line segment may include a midpoint coordinate or a centerline representation equation for the line segment.
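A minimal sketch of such line segment information, combining the endpoint coordinates, a straight-line equation, and the midpoint mentioned above (the dictionary layout is an assumption for illustration):

```python
def segment_info(p0, p1):
    # Record a segment by its endpoint coordinates together with the
    # coefficients (a, b, c) of the straight line a*x + b*y + c = 0
    # passing through both endpoints, plus the midpoint for convenience.
    (x0, y0), (x1, y1) = p0, p1
    a, b = y1 - y0, x0 - x1
    c = -(a * x0 + b * y0)
    return {
        "endpoints": (p0, p1),
        "line": (a, b, c),
        "midpoint": ((x0 + x1) / 2, (y0 + y1) / 2),
    }
```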
Considering that there may be a plurality of vehicles, and hence a plurality of first line segments and second line segments, when identifying the position relationship of a certain vehicle it is necessary to find the first line segment and second line segment belonging to that vehicle. In one embodiment, the first center point coordinates of the line segments extracted from the first class set are obtained, and it is judged whether each first center point coordinate is located within the minimum circumscribed rectangle; if so, the line segment whose first center point coordinate has the minimum distance from the center of the minimum circumscribed rectangle is selected and determined as the first line segment belonging to the vehicle. Similarly, the line segments extracted from the second class set are processed to confirm the second line segment belonging to the vehicle. In one case, as shown in fig. 3, the first center point coordinate is the midpoint coordinate of the first line segment 21, and whether the first line segment 21 belongs to the vehicle is determined by judging whether this coordinate is within the minimum bounding rectangle 11; evidently, the first line segment 21 in the figure belongs to the vehicle. Similarly, the second center point coordinate of the second line segment 31, that is, its midpoint coordinate, may be obtained, and it may likewise be determined that the second line segment 31 belongs to the vehicle.
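The attribution step above can be sketched as follows (an illustrative sketch; the function name and the (x_min, y_min, x_max, y_max) rectangle convention are assumptions, not from the patent):

```python
import math

def pick_segment_for_vehicle(segments, rect):
    # Keep only segments whose midpoint falls inside the vehicle's minimum
    # bounding rectangle (x_min, y_min, x_max, y_max), then return the one
    # whose midpoint is closest to the rectangle's center.
    x_min, y_min, x_max, y_max = rect
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    best, best_d = None, None
    for (x0, y0), (x1, y1) in segments:
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        if x_min <= mx <= x_max and y_min <= my <= y_max:
            d = math.hypot(mx - cx, my - cy)
            if best_d is None or d < best_d:
                best, best_d = ((x0, y0), (x1, y1)), d
    return best
```

Returning None when no midpoint falls inside the rectangle corresponds to the case where no first (or second) line segment can be attributed to that vehicle.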
In another aspect, the present disclosure provides a method for determining illegal line pressing using the identification method, comprising the following steps: acquiring a first intersection point coordinate of the first line segment and the third line segment; judging whether the distance from the first intersection point to the first center point coordinate of the first line segment is smaller than a second set threshold; if so, judging whether the distance between the first intersection point and the nearest endpoint of the third line segment is smaller than a third set threshold; and if so, generating line pressing prompt information. In one case, as can be seen from figs. 1, 2 and 3, the first line segment 21 and the third line segment 41 have no intersection, but the second line segment 31 and the third line segment 41 do, and it can be determined that the vehicle pressed a line during driving. The significance of the second and third set thresholds is that they make identification of the line pressing condition more accurate (because at times the first line segment cannot perfectly reflect the true position of the wheels on one side, a certain error arises). The specific values of the second and third set thresholds may be set according to actual needs; for example, they may be values such as 40% or 45% of the length of the first line segment.
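The intersection-and-threshold test above can be sketched as follows (a sketch under assumptions: the thresholds are passed as plain distances, the intersection is taken between the segments' supporting lines, and parallel lines are treated as no intersection):

```python
import math

def check_line_pressing(seg_a, seg_b, mid_thresh, end_thresh):
    # Violation-check sketch: intersect the infinite lines through seg_a
    # (the wheel connecting line) and seg_b (the traffic marking), then
    # report a line press only if the intersection lies within mid_thresh
    # of seg_a's midpoint AND within end_thresh of seg_b's nearest endpoint.
    (ax0, ay0), (ax1, ay1) = seg_a
    (bx0, by0), (bx1, by1) = seg_b
    # Coefficients of a*x + b*y = c for each segment's supporting line.
    a1, b1 = ay1 - ay0, ax0 - ax1
    c1 = a1 * ax0 + b1 * ay0
    a2, b2 = by1 - by0, bx0 - bx1
    c2 = a2 * bx0 + b2 * by0
    det = a1 * b2 - a2 * b1
    if det == 0:
        return False  # parallel lines: no intersection, no line press
    ix = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    iy = (a1 * c2 - a2 * c1) / det
    mx, my = (ax0 + ax1) / 2, (ay0 + ay1) / 2
    d_mid = math.hypot(ix - mx, iy - my)
    d_end = min(math.hypot(ix - bx0, iy - by0),
                math.hypot(ix - bx1, iy - by1))
    return d_mid < mid_thresh and d_end < end_thresh
```

Requiring both distance conditions reflects the role of the second and third set thresholds: the intersection must lie near the wheel line's midpoint and near the marking itself before a press is reported.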
Similarly, in one embodiment, the second intersection point coordinate of the second line segment and the third line segment is acquired; it is judged whether the distance from the second intersection point to the second center point coordinate of the second line segment is smaller than a fourth set threshold; if so, it is judged whether the distance from the second intersection point to the nearest endpoint of the third line segment is smaller than a fifth set threshold; and if so, line pressing prompt information is generated. This embodiment judges whether the vehicle presses the line from another angle under the same principle.
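The two-threshold test above can be sketched with a standard parametric segment-intersection check followed by the two distance comparisons. Function and parameter names (`seg_intersection`, `is_line_pressing`, `mid_thresh`, `end_thresh`) are illustrative assumptions, not from the disclosure:

```python
import math

def seg_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1p2 and p3p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None                                   # parallel or degenerate
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None                                       # lines cross outside the segments

def is_line_pressing(seg, marking, mid_thresh, end_thresh):
    """Apply the two-threshold test described above: the intersection must lie
    near the wheel-line midpoint AND near an endpoint of the marking segment."""
    p = seg_intersection(seg[0], seg[1], marking[0], marking[1])
    if p is None:
        return False                                  # no intersection, no line pressing
    mid = ((seg[0][0] + seg[1][0]) / 2.0, (seg[0][1] + seg[1][1]) / 2.0)
    d_mid = math.hypot(p[0] - mid[0], p[1] - mid[1])
    d_end = min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in marking)
    return d_mid < mid_thresh and d_end < end_thresh
```

Applying `is_line_pressing` to the first line segment uses the second and third set thresholds; applying it to the second line segment uses the later pair of thresholds.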
In yet another aspect, the present disclosure provides a violation prompting device for executing the judging method, comprising a camera, a processor and a display. The camera captures, in real time, images containing the vehicle ahead; the processor executes the judging method and sends the line pressing prompt information to the display; and the display displays the line pressing prompt information.
In yet another aspect, the present disclosure provides an apparatus for identifying the position relationship between a vehicle and a traffic marking, including: an image acquisition module for acquiring an image containing the vehicle and the traffic marking; a semantic segmentation module for dividing the pixels of the image into pixel sets of a plurality of classes by semantic segmentation, the pixel sets comprising a first class set containing a first pixel group connecting two wheels on one side of the vehicle, a second class set containing a second pixel group connecting two wheels at one end of the vehicle, and a third class set containing a third pixel group corresponding to the traffic marking; a line extraction module for performing line extraction on the pixel sets to obtain a plurality of line segments representing the positions of the different pixel groups, the line segments including a third line segment representing the position of the third pixel group; a line attribution judging module for screening out, from the line segments and according to the position information of the vehicle in the image, the first line segment and the second line segment belonging to the vehicle, which represent the positions of the first and second pixel groups respectively; and an identification module for identifying the position relationship between the vehicle and the traffic marking according to the position relationship among the first, second and third line segments, and outputting an identification result.
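The module structure above can be sketched as a thin pipeline in which each module is a pluggable callable. Every name below (class, parameters, dictionary keys) is an illustrative assumption, not part of the disclosure:

```python
class PositionRelationIdentifier:
    """Sketch of the apparatus: four modules wired into one pipeline."""

    def __init__(self, acquire, segment, extract, attribute, identify):
        self.acquire = acquire      # image acquisition module
        self.segment = segment      # semantic segmentation module
        self.extract = extract      # line extraction module
        self.attribute = attribute  # line attribution judging module
        self.identify = identify    # identification module

    def run(self):
        image = self.acquire()
        pixel_sets = self.segment(image)        # class name -> pixel set
        segments = self.extract(pixel_sets)     # class name -> line segment(s)
        first, second = self.attribute(segments, image)
        return self.identify(first, second, segments.get("marking"))
```

Keeping each module a plain callable means any stage (for example the segmentation backend) can be swapped without touching the identification logic.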
In a further aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the identification method. Those skilled in the art will understand that all or part of the steps in the embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable medium, which may include various media capable of storing program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, or a magnetic or optical disk.
Within the knowledge and ability of those skilled in the art, the various embodiments or features mentioned herein may, where no conflict arises, be combined with each other to form additional alternative embodiments; the limited number of alternative embodiments formed by such combinations of features, though not listed above, remain within the scope of the disclosed technology, as those skilled in the art will understand or infer from the figures and the description above.
Moreover, the descriptions of the various embodiments differ in emphasis; where a detail is not described in one embodiment, reference may be made to the prior art or to other related descriptions herein.
It is emphasized that the above embodiments, which are typical and preferred embodiments of the present disclosure, are provided only to explain the technical solutions of the disclosure in detail for the convenience of the reader, and do not limit the protection scope or the application of the disclosure. Any modifications, equivalents, improvements and the like made within the spirit and principle of the disclosure are intended to fall within its scope of protection.

Claims (10)

1. A method for identifying the position relationship between a vehicle and a traffic marking, characterized by comprising the following steps:
acquiring an image containing the vehicle and the traffic marking;
according to a preset category, performing semantic segmentation on the image to obtain a pixel set of multiple categories;
the set of pixels comprises a first class set comprising a first group of pixels connecting two wheels on one side of the vehicle; the set of pixels comprises a third class set comprising a corresponding third group of pixels of the traffic marking;
performing line extraction on the pixel set to obtain a plurality of line segments for representing different pixel group positions in the pixel set; the line segment includes a third line segment representing a position of the third pixel group;
screening out, from the line segments and according to the position information of the vehicle in the image, a first line segment belonging to the vehicle, the first line segment being used for representing the position of the first pixel group;
and identifying the position relation between the vehicle and the traffic marking according to the position relation between the first line segment and the third line segment, and outputting an identification result.
2. The identification method according to claim 1, characterized in that:
the set of pixels further comprises a second class set comprising a second pixel group connecting two wheels at one end of the vehicle;
according to the position information of the vehicle in the image, screening out from the line segments a second line segment belonging to the vehicle, the second line segment being used for representing the position of the second pixel group;
and identifying the position relation between the vehicle and the traffic marking according to the position relation between the second line segment and the third line segment, and outputting an identification result.
3. The identification method according to claim 2, wherein
"performing line extraction on the pixel set" comprises the following steps:
carrying out pixel separation on the pixel set of each category to obtain a corresponding binary image;
performing a closing operation and a region-connection operation on the binary image to obtain a plurality of connected regions;
performing area filtering on the connected regions according to a first set threshold to obtain processed connected blocks;
and performing a straight-line fitting operation on the edge of each connected block, and obtaining the information of the line segment corresponding to each connected block according to the position information of the minimum circumscribed rectangle of that connected block.
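A minimal sketch of this extraction pipeline, assuming SciPy is available and simplifying the "minimum circumscribed rectangle" to each region's axis-aligned bounding box; the function name and `min_area` parameter are illustrative, not from the claim:

```python
import numpy as np
from scipy import ndimage

def extract_segments(mask, min_area=20):
    """Close small gaps in a binary mask, label connected regions, drop
    regions below the area threshold, then fit a line to each surviving
    region and clip it to the region's bounding box."""
    closed = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(closed)                 # connected-region labelling
    segments = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if xs.size < min_area:
            continue                                  # area filter (first set threshold)
        if xs.max() > xs.min():
            # least-squares line fit y = k*x + b over the region's pixels
            k, b = np.polyfit(xs, ys, 1)
            x1, x2 = xs.min(), xs.max()               # clip to bounding box
            segments.append(((x1, k * x1 + b), (x2, k * x2 + b)))
        else:                                         # vertical region: fit fails, use extremes
            segments.append(((xs[0], ys.min()), (xs[0], ys.max())))
    return segments
```

In a production implementation the same steps map naturally onto OpenCV's `morphologyEx`, `connectedComponentsWithStats` and `fitLine`, which also provide the rotated minimum circumscribed rectangle via `minAreaRect`.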
4. The identification method according to claim 2, characterized in that:
processing the image by a deep learning target detection method to obtain the information of the minimum circumscribed rectangle of the vehicle in the image, and performing subsequent operations by taking the information of the minimum circumscribed rectangle as the position information of the vehicle in the image.
5. The identification method according to claim 4, characterized in that:
acquiring first center point coordinates of the line segments extracted from the first class set,
judging whether the first central point coordinate is positioned in the minimum circumscribed rectangle or not;
if so, selecting a line segment corresponding to the first center point coordinate with the minimum distance from the center of the minimum circumscribed rectangle, and determining the line segment as a first line segment belonging to the vehicle;
similarly, processing the line segments extracted from the second class set to confirm the second line segment belonging to the vehicle.
6. A method for judging a line-pressing violation using the identification method as claimed in any one of claims 1 to 5, comprising the following steps:
acquiring a first intersection point coordinate of the first line segment and the third line segment; judging whether the distance from the first intersection point to the first center point coordinate of the first line segment is smaller than a second set threshold; if so, judging whether the distance from the first intersection point to the nearest endpoint of the third line segment is smaller than a third set threshold; and if so, generating line pressing prompt information.
7. The method of claim 6, further comprising the steps of:
acquiring a second intersection point coordinate of the second line segment and the third line segment; judging whether the distance from the second intersection point to the second center point coordinate of the second line segment is smaller than a fourth set threshold; if so, judging whether the distance from the second intersection point to the nearest endpoint of the third line segment is smaller than a fifth set threshold; and if so, generating line pressing prompt information.
8. A violation prompting device for executing the judging method according to claim 6, characterized in that:
comprises a camera, a processor and a display;
the camera is used for shooting and acquiring images containing front vehicles and the traffic marking in real time;
the processor is used for executing the judging method of claim 6 and sending the line pressing prompt information to the display;
and the display displays the line pressing prompt information.
9. An apparatus for identifying the position relationship between a vehicle and a traffic marking, comprising:
the image acquisition module is used for acquiring an image containing the vehicle and the traffic marking;
the semantic segmentation module is used for dividing the pixels of the image into pixel sets of a plurality of classes by semantic segmentation; the pixel sets comprise a first class set comprising a first pixel group connecting two wheels on one side of the vehicle; the pixel sets comprise a second class set comprising a second pixel group connecting two wheels at one end of the vehicle; the pixel sets comprise a third class set comprising a third pixel group corresponding to the traffic marking;
the line extraction module is used for performing line extraction on the pixel sets to obtain a plurality of line segments representing the positions of different pixel groups; the line segments include a third line segment representing the position of the third pixel group;
the line attribution judging module is used for respectively screening a first line segment and a second line segment which belong to the vehicle from the line segments according to the position information of the vehicle in the image, and the first line segment and the second line segment are respectively used for representing the positions of the first pixel group and the second pixel group;
and the identification module is used for identifying the position relation between the vehicle and the traffic marking according to the position relation of the first line segment, the second line segment and the third line segment and outputting an identification result.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program realizes the steps of the identification method as claimed in any one of claims 1 to 5 when executed by a processor.
CN201910976859.9A 2019-10-14 2019-10-14 Method and device for identifying position relationship between vehicle and traffic marking and storage medium Active CN110956081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910976859.9A CN110956081B (en) 2019-10-14 2019-10-14 Method and device for identifying position relationship between vehicle and traffic marking and storage medium


Publications (2)

Publication Number Publication Date
CN110956081A true CN110956081A (en) 2020-04-03
CN110956081B CN110956081B (en) 2023-05-23

Family

ID=69975643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910976859.9A Active CN110956081B (en) 2019-10-14 2019-10-14 Method and device for identifying position relationship between vehicle and traffic marking and storage medium

Country Status (1)

Country Link
CN (1) CN110956081B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180260959A1 (en) * 2017-03-08 2018-09-13 Tsinghua University Inspection apparatuses and methods for segmenting an image of a vehicle
CN110077399A (en) * 2019-04-09 2019-08-02 魔视智能科技(上海)有限公司 Vehicle collision avoidance method based on fusion of road markings and wheel detection
CN110299028A (en) * 2019-07-31 2019-10-01 深圳市捷顺科技实业股份有限公司 Parking line-pressing detection method, apparatus, device and readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DENG CHAO: "Research on Digital Image Processing and Pattern Recognition" (《数字图像处理与模式识别研究》), Geological Publishing House, 30 June 2018 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022078074A1 (en) * 2020-10-16 2022-04-21 广州大学 Method and system for detecting position relation between vehicle and lane line, and storage medium
CN112418183A (en) * 2020-12-15 2021-02-26 广州小鹏自动驾驶科技有限公司 Parking lot element extraction method and device, electronic equipment and storage medium
CN113238560A (en) * 2021-05-24 2021-08-10 珠海市一微半导体有限公司 Robot map rotating method based on line segment information
WO2023040404A1 (en) * 2021-09-17 2023-03-23 北京极智嘉科技股份有限公司 Line segment matching method and apparatus, computer device, and storage medium
CN114842430A (en) * 2022-07-04 2022-08-02 江苏紫琅汽车集团股份有限公司 Vehicle information identification method and system for road monitoring
CN114842430B (en) * 2022-07-04 2022-09-09 江苏紫琅汽车集团股份有限公司 Vehicle information identification method and system for road monitoring
CN115953418A (en) * 2023-02-01 2023-04-11 公安部第一研究所 Method, storage medium and equipment for stripping notebook region in security check CT three-dimensional image
CN115953418B (en) * 2023-02-01 2023-11-07 公安部第一研究所 Notebook area stripping method, storage medium and device in security inspection CT three-dimensional image

Also Published As

Publication number Publication date
CN110956081B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN110956081B (en) Method and device for identifying position relationship between vehicle and traffic marking and storage medium
EP3620956B1 (en) Learning method, learning device for detecting lane through classification of lane candidate pixels and testing method, testing device using the same
CN109426801B (en) Lane line instance detection method and device
CN112528878B (en) Method and device for detecting lane line, terminal equipment and readable storage medium
CN111666921B (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
CN112132156A (en) Multi-depth feature fusion image saliency target detection method and system
CN110675407B (en) Image instance segmentation method and device, electronic equipment and storage medium
Azad et al. New method for optimization of license plate recognition system with use of edge detection and connected component
CN109034136A (en) Image processing method, device, picture pick-up device and storage medium
Zhang et al. Automatic detection of road traffic signs from natural scene images based on pixel vector and central projected shape feature
CN111627057A (en) Distance measuring method and device and server
CN112613434A (en) Road target detection method, device and storage medium
CN111415336A (en) Image tampering identification method and device, server and storage medium
CN117218622A (en) Road condition detection method, electronic equipment and storage medium
CN111062347A (en) Traffic element segmentation method in automatic driving, electronic device and storage medium
CN111191482A (en) Brake lamp identification method and device and electronic equipment
EP3764335A1 (en) Vehicle parking availability map systems and methods
CN116071557A (en) Long tail target detection method, computer readable storage medium and driving device
CN111144361A (en) Road lane detection method based on binaryzation CGAN network
KR102026280B1 (en) Method and system for scene text detection using deep learning
CN116129380A (en) Method and system for judging driving lane of automatic driving vehicle, vehicle and storage medium
CN116052189A (en) Text recognition method, system and storage medium
CN110781863A (en) Method and device for identifying position relation between vehicle and local area and storage medium
CN115588191A (en) Cell sorting method and system based on image acoustic flow control cell sorting model
CN114842198A (en) Intelligent loss assessment method, device and equipment for vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant