CN109919002B - Yellow stop line identification method and device, computer equipment and storage medium - Google Patents

Yellow stop line identification method and device, computer equipment and storage medium

Info

Publication number
CN109919002B
CN109919002B (application number CN201910062723.7A)
Authority
CN
China
Prior art keywords: yellow, foreground region, image, line, forbidden
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910062723.7A
Other languages
Chinese (zh)
Other versions
CN109919002A (en)
Inventor
巢中迪
庄伯金
王少军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910062723.7A priority Critical patent/CN109919002B/en
Publication of CN109919002A publication Critical patent/CN109919002A/en
Priority to PCT/CN2019/115947 priority patent/WO2020151299A1/en
Application granted granted Critical
Publication of CN109919002B publication Critical patent/CN109919002B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/24 — Aligning, centring, orientation detection or correction of the image

Abstract

The invention discloses a yellow stop line identification method and device, computer equipment and a storage medium, relating to the field of artificial intelligence. The yellow stop line identification method comprises the following steps: acquiring an image to be classified, the image being related to ground markings; extracting a foreground region from the image to be classified with a yolo object detection framework, wherein the detection result of the framework is expressed as candidate boxes, a candidate box whose detected class is the yellow stop line is the foreground region, and the yellow stop line is one type of ground marking; identifying the yellow stop line in the foreground region based on a color space to obtain the yellow stop line in the foreground region; and identifying yellow stop lines in the non-foreground region of the image to be classified with a convolutional neural network model to obtain the yellow stop lines of the non-foreground region. With this yellow stop line identification method, near-real-time and accurate identification of yellow stop lines can be achieved even under the influence of the external environment.

Description

Yellow stop line identification method and device, computer equipment and storage medium
[Technical Field]
The invention relates to the field of artificial intelligence, and in particular to a yellow stop line identification method and device, computer equipment and a storage medium.
[Background Art]
A yellow stop line is a no-parking line marked in yellow and is one type of ground marking; it includes yellow stop lines along the road edge as well as yellow stop lines drawn as a grid on the road surface. At present, yellow stop lines are subject to long-term adverse environmental factors such as illumination, standing water and corrosion, so existing detection cannot meet the real-world requirements for identifying them.
[Summary of the Invention]
In view of the above, embodiments of the present invention provide a yellow stop line identification method, device, computer equipment and storage medium, which solve the problem that yellow stop lines cannot be identified accurately and in near real time under the influence of the external environment.
In a first aspect, an embodiment of the present invention provides a yellow stop line identification method, including:
acquiring an image to be classified, wherein the image to be classified is related to ground markings;
extracting a foreground region from the image to be classified with a yolo object detection framework, wherein the detection result of the yolo object detection framework is expressed as candidate boxes, a candidate box whose detected class is the yellow stop line is the foreground region, and the yellow stop line is one type of ground marking;
identifying the yellow stop line in the foreground region based on a color space to obtain the yellow stop line in the foreground region;
and identifying yellow stop lines in the non-foreground region of the image to be classified with a convolutional neural network model to obtain the yellow stop lines of the non-foreground region, wherein the non-foreground region is the image region outside the foreground region in the image to be classified.
In the above aspect and any possible implementation thereof, an implementation is further provided in which extracting the foreground region from the image to be classified with the yolo object detection framework includes:
setting the yolo object detection framework to a detection mode that detects only the yellow stop line;
calculating, with the yolo object detection framework in this detection mode, a target confidence for each detection region of the image to be classified, wherein the detection regions are small image patches obtained by segmenting the image to be classified in advance, and each image patch represents one detection region;
and comparing the target confidence with a preset confidence threshold, and obtaining the foreground region from the detection regions whose target confidence is higher than the confidence threshold.
In the above aspect and any possible implementation thereof, an implementation is further provided in which identifying the yellow stop line in the foreground region based on the color space to obtain the yellow stop line in the foreground region includes:
converting the foreground region into the HSV color space and determining the color space in which the foreground region lies;
judging whether the target color is present in the color space of the foreground region and, if so, performing a straight-line fit on the foreground region with the least squares method based on the target color present in the foreground region, wherein the target color is yellow;
and obtaining the yellow stop line in the foreground region from the result of the straight-line fit.
In the above aspect and any possible implementation thereof, an implementation is further provided in which, before the step of identifying yellow stop lines in the non-foreground region of the image to be classified with a convolutional neural network model, the method further includes:
obtaining training samples, the training samples including training pictures of yellow stop lines;
initializing a convolutional neural network;
and inputting the training samples into the initialized convolutional neural network for training to obtain the convolutional neural network model, the convolutional neural network model being used to identify yellow stop lines.
In the above aspect and any possible implementation thereof, an implementation is further provided in which identifying yellow stop lines in the non-foreground region of the image to be classified with the convolutional neural network model includes:
extracting feature vectors of the non-foreground region with the convolutional neural network model;
calculating, based on the feature vectors, the classification probability of the yellow stop line in the convolutional neural network model;
and determining a non-foreground region whose classification probability for the yellow stop line is greater than a preset classification threshold to be a yellow stop line.
In a second aspect, an embodiment of the present invention provides a yellow stop line identification device, including:
an image-to-be-classified acquisition module, configured to acquire an image to be classified, the image to be classified being related to ground markings;
a foreground region acquisition module, configured to extract a foreground region from the image to be classified with a yolo object detection framework, wherein the detection result of the yolo object detection framework is expressed as candidate boxes, a candidate box whose detected class is the yellow stop line is the foreground region, and the yellow stop line is one type of ground marking;
a foreground region yellow stop line acquisition module, configured to identify the yellow stop line in the foreground region based on a color space to obtain the yellow stop line in the foreground region;
and a non-foreground region yellow stop line acquisition module, configured to identify yellow stop lines in the non-foreground region of the image to be classified with a convolutional neural network model to obtain the yellow stop lines of the non-foreground region, wherein the non-foreground region refers to the image region outside the foreground region in the image to be classified.
In a third aspect, an embodiment of the present invention provides a computer device including a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the yellow stop line identification method described above when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the yellow stop line identification method according to any implementation of the first aspect.
In the embodiments of the present invention, an image to be classified related to ground markings is first acquired so that yellow stop lines can be identified in it; a foreground region is then extracted from the image with a yolo object detection framework, yielding candidate boxes suspected to contain a yellow stop line and quickly and accurately locating the regions of the image most likely to contain one; the yellow stop line in the foreground region is then identified based on a color space, quickly recovering part of the yellow stop lines in the image; finally, yellow stop lines in the non-foreground region are identified with a convolutional neural network model, recovering the yellow stop lines that the yolo object detection framework failed to detect in the first pass. The embodiments of the present invention thus achieve near-real-time identification of yellow stop lines while maintaining identification accuracy.
[Description of the Drawings]
In order to describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of the yellow stop line identification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the yellow stop line identification device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a computer device in accordance with an embodiment of the present invention.
[Detailed Description of the Invention]
For a better understanding of the technical solution of the present invention, the following detailed description of the embodiments of the present invention refers to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, A and B together, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
It should be understood that although the terms first, second, third, etc. may be used to describe the preset ranges, etc. in the embodiments of the present invention, these preset ranges should not be limited to these terms. These terms are only used to distinguish one preset range from another. For example, a first preset range may also be referred to as a second preset range, and similarly, a second preset range may also be referred to as a first preset range without departing from the scope of embodiments of the present invention.
Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
Fig. 1 shows a flowchart of the yellow stop line identification method in this embodiment. The method can be applied to a yellow stop line identification system, which is used to identify yellow stop lines among pavement markings. The system can in particular run on computer equipment that interacts with the user, including but not limited to computers, smartphones and tablets. As shown in fig. 1, the yellow stop line identification method includes the following steps:
S10: acquiring an image to be classified, wherein the image to be classified is related to ground markings.
It will be appreciated that, while the vehicle is running, the vehicle-mounted yellow stop line identification system acquires images related to ground markings in real time through an image capturing device; each such image is an image to be classified.
In an embodiment, the yellow stop line identification system acquires the image to be classified so that, when a yellow stop line appears in it, the yellow stop line can be identified quickly and a preset response can be triggered.
S20: extracting a foreground region from the image to be classified with a yolo object detection framework, wherein the detection result of the yolo object detection framework is expressed as candidate boxes, a candidate box whose detected class is the yellow stop line is the foreground region, and the yellow stop line is one type of ground marking.
Here, the yolo (You Only Look Once) object detection framework is a model capable of both detecting and classifying objects. A candidate box is a marked box suspected of containing a yellow stop line.
In one embodiment, a yolo object detection framework is employed to extract the foreground region from the image to be classified. The yolo object detection framework is trained in advance; its training samples cover various types of ground markings, the yellow stop line being one of them. In particular, the ratio of yellow stop line samples to other ground marking samples may be 1:1; training samples in this ratio effectively prevent the yolo object detection framework from overfitting and improve its detection and classification accuracy.
It will be appreciated that, when a yellow stop line appears in the image to be classified, the yolo object detection framework quickly determines the region of the image that has a high probability of belonging to the yellow stop line, i.e., the foreground region.
Further, in step S20, extracting the foreground region from the image to be classified with the yolo object detection framework specifically includes:
S21: setting the yolo object detection framework to a detection mode that detects only the yellow stop line.
It will be appreciated that the yolo object detection framework is normally used to detect and classify objects in general scenes. Since the embodiments of the present invention only need to detect yellow stop lines, detecting and classifying other ground markings is unnecessary, so the yolo object detection framework can be set to a detection mode that detects only the yellow stop line; this setting is performed by the yellow stop line identification system.
Specifically, when the yolo object detection framework is trained, the training samples are divided into yellow stop line samples and non-yellow-stop-line samples, where the non-yellow-stop-line samples may include any sample other than yellow stop lines and are not limited to ground markings; in addition, the ratio of yellow stop line samples to non-yellow-stop-line samples is set to 1:1 to prevent the trained yolo object detection framework from overfitting. After training on these samples, the yolo object detection framework is stored as a file in the yellow stop line identification system, and the system can load this file at any time to set the framework to the detection mode that detects only the yellow stop line.
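As a concrete illustration of the 1:1 sampling just described, the sketch below balances yellow stop line samples against non-yellow-stop-line samples by downsampling the larger set; the list-of-paths representation and the function name are assumptions for illustration, not part of the patent.

```python
import random

def balance_one_to_one(stop_line_samples, other_samples, seed=0):
    """Return a 1:1 mix of yellow stop line samples and other samples by
    randomly downsampling whichever list is larger."""
    random.seed(seed)
    n = min(len(stop_line_samples), len(other_samples))
    return random.sample(stop_line_samples, n) + random.sample(other_samples, n)

# e.g. lists of image file paths (hypothetical):
# balanced_training_set = balance_one_to_one(stop_line_paths, other_marking_paths)
```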
In an embodiment, setting the yolo object detection framework to the detection mode that detects only the yellow stop line lets the detection focus on the yellow stop line alone, without classifying the other, non-yellow-stop-line parts, while still locating the yellow stop line in the image to be classified, which further improves processing efficiency.
S22: calculating, with the yolo object detection framework in this detection mode, a target confidence for each detection region of the image to be classified, wherein a detection region is a small image patch obtained by segmenting the image to be classified in advance, and each image patch represents one detection region.
Here, the target confidence indicates the probability that a yellow stop line falls within the detection region.
In an embodiment, the image to be classified first needs to be segmented before the target confidence of each detection region is calculated. Specifically, when the image to be classified is square, it can be divided into s × s small image patches, each patch corresponding to one detection region; when the image is rectangular, it can be divided into a × b equally sized patches as detection regions. In general, the detection regions are obtained by cutting the image to be classified into patches of equal size, which improves the continuity of edge features between neighboring detection regions during detection and raises the probability that the foreground region obtained later contains a yellow stop line.
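A minimal sketch of this equal-size segmentation, splitting an image into an s × s or a × b grid of detection regions with NumPy; the function name and grid sizes are illustrative and not fixed by the patent.

```python
import numpy as np

def split_into_detection_regions(image: np.ndarray, rows: int, cols: int):
    """Split an H x W x C image into rows x cols equally sized patches.

    Each patch corresponds to one detection region. For a square image,
    rows == cols == s; for a rectangular image, rows = a and cols = b.
    """
    h, w = image.shape[:2]
    patch_h, patch_w = h // rows, w // cols
    regions = []
    for i in range(rows):
        for j in range(cols):
            top, left = i * patch_h, j * patch_w
            regions.append(image[top:top + patch_h, left:left + patch_w])
    return regions

# Example: a 416 x 416 image split into a 13 x 13 grid of detection regions.
image = np.zeros((416, 416, 3), dtype=np.uint8)
regions = split_into_detection_regions(image, rows=13, cols=13)
print(len(regions))  # 169
```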
In one embodiment, after the detection regions are determined, the detection mode that detects only the yellow stop line is used to calculate a target confidence for each detection region in the image to be classified, so that the foreground region can then be determined from the target confidences.
S23: comparing the target confidence with a preset confidence threshold, and obtaining the foreground region from the detection regions whose target confidence is higher than the confidence threshold.
In steps S21-S23, the detection mode that detects only the yellow stop line helps locate yellow stop lines in the image to be classified: the yolo object detection framework in this mode calculates a target confidence for each detection region, and comparing the target confidence with the preset confidence threshold yields the foreground region. This raises the probability that the foreground region contains a yellow stop line, so that most of the yellow stop lines in the image to be classified are already identified at the yolo stage.
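A minimal sketch of the confidence-threshold step (S22-S23): each detection region's target confidence is compared with a preset threshold, and the regions above it form the foreground region. The (box, confidence) interface and the 0.5 threshold are assumptions for illustration, not the patent's actual yolo framework API.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h)

def extract_foreground_regions(
    region_confidences: List[Tuple[Box, float]],
    confidence_threshold: float = 0.5,
) -> List[Box]:
    """Keep only detection regions whose target confidence exceeds the threshold.

    `region_confidences` pairs each detection region's bounding box with the
    confidence that a yellow stop line falls inside it; in the patent this
    confidence comes from the single-class yolo detector.
    """
    return [box for box, conf in region_confidences if conf > confidence_threshold]

# Illustrative output of a single-class detector over three detection regions.
detections = [((0, 0, 32, 32), 0.91), ((32, 0, 32, 32), 0.12), ((64, 0, 32, 32), 0.73)]
foreground = extract_foreground_regions(detections, confidence_threshold=0.5)
print(foreground)  # [(0, 0, 32, 32), (64, 0, 32, 32)]
```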
S30: identifying the yellow stop line in the foreground region based on a color space to obtain the yellow stop line in the foreground region.
Here, a color space, also called a color model or color system, is a way of describing colors in a generally accepted manner under certain standards.
In one embodiment, because the yellow stop line differs in color from other ground markings, a color space can be used to identify the yellow stop line within the foreground region and obtain it from that region. Using a color space exploits the yellow stop line's own color characteristics to distinguish it from other objects, so the yellow stop line can be obtained from the foreground region quickly and accurately.
Further, in step S30, identifying the yellow stop line in the foreground region based on the color space to obtain the yellow stop line in the foreground region specifically includes:
S31: converting the foreground region into the HSV color space and determining the color space in which the foreground region lies.
In an embodiment, the color space may specifically be the HSV color space, which identifies the yellow stop line more effectively and more accurately than other color spaces.
S32: judging whether the target color is present in the color space of the foreground region and, if so, performing a straight-line fit on the foreground region with the least squares method based on the target color present in the foreground region, wherein the target color is yellow.
It will be appreciated that, besides its color, a yellow stop line must also satisfy a straight-line relationship; the yellow stop line in the foreground region is therefore further confirmed by this straight-line requirement, which excludes non-linear objects that happen to be yellow.
In an embodiment, the yellow pixels present in the foreground region are fitted with a straight line using the least squares method. Specifically, a rectangular coordinate system can be established with image pixels as the smallest unit, the positions of the pixels representing yellow in the foreground region are expressed as coordinates, and a straight line is then fitted to those coordinates by least squares, thereby identifying the yellow stop line in the foreground region. Because this embodiment jointly considers the color and shape characteristics of the yellow stop line, it effectively improves the accuracy of yellow stop line identification in the foreground region.
S33: obtaining the yellow stop line in the foreground region from the result of the straight-line fit.
In one embodiment, when the straight-line fit succeeds, the yellow stop line in the foreground region is obtained.
In steps S31-S33, the characteristics of the yellow stop line are considered jointly, and the yellow stop line is identified in the foreground region by combining the color space with straight-line fitting; this approach identifies yellow stop lines in the foreground region with high accuracy.
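A minimal sketch of how steps S31-S33 can be implemented with OpenCV and NumPy. The HSV bounds for yellow, the minimum pixel count, and the residual tolerance are illustrative assumptions; the patent does not fix these values.

```python
import cv2
import numpy as np

# Illustrative HSV range for yellow; the patent does not specify exact bounds.
YELLOW_LOWER = np.array([20, 80, 80])
YELLOW_UPPER = np.array([35, 255, 255])

def find_yellow_stop_line(foreground_bgr: np.ndarray, min_yellow_pixels: int = 50):
    """Return (slope, intercept) of a fitted line if the foreground region
    contains a yellow, roughly linear marking; otherwise return None."""
    # S31: convert the foreground region to the HSV color space.
    hsv = cv2.cvtColor(foreground_bgr, cv2.COLOR_BGR2HSV)

    # S32: check whether the target color (yellow) is present.
    mask = cv2.inRange(hsv, YELLOW_LOWER, YELLOW_UPPER)
    ys, xs = np.nonzero(mask)
    if len(xs) < min_yellow_pixels:
        return None  # not enough yellow pixels to treat it as a stop line

    # Least-squares straight-line fit y = k*x + b over the yellow pixel coordinates
    # (markings that are close to vertical would need x and y swapped).
    k, b = np.polyfit(xs.astype(np.float64), ys.astype(np.float64), deg=1)

    # S33: accept the fit only if the yellow pixels lie close to the fitted line.
    residuals = ys - (k * xs + b)
    if np.sqrt(np.mean(residuals ** 2)) > 5.0:  # illustrative tolerance in pixels
        return None
    return k, b
```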
S40: identifying yellow stop lines in the non-foreground region of the image to be classified with a convolutional neural network model to obtain the yellow stop lines of the non-foreground region, wherein the non-foreground region refers to the image region outside the foreground region in the image to be classified.
It can be understood that the foreground region is merely the part of the image to be classified where a yellow stop line is most likely to appear; it does not mean that no yellow stop line exists in the non-foreground region. Under adverse environmental factors such as illumination, standing water and corrosion, the proportion of yellow stop lines that fall in the non-foreground region grows with the severity of those factors; in short, the detection performance of the yolo object detection model degrades under adverse environmental conditions.
In an embodiment, a convolutional neural network model can be used to identify yellow stop lines in the non-foreground region and thus obtain the yellow stop lines of the non-foreground region. A convolutional neural network is a deep neural network that can extract deep features of the yellow stop line and maintains high identification accuracy even under adverse environmental factors, although it is relatively slow compared with the yolo network (the neural network from which the yolo object detection model is trained).
Further, before step S40, the method further includes:
S411: obtaining training samples, the training samples including training pictures of yellow stop lines.
In one embodiment, training pictures containing yellow stop lines are used as training samples so that the convolutional neural network can learn the deep features of the yellow stop line and clearly distinguish them from the features of the other training samples.
S412: initializing a convolutional neural network.
Here, the convolutional neural network has network parameters, and the network parameters include weights and biases. In one embodiment, the weights used for initialization satisfy, for every layer l, a condition on the variance S(W_l) expressed in terms of n_l, where n_l denotes the number of training samples input at layer l, S() denotes the variance operation, W_l denotes the weights of layer l, and l denotes the l-th layer of the convolutional neural network. Initializing in this way improves the training efficiency of the convolutional neural network model as well as its identification accuracy. The step of initializing the convolutional neural network need not follow step S411 in sequence; it may be performed after or before step S411.
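A minimal initialization sketch in PyTorch. The description above only states that the layer-l weight variance S(W_l) is expressed in terms of n_l, so the code assumes the common He-style form S(W_l) = 2/n_l (as implemented by kaiming_normal_) and interprets n_l as the number of inputs (fan-in) of layer l; both the constant and the example architecture are assumptions rather than the patent's exact formula.

```python
import torch
import torch.nn as nn

def init_weights(module: nn.Module) -> None:
    """Initialize conv/linear weights so that, for each layer l, the weight
    variance scales as 2 / n_l (He-style; assumed form), where n_l is the
    fan-in of layer l. Biases start at zero."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Example: a small CNN for yellow stop line vs. other-marking classification
# (the architecture is illustrative; the patent does not specify one).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # assumes 64 x 64 input patches
)
model.apply(init_weights)
```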
S413: inputting the training samples into the initialized convolutional neural network for training to obtain the convolutional neural network model, the convolutional neural network model being used to identify yellow stop lines.
In an embodiment, after the required training samples are obtained and the convolutional neural network is initialized, the training samples are input into the initialized convolutional neural network for training; that is, the network parameters are iteratively updated according to the training samples until the network's output for the training samples matches the expected result within an allowable error, yielding the convolutional neural network model used to identify yellow stop lines.
Steps S411-S413 provide a method for training the convolutional neural network model; with this method the training process is accelerated and a convolutional neural network model with a higher recognition rate is obtained.
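A minimal training-loop sketch, continuing the model from the previous sketch. The dummy tensors stand in for the real training samples, and the loss, optimizer and epoch count are assumed choices, not specified by the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the real training samples: 64 x 64 patches labeled
# 1 = yellow stop line, 0 = other (the real data is not reproduced here).
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()                           # assumed loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # assumed optimizer

for epoch in range(10):                                     # illustrative epoch count
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        logits = model(batch_images)
        loss = criterion(logits, batch_labels)
        loss.backward()          # iteratively update the network parameters
        optimizer.step()
```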
Further, in step S40, using the convolutional neural network model to identify yellow stop lines in the non-foreground region of the image to be classified specifically includes:
S421: extracting feature vectors of the non-foreground region with the convolutional neural network model.
S422: calculating, based on the feature vectors, the classification probability of the yellow stop line in the convolutional neural network model.
S423: determining a non-foreground region whose classification probability for the yellow stop line is greater than a preset classification threshold to be a yellow stop line.
In steps S421-S423, it can be understood that, after the convolutional neural network model extracts the feature vectors of the non-foreground region, those feature vectors are compared against the deep features of the training samples extracted during training to decide which training class the non-foreground region belongs to; when the classification probability of the yellow stop line is greater than the preset classification threshold, the non-foreground region corresponding to the extracted feature vectors is determined to be a yellow stop line. This embodiment thus provides a decision procedure that effectively determines which non-foreground regions are yellow stop lines.
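A minimal sketch of steps S421-S423 with the trained model from the previous sketches: the softmax output for the assumed yellow stop line class plays the role of the classification probability, and the 0.8 threshold and class index are illustrative.

```python
import torch
import torch.nn.functional as F

def classify_non_foreground_regions(model, patches: torch.Tensor,
                                    classification_threshold: float = 0.8):
    """Return a boolean mask marking which non-foreground patches are judged
    to be yellow stop lines. `patches` has shape (N, 3, 64, 64); class index 1
    is assumed to be the yellow stop line class."""
    model.eval()
    with torch.no_grad():
        logits = model(patches)                  # S421/S422: features -> class scores
        probs = F.softmax(logits, dim=1)[:, 1]   # classification probability of class 1
    return probs > classification_threshold      # S423: compare with the threshold

# Usage with the trained model:
# is_stop_line = classify_non_foreground_regions(model, non_foreground_patches)
```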
It can be understood that, in embodiments of the present invention, the yellow stop lines in the foreground region are identified first with the yolo object detection framework; thanks to the framework's fast detection speed, most of the yellow stop lines in the image are identified at this stage. Only the yellow stop lines affected by adverse environmental factors remain in the non-foreground region, which occupies a comparatively small portion of the image; although convolutional identification is relatively slow, it therefore does not consume much time, and it identifies the yellow stop lines in the non-foreground region accurately. Combining the two identification stages achieves fast and accurate identification of yellow stop lines in the image.
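Tying the two stages together, the sketch below combines the helper functions from the earlier sketches into one pass over an image; the `detector` argument is a hypothetical wrapper around the single-class yolo framework returning (box, confidence) pairs, not an API defined by the patent.

```python
def identify_yellow_stop_lines(image, detector, cnn_model, confidence_threshold=0.5):
    """Two-stage identification sketch built from the earlier helper functions."""
    results = []
    foreground_boxes = extract_foreground_regions(detector(image), confidence_threshold)

    # Stage 1: fast color-space + line-fit check inside each foreground region.
    for (x, y, w, h) in foreground_boxes:
        if find_yellow_stop_line(image[y:y + h, x:x + w]) is not None:
            results.append((x, y, w, h))

    # Stage 2: CNN classification of the remaining (non-foreground) patches,
    # which stays cheap because the non-foreground area is comparatively small.
    # Crop the non-foreground patches, batch them as a tensor, and call
    # classify_non_foreground_regions(cnn_model, patches) as sketched above.
    return results
```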
In the embodiments of the present invention, an image to be classified related to ground markings is first acquired so that yellow stop lines can be identified in it; a foreground region is then extracted from the image with a yolo object detection framework, yielding candidate boxes suspected to contain a yellow stop line and quickly and accurately locating the regions of the image most likely to contain one; the yellow stop line in the foreground region is then identified based on a color space, quickly recovering part of the yellow stop lines in the image; finally, yellow stop lines in the non-foreground region are identified with a convolutional neural network model, recovering the yellow stop lines that the yolo object detection framework failed to detect in the first pass. The embodiments of the present invention thus achieve near-real-time identification of yellow stop lines while maintaining identification accuracy.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not limit the implementation of the embodiments of the present invention.
Based on the yellow stop line identification method provided in the embodiments, an embodiment of the present invention further provides a device embodiment that implements the steps of the above method embodiment.
Fig. 2 shows a schematic block diagram of the yellow stop line identification device, which corresponds one-to-one to the yellow stop line identification method of the embodiment. As shown in fig. 2, the device includes an image-to-be-classified acquisition module 10, a foreground region acquisition module 20, a foreground region yellow stop line acquisition module 30 and a non-foreground region yellow stop line acquisition module 40. The functions of these modules correspond one-to-one to the steps of the yellow stop line identification method in the embodiment, so a detailed description is omitted here.
The image-to-be-classified acquisition module 10 is configured to acquire an image to be classified, the image to be classified being related to ground markings.
The foreground region acquisition module 20 is configured to extract a foreground region from the image to be classified with a yolo object detection framework, where the detection result of the framework is expressed as candidate boxes, a candidate box whose detected class is the yellow stop line is the foreground region, and the yellow stop line is one type of ground marking.
The foreground region yellow stop line acquisition module 30 is configured to identify the yellow stop line in the foreground region based on a color space to obtain the yellow stop line in the foreground region.
The non-foreground region yellow stop line acquisition module 40 is configured to identify yellow stop lines in the non-foreground region of the image to be classified with a convolutional neural network model to obtain the yellow stop lines of the non-foreground region, where the non-foreground region refers to the image region outside the foreground region in the image to be classified.
Optionally, the foreground region acquisition module 20 includes a detection mode transition unit, a target confidence calculation unit and a foreground region acquisition unit.
The detection mode transition unit is configured to set the yolo object detection framework to a detection mode that detects only the yellow stop line.
The target confidence calculation unit is configured to calculate, with the yolo object detection framework in this detection mode, a target confidence for each detection region of the image to be classified, where a detection region is a small image patch obtained by segmenting the image to be classified in advance and each image patch represents one detection region.
The foreground region acquisition unit is configured to compare the target confidence with a preset confidence threshold and obtain the foreground region from the detection regions whose target confidence is higher than the confidence threshold.
Optionally, the foreground region yellow stop line acquisition module 30 includes a color space determination unit, a target color judgment unit and a foreground region yellow stop line acquisition unit.
The color space determination unit is configured to convert the foreground region into the HSV color space and determine the color space in which the foreground region lies.
The target color judgment unit is configured to judge whether the target color is present in the color space of the foreground region and, if so, perform a straight-line fit on the foreground region with the least squares method based on the target color present in the foreground region, the target color being yellow.
The foreground region yellow stop line acquisition unit is configured to obtain the yellow stop line in the foreground region from the result of the straight-line fit.
Optionally, the yellow stop line identification device further includes a training sample acquisition unit, an initialization unit and a convolutional neural network model acquisition unit.
The training sample acquisition unit is configured to obtain training samples, the training samples including training pictures of yellow stop lines.
The initialization unit is configured to initialize a convolutional neural network.
The convolutional neural network model acquisition unit is configured to input the training samples into the initialized convolutional neural network for training to obtain the convolutional neural network model, the convolutional neural network model being used to identify yellow stop lines.
Optionally, the non-foreground region yellow stop line acquisition module 40 includes a non-foreground region feature vector extraction unit, a classification probability calculation unit and a non-foreground region yellow stop line acquisition unit.
The non-foreground region feature vector extraction unit is configured to extract feature vectors of the non-foreground region with the convolutional neural network model.
The classification probability calculation unit is configured to calculate, based on the feature vectors, the classification probability of the yellow stop line in the convolutional neural network model.
The non-foreground region yellow stop line acquisition unit is configured to determine a non-foreground region whose classification probability for the yellow stop line is greater than a preset classification threshold to be a yellow stop line.
In the embodiments of the present invention, an image to be classified related to ground markings is first acquired so that yellow stop lines can be identified in it; a foreground region is then extracted from the image with a yolo object detection framework, yielding candidate boxes suspected to contain a yellow stop line and quickly and accurately locating the regions of the image most likely to contain one; the yellow stop line in the foreground region is then identified based on a color space, quickly recovering part of the yellow stop lines in the image; finally, yellow stop lines in the non-foreground region are identified with a convolutional neural network model, recovering the yellow stop lines that the yolo object detection framework failed to detect in the first pass. The embodiments of the present invention thus achieve near-real-time identification of yellow stop lines while maintaining identification accuracy.
This embodiment provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the yellow stop line identification method of the embodiment; to avoid repetition, the details are not described again here. Alternatively, when executed by the processor, the computer program implements the functions of the modules/units of the yellow stop line identification device of the embodiment, which are likewise not repeated here.
Fig. 3 is a schematic diagram of a computer device according to an embodiment of the present invention. As shown in fig. 3, the computer device 50 of this embodiment includes a processor 51, a memory 52 and a computer program 53 stored in the memory 52 and executable on the processor 51. When executed by the processor 51, the computer program 53 implements the yellow stop line identification method of the embodiment, or the functions of the modules/units of the yellow stop line identification device of the embodiment; to avoid repetition, these are not described again here.
The computer device 50 may be a desktop computer, a notebook computer, a palm top computer, a cloud server, or the like. Computer device 50 may include, but is not limited to, a processor 51, a memory 52. It will be appreciated by those skilled in the art that fig. 3 is merely an example of computer device 50 and is not intended to limit computer device 50, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., a computer device may also include an input-output device, a network access device, a bus, etc.
The processor 51 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 52 may be an internal storage unit of the computer device 50, such as a hard disk or memory of the computer device 50. The memory 52 may also be an external storage device of the computer device 50, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the computer device 50. Further, the memory 52 may also include both internal storage units and external storage devices of the computer device 50. The memory 52 is used to store computer programs and other programs and data required by the computer device. The memory 52 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (6)

1. A yellow stop line identification method, the method comprising:
acquiring an image to be classified, wherein the image to be classified is related to ground markings;
extracting a foreground region from the image to be classified with a yolo object detection framework, wherein the detection result of the yolo object detection framework is expressed as candidate boxes, a candidate box whose detected class is the yellow stop line is the foreground region, and the yellow stop line is one type of ground marking;
identifying the yellow stop line in the foreground region based on a color space to obtain the yellow stop line in the foreground region;
identifying yellow stop lines in a non-foreground region of the image to be classified with a convolutional neural network model to obtain the yellow stop lines of the non-foreground region, wherein the non-foreground region is the image region outside the foreground region in the image to be classified;
wherein extracting the foreground region from the image to be classified with the yolo object detection framework comprises:
setting the yolo object detection framework to a detection mode that detects only the yellow stop line;
calculating, with the yolo object detection framework in the detection mode, a target confidence for each detection region of the image to be classified, wherein the detection regions are small image patches obtained by segmenting the image to be classified in advance, and each image patch represents one detection region;
comparing the target confidence with a preset confidence threshold, and obtaining the foreground region from the detection regions whose target confidence is higher than the confidence threshold;
and wherein identifying the yellow stop line in the foreground region based on the color space to obtain the yellow stop line in the foreground region comprises:
converting the foreground region into the HSV color space, and determining the color space in which the foreground region lies;
judging whether the target color is present in the color space of the foreground region and, if so, performing a straight-line fit on the foreground region with the least squares method based on the target color present in the foreground region, wherein the target color is yellow;
and obtaining the yellow stop line in the foreground region from the result of the straight-line fit.
2. The method of claim 1, wherein, before the step of identifying yellow stop lines in the non-foreground region of the image to be classified with a convolutional neural network model, the method further comprises:
obtaining training samples, the training samples comprising training pictures of yellow stop lines;
initializing a convolutional neural network;
and inputting the training samples into the initialized convolutional neural network for training to obtain the convolutional neural network model, the convolutional neural network model being used to identify yellow stop lines.
3. The method according to claim 1 or 2, wherein identifying yellow stop lines in the non-foreground region of the image to be classified with a convolutional neural network model comprises:
extracting feature vectors of the non-foreground region with the convolutional neural network model;
calculating, based on the feature vectors, the classification probability of the yellow stop line in the convolutional neural network model;
and determining a non-foreground region whose classification probability for the yellow stop line is greater than a preset classification threshold to be a yellow stop line.
4. A yellow stop line identification device, the device comprising:
an image-to-be-classified acquisition module, configured to acquire an image to be classified, the image to be classified being related to ground markings;
a foreground region acquisition module, configured to extract a foreground region from the image to be classified with a yolo object detection framework, wherein the detection result of the yolo object detection framework is expressed as candidate boxes, a candidate box whose detected class is the yellow stop line is the foreground region, and the yellow stop line is one type of ground marking;
a foreground region yellow stop line acquisition module, configured to identify the yellow stop line in the foreground region based on a color space to obtain the yellow stop line in the foreground region;
a non-foreground region yellow stop line acquisition module, configured to identify yellow stop lines in a non-foreground region of the image to be classified with a convolutional neural network model to obtain the yellow stop lines of the non-foreground region, wherein the non-foreground region refers to the image region outside the foreground region in the image to be classified;
wherein the foreground region acquisition module comprises:
a detection mode transition unit, configured to set the yolo object detection framework to a detection mode that detects only the yellow stop line;
a target confidence calculation unit, configured to calculate, with the yolo object detection framework in the detection mode, a target confidence for each detection region of the image to be classified, wherein the detection regions are small image patches obtained by segmenting the image to be classified in advance, and each image patch represents one detection region;
a foreground region acquisition unit, configured to compare the target confidence with a preset confidence threshold and obtain the foreground region from the detection regions whose target confidence is higher than the confidence threshold;
and wherein the foreground region yellow stop line acquisition module comprises:
a color space determination unit, configured to convert the foreground region into the HSV color space and determine the color space in which the foreground region lies;
a target color judgment unit, configured to judge whether the target color is present in the color space of the foreground region and, if so, perform a straight-line fit on the foreground region with the least squares method based on the target color present in the foreground region, the target color being yellow;
and a foreground region yellow stop line acquisition unit, configured to obtain the yellow stop line in the foreground region from the result of the straight-line fit.
5. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the yellow stop line identification method according to any one of claims 1 to 3 when the computer program is executed.
6. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the yellow stop line identification method according to any one of claims 1 to 3.
CN201910062723.7A 2019-01-23 2019-01-23 Yellow stop line identification method and device, computer equipment and storage medium Active CN109919002B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910062723.7A CN109919002B (en) 2019-01-23 2019-01-23 Yellow stop line identification method and device, computer equipment and storage medium
PCT/CN2019/115947 WO2020151299A1 (en) 2019-01-23 2019-11-06 Yellow no-parking line identification method and apparatus, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910062723.7A CN109919002B (en) 2019-01-23 2019-01-23 Yellow stop line identification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109919002A CN109919002A (en) 2019-06-21
CN109919002B true CN109919002B (en) 2024-02-27

Family

ID=66960527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910062723.7A Active CN109919002B (en) 2019-01-23 2019-01-23 Yellow stop line identification method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN109919002B (en)
WO (1) WO2020151299A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919002B (en) * 2019-01-23 2024-02-27 平安科技(深圳)有限公司 Yellow stop line identification method and device, computer equipment and storage medium
CN111008672B (en) * 2019-12-23 2022-06-10 腾讯科技(深圳)有限公司 Sample extraction method, sample extraction device, computer-readable storage medium and computer equipment
CN111325716B (en) * 2020-01-21 2023-09-01 上海万物新生环保科技集团有限公司 Screen scratch and fragmentation detection method and equipment
CN112348905B (en) * 2020-10-30 2023-12-19 深圳市优必选科技股份有限公司 Color recognition method and device, terminal equipment and storage medium
CN112418061B (en) * 2020-11-19 2024-01-23 城云科技(中国)有限公司 Vehicle forbidden region determining method and system
CN112613344B (en) * 2020-12-01 2024-04-16 浙江华锐捷技术有限公司 Vehicle track occupation detection method, device, computer equipment and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116985A (en) * 2013-01-21 2013-05-22 信帧电子技术(北京)有限公司 Detection method and device of parking against rules
CN107122776A (en) * 2017-04-14 2017-09-01 重庆邮电大学 A kind of road traffic sign detection and recognition methods based on convolutional neural networks
CN107134144A (en) * 2017-04-27 2017-09-05 武汉理工大学 A kind of vehicle checking method for traffic monitoring
CN107358242A (en) * 2017-07-11 2017-11-17 浙江宇视科技有限公司 Target area color identification method, device and monitor terminal
CN107480730A (en) * 2017-09-05 2017-12-15 广州供电局有限公司 Power equipment identification model construction method and system, the recognition methods of power equipment
CN107633229A (en) * 2017-09-21 2018-01-26 北京智芯原动科技有限公司 Method for detecting human face and device based on convolutional neural networks
CN107679508A (en) * 2017-10-17 2018-02-09 广州汽车集团股份有限公司 Road traffic sign detection recognition methods, apparatus and system
JP2018092501A (en) * 2016-12-07 2018-06-14 クラリオン株式会社 On-vehicle image processing apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6702849B2 (en) * 2016-12-22 2020-06-03 株式会社Soken Marking line recognition device
US10242282B2 (en) * 2017-03-20 2019-03-26 Conduent Business Services, Llc Video redaction method and system
CN108229421B (en) * 2018-01-24 2021-07-02 华中科技大学 Depth video information-based method for detecting falling-off from bed in real time
CN109241904B (en) * 2018-08-31 2023-10-20 平安科技(深圳)有限公司 Character recognition model training, character recognition method, device, equipment and medium
CN109919002B (en) * 2019-01-23 2024-02-27 平安科技(深圳)有限公司 Yellow stop line identification method and device, computer equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116985A (en) * 2013-01-21 2013-05-22 信帧电子技术(北京)有限公司 Detection method and device of parking against rules
JP2018092501A (en) * 2016-12-07 2018-06-14 クラリオン株式会社 On-vehicle image processing apparatus
CN107122776A (en) * 2017-04-14 2017-09-01 重庆邮电大学 A kind of road traffic sign detection and recognition methods based on convolutional neural networks
CN107134144A (en) * 2017-04-27 2017-09-05 武汉理工大学 A kind of vehicle checking method for traffic monitoring
CN107358242A (en) * 2017-07-11 2017-11-17 浙江宇视科技有限公司 Target area color identification method, device and monitor terminal
CN107480730A (en) * 2017-09-05 2017-12-15 广州供电局有限公司 Power equipment identification model construction method and system, the recognition methods of power equipment
CN107633229A (en) * 2017-09-21 2018-01-26 北京智芯原动科技有限公司 Method for detecting human face and device based on convolutional neural networks
CN107679508A (en) * 2017-10-17 2018-02-09 广州汽车集团股份有限公司 Road traffic sign detection recognition methods, apparatus and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the application of color and morphology in traffic sign recognition; Jiang Xuening et al.; 测绘与空间地理信息; 2014-12-25; Vol. 37, No. 12; pp. 17-21 *

Also Published As

Publication number Publication date
WO2020151299A1 (en) 2020-07-30
CN109919002A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN109919002B (en) Yellow stop line identification method and device, computer equipment and storage medium
CN108229509B (en) Method and device for identifying object class and electronic equipment
TWI744283B (en) Method and device for word segmentation
CN110020592B (en) Object detection model training method, device, computer equipment and storage medium
US11455805B2 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN108960211B (en) Multi-target human body posture detection method and system
US8983200B2 (en) Object segmentation at a self-checkout
US8792722B2 (en) Hand gesture detection
CN110838126B (en) Cell image segmentation method, cell image segmentation device, computer equipment and storage medium
CN106709500B (en) Image feature matching method
CN110569782A (en) Target detection method based on deep learning
CN112651953B (en) Picture similarity calculation method and device, computer equipment and storage medium
CN112749673A (en) Method and device for intelligently extracting stock of oil storage tank based on remote sensing image
CN110599516A (en) Moving target detection method and device, storage medium and terminal equipment
CN107862680A (en) A kind of target following optimization method based on correlation filter
CN103093241B (en) Based on the remote sensing image nonuniformity cloud layer method of discrimination of homogeneity process
CN112241736A (en) Text detection method and device
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN111402185A (en) Image detection method and device
CN103778629A (en) Background model real-time updating method for non-coherent radar image
CN113378837A (en) License plate shielding identification method and device, electronic equipment and storage medium
CN112101148A (en) Moving target detection method and device, storage medium and terminal equipment
Kim et al. Background modeling using adaptive properties of hybrid features
CN111523583A (en) Method for automatically identifying and classifying equipment nameplate photos by using unmanned aerial vehicle
CN116245866B (en) Mobile face tracking method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant