WO2020151299A1 - 黄色禁停线识别方法、装置、计算机设备及存储介质 (Yellow no-stop line identification method, device, computer equipment and storage medium) - Google Patents

黄色禁停线识别方法、装置、计算机设备及存储介质 (Yellow no-stop line identification method, device, computer equipment and storage medium)

Info

Publication number
WO2020151299A1
WO2020151299A1 (PCT/CN2019/115947, CN2019115947W)
Authority
WO
WIPO (PCT)
Prior art keywords
yellow
foreground area
image
line
classified
Prior art date
Application number
PCT/CN2019/115947
Other languages
English (en)
French (fr)
Inventor
巢中迪
庄伯金
王少军
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020151299A1 publication Critical patent/WO2020151299A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Definitions

  • This application relates to the field of artificial intelligence, and in particular to a yellow no-stop line identification method and device, computer equipment, and a storage medium.
  • The yellow no-stop line is a no-parking line marked in yellow. It is one type of ground marking and includes the yellow no-stop line painted along the road edge and the yellow no-stop line painted as a grid on the road surface.
  • At present, under unfavorable environmental factors such as lighting, standing water, and corrosion, identifying the yellow no-stop line takes a long time, which cannot meet the near-real-time detection requirements encountered in real life.
  • In view of this, the embodiments of the present application provide a yellow no-stop line identification method, device, computer equipment, and storage medium to solve the problem that the yellow no-stop line cannot be identified accurately and in near real time under the influence of the external environment.
  • In a first aspect, an embodiment of the present application provides a yellow no-stop line identification method, including:
  • acquiring an image to be classified, where the image to be classified is related to ground markings;
  • extracting a foreground area from the image to be classified using the yolo object detection framework, where the detection result of the yolo object detection framework is expressed in the form of candidate frames, a candidate frame whose category in the detection result is the yellow no-stop line is the foreground area, and the yellow no-stop line is one type of ground marking;
  • identifying the yellow no-stop line in the foreground area based on a color space to obtain the yellow no-stop line in the foreground area;
  • identifying the yellow no-stop line in the non-foreground area of the image to be classified using a convolutional neural network model to obtain the yellow no-stop line in the non-foreground area, where the non-foreground area refers to the image area outside the foreground area in the image to be classified.
  • In a second aspect, an embodiment of the present application provides a yellow no-stop line identification device, including:
  • a first acquisition module configured to acquire an image to be classified, where the image to be classified is related to ground markings;
  • a second acquisition module configured to extract a foreground area from the image to be classified using the yolo object detection framework, where the detection result of the yolo object detection framework is expressed in the form of candidate frames, a candidate frame whose category in the detection result is the yellow no-stop line is the foreground area, and the yellow no-stop line is one type of ground marking;
  • a third acquisition module configured to identify the yellow no-stop line in the foreground area based on a color space to obtain the yellow no-stop line in the foreground area;
  • a fourth acquisition module configured to identify the yellow no-stop line in the non-foreground area of the image to be classified using a convolutional neural network model to obtain the yellow no-stop line in the non-foreground area, where the non-foreground area refers to the image area outside the foreground area in the image to be classified.
  • In a third aspect, an embodiment of the present application provides a computer device including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the steps of the foregoing yellow no-stop line identification method when executing the computer-readable instructions.
  • In a fourth aspect, an embodiment of the present application provides a non-volatile computer-readable storage medium comprising computer-readable instructions which, when executed by a processor, perform the yellow no-stop line identification method of any implementation of the first aspect.
  • In the embodiments of the present application, the image to be classified is first obtained so that the yellow no-stop line can be identified from an image related to ground markings; the yolo object detection framework is then used to extract the foreground area from the image to be classified, and the candidate frames suspected of containing the yellow no-stop line quickly and accurately determine the foreground area of the image that has a high probability of including the yellow no-stop line; next, the yellow no-stop line in the foreground area is identified based on the color space to obtain the yellow no-stop line in the foreground area, so that part of the yellow no-stop line in the image to be classified is determined quickly; finally, a convolutional neural network model is used to identify the yellow no-stop line in the non-foreground area of the image to be classified to obtain the yellow no-stop line in the non-foreground area, so that yellow no-stop lines missed by the initial yolo object detection framework are also recognized.
  • On the premise of ensuring recognition accuracy, the embodiments of the present application achieve near-real-time recognition of the yellow no-stop line.
  • Fig. 1 is a flowchart of a yellow no-stop line identification method in an embodiment of the present application;
  • Fig. 2 is a schematic diagram of a yellow no-stop line identification device in an embodiment of the present application;
  • Fig. 3 is a schematic diagram of a computer device in an embodiment of the present application.
  • It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe preset ranges and the like, these preset ranges should not be limited by these terms. These terms are only used to distinguish the preset ranges from one another.
  • For example, without departing from the scope of the embodiments of the present application, a first preset range may also be referred to as a second preset range, and similarly, a second preset range may also be referred to as a first preset range.
  • Depending on the context, the word "if" as used herein may be interpreted as "at the time of", "when", "in response to determining", or "in response to detecting".
  • Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)".
  • Fig. 1 shows a flowchart of the yellow no-stop line identification method in this embodiment.
  • The yellow no-stop line identification method can be applied to a yellow no-stop line identification system, which can be used to identify the yellow no-stop line among road markings.
  • The yellow no-stop line identification system can be applied to computer equipment, where the computer equipment is a device capable of human-computer interaction with a user, including but not limited to computers, smartphones, and tablets.
  • As shown in Fig. 1, the yellow no-stop line identification method includes the following steps:
  • S10 Acquire an image to be classified, where the image to be classified is related to ground markings.
  • Understandably, while the vehicle is traveling, the vehicle-mounted yellow no-stop line recognition system obtains images related to ground markings in real time through a camera device; such an image is the image to be classified.
  • In an embodiment, the yellow no-stop line recognition system acquires the image to be classified so that, when a yellow no-stop line appears in the image to be classified, it can quickly identify the yellow no-stop line and make a preset response.
  • S20 Use the yolo object detection framework to extract the foreground area from the image to be classified, where the detection result of the yolo object detection framework is expressed in the form of candidate frames, a candidate frame whose category in the detection result is the yellow no-stop line is the foreground area, and the yellow no-stop line is one type of ground marking.
  • The yolo (You Only Look Once) object detection framework is a model that can both detect objects and classify them.
  • A candidate frame is a marked box suspected of containing a yellow no-stop line.
  • In an embodiment, the yolo object detection framework is used to extract the foreground area from the image to be classified.
  • The yolo object detection framework is obtained through training in advance, where the training samples for training the yolo object detection framework include various types of ground markings, and the yellow no-stop line is one of these training samples.
  • In particular, the ratio of yellow no-stop line samples to other ground-marking training samples can be 1:1; using training samples in this ratio can effectively prevent overfitting of the yolo object detection framework and improve its detection accuracy and classification accuracy.
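  • As a minimal sketch of the 1:1 sampling idea above (the function and its file-list inputs are hypothetical, not part of the patent), a balanced training set could be assembled as follows:

```python
import random

def balance_training_samples(yellow_samples, other_samples, seed=0):
    """Keep yellow no-stop-line samples and other ground-marking samples
    at a 1:1 ratio by downsampling the larger list (illustrative only)."""
    random.seed(seed)
    n = min(len(yellow_samples), len(other_samples))
    return random.sample(yellow_samples, n) + random.sample(other_samples, n)
```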
  • Understandably, when a yellow no-stop line appears in the image to be classified, the yolo object detection framework quickly determines the region of the image that has a high probability of containing the yellow no-stop line; this region is the foreground area.
  • Further, step S20, in which the yolo object detection framework is used to extract the foreground area from the image to be classified, specifically includes:
  • S21 Set the yolo object detection framework to a detection mode that detects only the yellow no-stop line.
  • Understandably, in a usual scene the yolo object detection framework is used to detect objects and classify them. Considering that the embodiments of the present application only need to detect the yellow no-stop line, and that detecting and classifying ground markings other than the yellow no-stop line is unnecessary, the yolo object detection framework can be set to a single-detection mode for the yellow no-stop line, where this setting is implemented by the yellow no-stop line recognition system.
  • Specifically, when training the yolo object detection framework, the training samples are divided into yellow no-stop line training samples and non-yellow no-stop line training samples, where the non-yellow no-stop line training samples may include any samples other than the yellow no-stop line and are not limited to ground markings; in addition, the ratio of yellow no-stop line training samples to non-yellow no-stop line training samples is likewise set to 1:1 to prevent the trained yolo object detection framework from overfitting.
  • After the yolo object detection framework has been trained with these training samples, it is stored in the yellow no-stop line recognition system in the form of a file; the system can call this file at any time to set the yolo object detection framework to the single-detection mode for the yellow no-stop line.
  • In an embodiment, setting the yolo object detection framework to the single-detection mode for the yellow no-stop line means that only the yellow no-stop line needs to be considered during detection while everything else is grouped into one non-yellow-no-stop-line category, which helps determine the position of the yellow no-stop line in the image to be classified and further improves processing efficiency.
  • S22 Use the yolo object detection framework in this detection mode to calculate the target confidence of each detection area in the image to be classified, where a detection area is a small image block obtained by pre-segmenting the image to be classified, and each small image block represents one detection area.
  • The target confidence indicates the degree to which the yellow no-stop line is likely to fall within the detection area.
  • In an embodiment, the image to be classified first needs to be cut before the target confidence of the detection areas is calculated.
  • Specifically, when the image to be classified is square, it can be divided into s*s small image blocks, each small block corresponding to one detection area; when the image to be classified is rectangular, it can likewise be divided into a*b small image blocks of equal size as the detection areas.
  • In general, the image to be classified is cut into blocks of equal size to obtain the detection areas, which improves the continuity of edge features between different detection areas during detection and increases the probability that the subsequently obtained foreground area includes the yellow no-stop line.
  • In an embodiment, after the detection areas are determined, the detection mode that detects only the yellow no-stop line is used to calculate the target confidence of each detection area in the image to be classified, so that the foreground area can be determined from the target confidence.
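  • The grid cutting described in S22 can be sketched roughly as follows; this is an illustrative NumPy snippet, not the patent's implementation, and it assumes the image dimensions are divisible by the grid size:

```python
import numpy as np

def split_into_detection_areas(image, s):
    """Cut an H x W (x C) image into an s x s grid of equal-sized blocks;
    each block corresponds to one detection area."""
    h, w = image.shape[:2]
    bh, bw = h // s, w // s
    return [image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            for i in range(s) for j in range(s)]
```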
  • S23 Compare the target confidence with a preset confidence threshold, and obtain the foreground area from the detection areas whose target confidence is higher than the confidence threshold.
  • In steps S21-S23, using the single-detection mode for the yellow no-stop line helps determine the position of the yellow no-stop line in the image to be classified; the yolo object detection framework in this detection mode calculates the target confidence of each detection area from the image to be classified, and the target confidence is compared with the preset confidence threshold to obtain the foreground area. This increases the probability that the foreground area includes the yellow no-stop line, so that most of the yellow no-stop lines in the image to be classified are already identified at the yolo object detection stage.
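  • A minimal sketch of S23's thresholding step, assuming a detector that returns (box, class_name, confidence) tuples; the function name, tuple format, and threshold value are assumptions, not taken from the patent:

```python
def extract_foreground_areas(detections, confidence_threshold=0.5,
                             target_class="yellow_no_stop_line"):
    """Keep candidate frames whose class is the yellow no-stop line and
    whose target confidence exceeds the preset threshold; the kept boxes
    together form the foreground area."""
    return [box for box, cls, conf in detections
            if cls == target_class and conf > confidence_threshold]
```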
  • S30 Identify the yellow no-stop line in the foreground area based on a color space to obtain the yellow no-stop line in the foreground area.
  • A color space, also called a color model or color system, is used to describe colors in a generally accepted way under certain standards.
  • In an embodiment, since the yellow no-stop line differs in color from other ground markings and objects, the color space can be used to identify the yellow no-stop line in the foreground area and obtain the yellow no-stop line from the foreground area.
  • Using the color space allows the yellow no-stop line to be distinguished from other objects by its own characteristics, so that the yellow no-stop line can be obtained from the foreground area quickly and accurately.
  • Further, step S30, in which the yellow no-stop line in the foreground area is identified based on the color space to obtain the yellow no-stop line in the foreground area, specifically includes:
  • S31 Convert the foreground area to the HSV color space, and determine the color space in which the foreground area is expressed.
  • In an embodiment, the HSV color space may specifically be used; compared with other color spaces, it identifies the yellow no-stop line more effectively and with higher accuracy.
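  • A minimal OpenCV sketch of the S31 conversion plus a yellow check; the HSV bounds below are illustrative assumptions, since the patent does not specify numeric ranges:

```python
import cv2
import numpy as np

def yellow_mask_hsv(foreground_bgr):
    """Convert a foreground region (BGR, as read by OpenCV) to HSV and
    return a binary mask of pixels falling in an assumed yellow range."""
    hsv = cv2.cvtColor(foreground_bgr, cv2.COLOR_BGR2HSV)
    lower_yellow = np.array([20, 80, 80])    # assumed lower H, S, V bounds
    upper_yellow = np.array([35, 255, 255])  # assumed upper H, S, V bounds
    return cv2.inRange(hsv, lower_yellow, upper_yellow)
```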
  • S32 Determine whether the target color exists in the color space of the foreground area; if so, perform a straight-line fit in the foreground area using the least squares method based on the target color present in the foreground area, where the target color is yellow.
  • Understandably, in addition to its color characteristics, the yellow no-stop line must also satisfy a straight-line relationship, so the yellow no-stop line in the foreground area can be further determined according to this straight-line requirement, excluding non-linear objects that merely have yellow characteristics.
  • In an embodiment, when yellow exists in the foreground area, the least squares method is used to fit the yellow present in the foreground area to a straight line.
  • Specifically, a rectangular coordinate system can be established with image pixels as the smallest unit; the position of each representative yellow pixel in the foreground area is expressed as coordinates, and a straight line is then fitted with the least squares method according to these coordinates, so that the yellow no-stop line in the foreground area is identified.
  • This embodiment makes a comprehensive judgment by considering both the color and shape characteristics of the yellow no-stop line, which can effectively improve the accuracy of identifying the yellow no-stop line in the foreground area.
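  • A least-squares line fit on the yellow pixel coordinates might look like the following sketch; the residual threshold and the helper name are assumptions, and near-vertical lines would need x fitted as a function of y instead:

```python
import numpy as np

def fit_line_to_yellow_pixels(yellow_mask, max_mean_residual=2.0):
    """Fit a straight line y = slope * x + intercept to the coordinates of
    yellow pixels with ordinary least squares, and report whether the
    pixels are roughly collinear."""
    ys, xs = np.nonzero(yellow_mask)              # coordinates of yellow pixels
    if len(xs) < 2:
        return False, None
    slope, intercept = np.polyfit(xs, ys, deg=1)  # least-squares fit
    residuals = np.abs(ys - (slope * xs + intercept))
    return residuals.mean() <= max_mean_residual, (slope, intercept)
```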
  • S33 Obtain the yellow no-stop line in the foreground area according to the result of the straight-line fitting.
  • In an embodiment, when the straight-line fitting result holds, the yellow no-stop line in the foreground area is obtained.
  • In steps S31-S33, the characteristics of the yellow no-stop line are considered comprehensively, and the color space and the straight-line fitting method are used to identify the yellow no-stop line in the foreground area; identifying the yellow no-stop line in the foreground area with this method achieves a high accuracy rate.
  • S40 Use the convolutional neural network model to identify the yellow no-stop line in the non-foreground area of the image to be classified to obtain the yellow no-stop line in the non-foreground area, where the non-foreground area refers to the image area outside the foreground area in the image to be classified.
  • Understandably, the foreground area is only the area of the image to be classified where the yellow no-stop line is most likely to appear; it does not mean that there is no yellow no-stop line in the non-foreground area.
  • Under unfavorable environmental factors such as lighting, standing water, and corrosion, the proportion of the yellow no-stop line that lies in the non-foreground area of the image to be classified increases with the severity of those factors; simply put, under unfavorable environmental factors the detection performance of the yolo object detection model is relatively reduced.
  • In an embodiment, a convolutional neural network model may specifically be used to identify the yellow no-stop line in the non-foreground area to obtain the yellow no-stop line in the non-foreground area.
  • A convolutional neural network is a kind of deep neural network that can extract the deep features of the yellow no-stop line and maintain high recognition accuracy even under the influence of unfavorable environmental factors.
  • Compared with the yolo neural network (the neural network used to train the yolo object detection model), the convolutional neural network is relatively slow.
  • Further, before step S40, the method further includes:
  • S411 Obtain training samples, where the training samples include training pictures of the yellow no-stop line.
  • In an embodiment, training pictures including the yellow no-stop line are used as training samples so that the convolutional neural network can learn the deep features of the yellow no-stop line and fully distinguish them from the features of other training samples.
  • S412 Initialize the convolutional neural network.
  • The convolutional neural network includes network parameters, and the network parameters include weights and biases.
  • In an embodiment, the weights with which the convolutional neural network is initialized satisfy a formula (given as an image in the original filing) in which n_l represents the number of training samples input at the l-th layer, S(·) represents the variance operation, W_l represents the weights of the l-th layer, ∀ represents "for any", and l represents the l-th layer of the convolutional neural network; initializing in this way improves the efficiency of training the convolutional neural network model and helps improve its recognition accuracy.
  • There is no required order between the step of initializing the convolutional neural network and step S411; the initialization may be performed either after or before step S411.
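  • The exact initialization formula appears only as an image in the filing; as a hedged stand-in, a common variance-based scheme that ties Var(W_l) to n_l (He initialization, Var(W_l) = 2 / n_l) can be sketched as follows:

```python
import numpy as np

def init_layer_weights(n_inputs, n_outputs, rng=None):
    """Draw one layer's weights from a zero-mean Gaussian whose variance
    depends on the number of inputs n_l; Var(W_l) = 2 / n_l is used here
    as an assumed stand-in for the patent's formula."""
    rng = np.random.default_rng(0) if rng is None else rng
    std = np.sqrt(2.0 / n_inputs)
    return rng.normal(loc=0.0, scale=std, size=(n_inputs, n_outputs))
```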
  • S413 Input the training samples into the initialized convolutional neural network for training to obtain the convolutional neural network model, which is used to identify the yellow no-stop line.
  • In an embodiment, after the required training samples have been obtained and the convolutional neural network has been initialized, the training samples are input into the initialized convolutional neural network for training; the network parameters of the convolutional neural network are updated iteratively according to the training samples, so that the output of the network on the training samples reaches the expected result within the allowable error range, yielding the convolutional neural network model for identifying the yellow no-stop line.
  • Steps S411-S413 provide a method for training the convolutional neural network model; this method speeds up the model-training process and yields a convolutional neural network model with a higher recognition rate.
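  • A minimal training loop in the spirit of S411-S413, assuming PyTorch, a small CNN `model`, and a data loader that yields (image batch, label batch) pairs; none of this is prescribed by the patent:

```python
import torch
import torch.nn as nn

def train_cnn_classifier(model, loader, epochs=10, lr=1e-3, device="cpu"):
    """Iteratively update the network parameters on labeled training
    samples (yellow no-stop line vs. other) over several epochs."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```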
  • Further, in step S40, using the convolutional neural network model to identify the yellow no-stop line in the non-foreground area of the image to be classified specifically includes:
  • S421 Use a convolutional neural network model to extract feature vectors of non-foreground regions.
  • S422 Based on the feature vector, calculate the classification probability of the yellow no-stop line in the convolutional neural network model.
  • S423 Determine a non-foreground area whose classification probability of the yellow no-stop line is greater than a preset classification threshold as the yellow no-stop line.
  • Regarding steps S421-S423, it is understandable that after extracting the feature vector of a non-foreground area, the convolutional neural network model compares it with the deep features of the training samples extracted during training to judge which category in the training samples the feature vector of the non-foreground area belongs to; when the classification probability of the yellow no-stop line is greater than the preset classification threshold, the non-foreground area corresponding to the extracted feature vector can be determined to be the yellow no-stop line.
  • This embodiment thus provides a method for determining yellow no-stop lines, which can effectively determine which non-foreground areas are yellow no-stop lines.
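  • The S421-S423 decision can be sketched as below, assuming a PyTorch model and a tensor for one non-foreground region; the class index and threshold are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def is_yellow_no_stop_line(model, region_tensor, yellow_class_index=1,
                           classification_threshold=0.5):
    """Compute the model's class probabilities for one non-foreground
    region and compare the yellow no-stop-line probability with a preset
    classification threshold."""
    model.eval()
    with torch.no_grad():
        logits = model(region_tensor.unsqueeze(0))  # add a batch dimension
        probs = F.softmax(logits, dim=1)
        return probs[0, yellow_class_index].item() > classification_threshold
```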
  • Understandably, in the implementation of the present application the yolo object detection framework is first used to identify the yellow no-stop lines in the foreground area. Combined with the advantage of the yolo framework's fast detection speed, most of the yellow no-stop lines in the image to be recognized are identified at this stage. The lines that are not recognized because of unfavorable environmental factors are confined to the non-foreground area, which occupies a relatively small portion of the image to be recognized; although convolutional recognition is slower, the small size of the non-foreground area means that little time is spent on it, and the yellow no-stop lines in the non-foreground area can still be identified accurately. By combining the two recognition stages, the yellow no-stop lines in the image to be recognized are identified both quickly and accurately.
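  • Tying the two stages together, a rough end-to-end sketch could look like this; `detector`, the box format, and the `non_foreground_regions` helper are all hypothetical glue, not the patent's code, and the other helpers are the sketches shown earlier in this section:

```python
def identify_yellow_no_stop_lines(image, detector, cnn_model):
    """Stage 1: YOLO-style foreground boxes confirmed with the HSV mask and
    line fit. Stage 2: CNN classification of the remaining regions."""
    results = []
    # Stage 1: fast detection plus color/shape confirmation in the foreground.
    foreground_boxes = extract_foreground_areas(detector(image))
    for (x1, y1, x2, y2) in foreground_boxes:
        mask = yellow_mask_hsv(image[y1:y2, x1:x2])
        is_line, _ = fit_line_to_yellow_pixels(mask)
        if is_line:
            results.append((x1, y1, x2, y2))
    # Stage 2: slower CNN check, limited to the (small) non-foreground area.
    # non_foreground_regions is a hypothetical helper yielding
    # (region_tensor, box) pairs for areas outside the foreground boxes.
    for region_tensor, box in non_foreground_regions(image, foreground_boxes):
        if is_yellow_no_stop_line(cnn_model, region_tensor):
            results.append(box)
    return results
```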
  • In the embodiments of the present application, the image to be classified is first obtained so that the yellow no-stop line can be identified from an image related to ground markings; the yolo object detection framework is then used to extract the foreground area from the image to be classified, and the candidate frames suspected of containing the yellow no-stop line quickly and accurately determine the foreground area of the image that has a high probability of including the yellow no-stop line; next, the yellow no-stop line in the foreground area is identified based on the color space to obtain the yellow no-stop line in the foreground area, so that part of the yellow no-stop line in the image to be classified is determined quickly; finally, a convolutional neural network model is used to identify the yellow no-stop line in the non-foreground area of the image to be classified to obtain the yellow no-stop line in the non-foreground area, so that yellow no-stop lines missed by the initial yolo object detection framework are also recognized.
  • On the premise of ensuring recognition accuracy, the embodiments of the present application achieve near-real-time recognition of the yellow no-stop line.
  • Based on the yellow no-stop line identification method provided in the embodiment, an embodiment of the present application further provides a device embodiment that implements the steps and methods of the foregoing method embodiment.
  • Fig. 2 shows a block diagram of the yellow no-stop line identification device corresponding to the yellow no-stop line identification method of the embodiment.
  • As shown in Fig. 2, the yellow no-stop line identification device includes a first acquisition module 10, a second acquisition module 20, a third acquisition module 30, and a fourth acquisition module 40.
  • The functions implemented by the first acquisition module 10, the second acquisition module 20, the third acquisition module 30, and the fourth acquisition module 40 correspond one to one to the steps of the yellow no-stop line identification method of the embodiment; to avoid repetition, they are not described in detail here.
  • The first acquisition module 10 is used to acquire an image to be classified, where the image to be classified is related to ground markings.
  • The second acquisition module 20 is used to extract the foreground area from the image to be classified using the yolo object detection framework, where the detection result of the yolo object detection framework is expressed in the form of candidate frames, a candidate frame whose category in the detection result is the yellow no-stop line is the foreground area, and the yellow no-stop line is one type of ground marking.
  • The third acquisition module 30 is used to identify the yellow no-stop line in the foreground area based on the color space to obtain the yellow no-stop line in the foreground area.
  • The fourth acquisition module 40 is used to identify the yellow no-stop line in the non-foreground area of the image to be classified using the convolutional neural network model to obtain the yellow no-stop line in the non-foreground area, where the non-foreground area refers to the image area outside the foreground area in the image to be classified.
  • Optionally, the second acquisition module 20 is specifically configured to:
  • set the yolo object detection framework to a detection mode that detects only the yellow no-stop line;
  • use the yolo object detection framework in this detection mode to calculate the target confidence of each detection area in the image to be classified, where a detection area is an image block obtained by pre-segmenting the image to be classified, and each image block represents one detection area;
  • compare the target confidence with a preset confidence threshold, and obtain the foreground area from the detection areas whose target confidence is higher than the confidence threshold.
  • Optionally, the third acquisition module 30 is specifically configured to:
  • convert the foreground area to the HSV color space, and determine the color space in which the foreground area is expressed;
  • determine whether the target color exists in the color space of the foreground area, and if so, perform a straight-line fit in the foreground area using the least squares method based on the target color present in the foreground area, where the target color is yellow;
  • obtain the yellow no-stop line in the foreground area according to the result of the straight-line fitting.
  • Optionally, the yellow no-stop line identification device is further specifically configured to:
  • obtain training samples, where the training samples include training pictures of the yellow no-stop line;
  • initialize the convolutional neural network;
  • input the training samples into the initialized convolutional neural network for training to obtain the convolutional neural network model, where the convolutional neural network model is used to identify the yellow no-stop line.
  • Optionally, the fourth acquisition module 40 is further specifically configured to:
  • use the convolutional neural network model to extract the feature vector of the non-foreground area;
  • based on the feature vector, calculate the classification probability of the yellow no-stop line in the convolutional neural network model;
  • determine a non-foreground area whose classification probability of the yellow no-stop line is greater than the preset classification threshold as the yellow no-stop line.
  • In the embodiments of the present application, the image to be classified is first obtained so that the yellow no-stop line can be identified from an image related to ground markings; the yolo object detection framework is then used to extract the foreground area from the image to be classified, and the candidate frames suspected of containing the yellow no-stop line quickly and accurately determine the foreground area of the image that has a high probability of including the yellow no-stop line; next, the yellow no-stop line in the foreground area is identified based on the color space to obtain the yellow no-stop line in the foreground area, so that part of the yellow no-stop line in the image to be classified is determined quickly; finally, a convolutional neural network model is used to identify the yellow no-stop line in the non-foreground area of the image to be classified to obtain the yellow no-stop line in the non-foreground area, so that yellow no-stop lines missed by the initial yolo object detection framework are also recognized.
  • On the premise of ensuring recognition accuracy, the embodiments of the present application achieve near-real-time recognition of the yellow no-stop line.
  • This embodiment provides a non-volatile computer-readable storage medium storing computer-readable instructions.
  • When the computer-readable instructions are executed by a processor, the yellow no-stop line identification method of the embodiment is implemented; to avoid repetition, details are not repeated here.
  • Alternatively, when the computer-readable instructions are executed by the processor, the functions of the modules/units of the yellow no-stop line identification device of the embodiment are implemented; to avoid repetition, details are not repeated here.
  • Fig. 3 is a schematic diagram of a computer device provided by an embodiment of the present application.
  • As shown in Fig. 3, the computer device 50 of this embodiment includes a processor 51, a memory 52, and computer-readable instructions 53 stored in the memory 52 and executable on the processor 51.
  • When the computer-readable instructions 53 are executed by the processor 51, the yellow no-stop line identification method of the embodiment is implemented; to avoid repetition, details are not repeated here.
  • Alternatively, when the computer-readable instructions 53 are executed by the processor 51, the functions of the modules/units of the yellow no-stop line identification device of the embodiment are implemented; to avoid repetition, details are not repeated here.
  • the computer device 50 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the computer device 50 may include, but is not limited to, a processor 51 and a memory 52.
  • Fig. 3 is only an example of the computer device 50 and does not constitute a limitation on the computer device 50; it may include more or fewer components than shown, or combine certain components, or have different components.
  • computer equipment may also include input and output devices, network access devices, buses, and so on.
  • The so-called processor 51 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 52 may be an internal storage unit of the computer device 50, such as a hard disk or memory of the computer device 50.
  • The memory 52 may also be an external storage device of the computer device 50, such as a plug-in hard disk equipped on the computer device 50, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), and so on.
  • the memory 52 may also include both an internal storage unit of the computer device 50 and an external storage device.
  • the memory 52 is used to store computer readable instructions and other programs and data required by the computer equipment.
  • the memory 52 can also be used to temporarily store data that has been output or will be output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a yellow no-stop line identification method and device, computer equipment, and a storage medium, relating to the field of artificial intelligence. The yellow no-stop line identification method includes: acquiring an image to be classified, the image to be classified being related to ground markings; extracting a foreground area from the image to be classified using the yolo object detection framework, where the detection result of the yolo object detection framework is expressed in the form of candidate frames, a candidate frame whose category in the detection result is the yellow no-stop line is the foreground area, and the yellow no-stop line is one type of ground marking; identifying the yellow no-stop line in the foreground area based on a color space to obtain the yellow no-stop line in the foreground area; and identifying the yellow no-stop line in the non-foreground area of the image to be classified using a convolutional neural network model to obtain the yellow no-stop line in the non-foreground area. This yellow no-stop line identification method achieves accurate, near-real-time identification of the yellow no-stop line under the influence of the external environment.

Description

黄色禁停线识别方法、装置、计算机设备及存储介质
本申请以2019年1月23日提交的申请号为201910062723.7,名称为“黄色禁停线识别方法、装置、计算机设备及存储介质”的中国发明专利申请为基础,并要求其优先权。
【技术领域】
本申请涉及人工智能领域,尤其涉及一种黄色禁停线识别方法、装置、计算机设备及存储介质。
【背景技术】
黄色禁停线是指采用黄色表示的禁止停车线,属于地面标线的一种,包括路沿的黄色禁停线和在路面上以网格形式表示的黄色禁停线。目前,在光照、积水和腐蚀等不利环境因素的影响下黄色禁停线的识别所需时长较长,无法满足实际生活中接近实时的黄色禁停线检测需求。
【发明内容】
有鉴于此,本申请实施例提供了一种黄色禁停线识别方法、装置、计算机设备及存储介质,用以解决在受外部环境影响下无法满足黄色禁停线近实时准确识别的问题。
第一方面,本申请实施例提供了一种黄色禁停线识别方法,包括:
获取待分类图像,所述待分类图像与地面标线相关;
采用yolo物体检测框架从所述待分类图像中提取前景区域,其中,所述yolo物体检测框架的检测结果以候选框的形式表示,检测结果中类别为黄色禁停线的所述候选框为所述前景区域,所述黄色禁停线为所述地面标线中的一种;
基于颜色空间对所述前景区域中的黄色禁停线进行识别,得到所述前景区域中的黄色禁停线;
采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别,得到所述非前景区域中的黄色禁停线,其中,所述非前景区域是指所述待分类图像中所述前景区域以外 的图像区域。
第二方面,本申请实施例提供了一种黄色禁停线识别装置,包括:
第一获取模块,用于获取待分类图像,所述待分类图像与地面标线相关;
第二获取模块,用于采用yolo物体检测框架从所述待分类图像中提取前景区域,其中,所述yolo物体检测框架的检测结果以候选框的形式表示,检测结果中类别为黄色禁停线的所述候选框为所述前景区域,所述黄色禁停线为所述地面标线中的一种;
第三获取模块,用于基于颜色空间对所述前景区域中的黄色禁停线进行识别,得到所述前景区域中的黄色禁停线;
第四获取模块,用于采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别,得到所述非前景区域中的黄色禁停线,其中,所述非前景区域是指所述待分类图像中所述前景区域以外的图像区域。
第三方面,一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现上述黄色禁停线识别方法的步骤。
第四方面,本申请实施例提供了一种计算机非易失性可读存储介质,包括:计算机可读指令,当所述计算机可读指令被所述处理器执行时,用以执行第一方面任一项所述的黄色禁停线识别方法。
在本申请实施例中,首先获取待分类图像,以从与地面标线相关的待分类图像中识别黄色禁停线;接着采用yolo物体检测框架从待分类图像中提取前景区域,得到疑似包括黄色禁停线的候选框,能够快速、精确地确定待分类图像中有较大概率包括黄色禁停线的前景区域;然后基于颜色空间对前景区域中的黄色禁停线进行识别,得到前景区域中的黄色禁停线,能够快速地在待分类图像中确定一部分黄色禁停线;最后,采用卷积神经网络模型对待分类图像中非前景区域的黄色禁停线进行识别,得到所述非前景区域中的黄色禁停线,能够对初次采用yolo物体检测框架检测不出的黄色禁停线也识别出来。本申请实施例在保证识别准确率的前提下,能够实现黄色禁停线的近实时识别。
【附图说明】
为了更清楚地说明本申请实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其它的附图。
图1是本申请一实施例中基于黄色禁停线识别方法的一流程图;
图2是本申请一实施例中基于黄色禁停线识别装置的一示意图;
图3是本申请一实施例中计算机设备的一示意图。
【具体实施方式】
为了更好的理解本申请的技术方案,下面结合附图对本申请实施例进行详细描述。
应当明确,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
在本申请实施例中使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本申请。在本申请实施例和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。
应当理解,本文中使用的术语“和/或”仅仅是一种描述关联对象的相同的字段,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
应当理解,尽管在本申请实施例中可能采用术语第一、第二、第三等来描述预设范围等,但这些预设范围不应限于这些术语。这些术语仅用来将预设范围彼此区分开。例如,在不脱离本申请实施例范围的情况下,第一预设范围也可以被称为第二预设范围,类似地,第二预设范围也可以被称为第一预设范围。
取决于语境,如在此所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”或“响应于检测”。类似地,取决于语境,短语“如果确定”或“如果检测(陈述的条件或事件)”可以被解释成为“当确定时”或“响应于确定”或“当检测(陈述的条件或事件)时”或“响应于检测(陈述的条件或事件)”。
图1示出本实施例中黄色禁停线识别方法的一流程图。该黄色禁停线识别方法可应用在黄色禁停线识别系统中,在从路面标线中识别黄色禁停线可采用该黄色禁停线识别系统进行识别。该黄色禁停线识别系统具体可应用在计算机设备上,其中,该计算机设备是可与用户进行人机交互的设备,包括但不限于电脑、智能手机和平板等设备。如图1所示,该黄色禁停线识别方法包括如下步骤:
S10:获取待分类图像,待分类图像与地面标线相关。
可以理解地,在车辆行驶过程中,车载黄色禁停线识别系统将通过摄像设备实时获取 与地面标线相关的图像,该图像为待分类图像。
在一实施例中,黄色禁停线识别系统获取待分类图像,以在待分类图像中出现黄色禁停线时,能够快速识别黄色禁停线,并做出预设的反应。
S20:采用yolo物体检测框架从待分类图像中提取前景区域,其中,yolo物体检测框架的检测结果以候选框的形式表示,检测结果中类别为黄色禁停线的候选框为前景区域,黄色禁停线为地面标线中的一种。
其中,yolo(You Only Look Once)物体检测框架是一种能够对物体进行检测,并对物体进行分类的模型。候选框表示疑似存在黄色禁停线的标示框。
在一实施例中,采用yolo物体检测框架从待分类图像中提取前景区域。该yolo物体检测框架是预先经过训练得到的,其中,训练该yolo物体检测框架的训练样本包括各种不同类型的地面标线,黄色禁停线是其中的一种训练样本。特别地,黄色禁停线与其他地面标线训练样本的比例可以都是1:1,采用该等比例的训练样本可以有效防止yolo物体检测框架的过拟合现象,提高yolo物体检测框架的检测准确率和分类准确率。
可以理解地,当待分类图像中出现黄色禁停线时,yolo物体检测框架将快速确定待分类图像中有较大概率属于黄色禁停线的区域,该区域也即前景区域。
进一步地,在步骤S20中,采用yolo物体检测框架从待分类图像中提取前景区域,具体包括:
S21:将yolo物体检测框架设置为单一检测黄色禁停线的检测模式。
可以理解地,在通常场景下yolo物体检测框架是用来检测物体并将物体进行分类的,考虑到本申请实施例只需检测黄色禁停线即可,而除黄色禁停线以外的地面标线的检测和分类是不必要的,因此,可以将yolo物体检测框架设置为单一检测黄色禁停线的检测模式,其中,该设置由黄色禁停线识别系统实现。
具体地,在训练yolo物体检测框架时,将训练样本分为黄色禁停线训练样本和非黄色禁停线训练样本,其中非黄色禁停线训练样本可以包括除黄色禁停线的任意样本,不仅仅局限于地面标线,另外,同样将黄色禁停线训练样本和非黄色禁停线训练样本的比例设为1:1,以防止训练得到的yolo物体检测框架出现过拟合现象。在采用该训练样本训练得到的yolo物体检测框架后,将把该yolo物体检测框架以文件形式存储在黄色禁停线识别系统中,系统可以随时调用文件将yolo物体检测框架设置为单一检测黄色禁停线的检测模式。
在一实施例中,将yolo物体检测框架设置为单一检测黄色禁停线的检测模式,能够在检测时只关心黄色禁停线,将其他非黄色禁停线部分归为一类,有助于确定黄色禁停线在待分类 图像的位置,进一步提高处理效率。
S22:采用检测模式的yolo物体检测框架从待分类图像中计算检测区域的目标置信度,其中,检测区域为待分类图像预先分割得到的图像小块,每一个图像小块代表一检测区域。
其中,目标置信度展现的黄色禁停线有一定概率落在检测区域内的程度。
在一实施例中,首先待分类图像在计算检测区域的目标置信度之前需要进行切割,具体地,当待分类图像为正方形时,可以将待分类图像分成s*s的图像小块,每一图像小块块对应一检测区域;当待分类图像为长方形时,同样可以分成等大小的a*b图像小块作为检测区域。一般情况下,待分类图像的切割将采用等大小切割的方式得到检测区域,可以在检测时提高不同检测区域之间边缘特征的联系,提高后续得到的前景区域中包括黄色禁停线的概率。
在一实施例中,在确定检测区域后,采用单一检测黄色禁停线的检测模式计算待分类图像中每一检测区域的目标置信度,以根据目标置信度确定前景区域。
S23:将目标置信度与预设的置信度阈值进行比较,根据目标置信度高于置信度阈值的检测区域得到前景区域。
步骤S21-S23中,采用单一检测黄色禁停线的检测模式有助于确定黄色禁停线在待分类图像的位置,采用检测模式的yolo物体检测框架从待分类图像中计算检测区域的目标置信度,并将目标置信度与预设的置信度阈值进行比较得到前景区域,能够提高前景区域中包括黄色禁停线概率,使得在采用yolo物体检测框架时就已把待分类图像中的大部分黄色禁停线识别了出来。
S30:基于颜色空间对前景区域中的黄色禁停线进行识别,得到前景区域中的黄色禁停线。
其中,颜色空间也称彩色模型(又称彩色空间或彩色系统)它的用途是在某些标准下用通常可接受的方式对彩色加以说明。
在一实施例中,由于黄色禁停线和其他地面标线等物体在颜色上存在差别,可以采用颜色空间对前景区域中的黄色禁停线进行识别,并从前景区域中得到黄色禁停线。采用颜色空间能够借助黄色禁停线本身的特征与其他物体进行区分,可以快速、准确地从前景区域中得到黄色禁停线。
进一步地,在步骤S30中,基于颜色空间对前景区域中的黄色禁停线进行识别,得到前景区域中的黄色禁停线,具体包括:
S31:将前景区域进行HSV颜色空间的转换,确定前景区域所在的颜色空间。
在一实施例中,颜色空间具体可以采用HSV颜色空间,该HSV颜色空间相比其他颜 色空间在识别黄色禁停线上的效果更好,准确率更高。
S32:判断前景区域所在的颜色空间是否存在目标颜色,若存在,则基于前景区域中存在的目标颜色,采用最小二乘法在前景区域进行直线拟合,其中,目标颜色为黄色。
可以理解地,除了黄色禁停线关于颜色上的特征外,黄色禁停线还需满足直线的关系,从而根据该直线的要求更进一步地确定前景区域中的黄色禁停线,排除非直线的具备黄色特征的物体。
在一实施例中,当前景区域中存在的黄色,则采用最小二乘法对前景区域中存在的黄色进行直线拟合,具体地,可以建立直角坐标系,以图像的像素作为最小单位,采用坐标的方式表示前景区域中各个代表黄色像素的位置,再根据代表黄色像素坐标,采用最小二乘法进行直线拟合,进而识别前景区域中的黄色禁停线。本实施例通过考虑黄色禁停线在颜色以及形状上的特征进行综合判断,能够有效提高前景区域中黄色禁停线识别的准确率。
S33:根据直线拟合的结果得到前景区域中的黄色禁停线。
在一实施例中,当直线拟合结果为真,则得到前景区域中的黄色禁停线。
步骤S31-S33中,综合考虑黄色禁停线的特征,采用颜色空间和直线拟合的方法在前景区域中识别得到黄色禁停线,采用该方法识别前景区域中的黄色禁停线的准确率较高。
S40:采用卷积神经网络模型对待分类图像中非前景区域的黄色禁停线进行识别,得到非前景区域中的黄色禁停线,其中,非前景区域是指待分类图像中前景区域以外的图像区域。
可以理解地,前景区域只是待分类图像中黄色禁停线出现概率较大的区域,并不代表非前景区域没有黄色禁停线。在受光照、积水和腐蚀等不利环境因素的影响下,非前景区域内黄色禁停线在待分类图像中的占比将随不利环境因素的影响程度而增大,简单地说,就是在不利环境因素影响下,yolo物体检测模型的检测效果会相对降低。
在一实施例中,具体可以采用卷积神经网络模型对非前景区域的黄色禁停线进行识别,得到非前景区域中的黄色禁停线。卷积神经网络是一种深度神经网络,可以提取黄色禁停线的深层特征,即使在不利环境因素的影响下,仍然保持较高的识别准确率。卷积神经网络与yolo神经网络(训练yolo物体检测模型的神经网络)相比,卷积神经网络的速度会相对慢些。
进一步地,在步骤S40之前,还包括:
S411:获取训练样本,训练样本包括黄色禁停线的训练图片。
在一实施例中,将包括黄色禁停线的训练图片作为训练样本,以让卷积神经网络能够学习黄色禁停线的深层特征,并与其他训练样本的特征充分区分开来。
S412:初始化卷积神经网络。
其中,卷积神经网络包括网络参数,网络参数包括权值和偏置。在一实施例中,令卷积神经网络初始化的权值满足公式(原文以图像 PCTCN2019115947-appb-000001 给出),其中 n_l 表示在第l层输入的训练样本的样本个数,S(·) 表示方差运算,W_l 表示第l层的权值,∀(原文图像 PCTCN2019115947-appb-000002)表示任意,l 表示卷积神经网络中的第l层。采用该初始化方式可以提高卷积神经网络模型训练的效率,有助于提高卷积神经网络模型的识别准确率。该初始化卷积神经网络的步骤与步骤S411没有先后执行的限制,可以在步骤S411之后也可以在之前执行。
S413:将训练样本输入到初始化后的卷积神经网络中进行训练,得到卷积神经网络模型,卷积神经网络模型用于识别黄色禁停线。
在一实施例中,在获取所需的训练样本和初始化卷积神经网络后,将训练样本输入到初始化后的卷积神经网络中进行训练,即可根据训练样本对卷积神经网络中的网络参数进行迭代更新,使训练样本在卷积神经网络中输出的结果在误差允许的范围内达到期望的结果,得到用于识别黄色禁停线的卷积神经网络模型。
步骤S411-S413提供了一种训练卷积神经网络模型方法,采用该方法可以加快模型训练的过程,并得到识别率较高的卷积神经网络模型。
进一步地,在步骤S40中,采用卷积神经网络模型对待分类图像中非前景区域的黄色禁停线进行识别,具体包括:
S421:采用卷积神经网络模型提取非前景区域的特征向量。
S422:基于特征向量,在卷积神经网络模型中计算得到黄色禁停线的分类概率。
S423:将黄色禁停线的分类概率大于预设分类阈值的非前景区域确定为黄色禁停线。
在步骤S421-S423中,可以理解地,卷积神经网络模型在提取非前景区域的特征向量后,需要与训练时提取的训练样本的深层特征进行比较,判断在提取非前景区域的特征向量属于训练样本中的哪一类,当黄色禁停线的分类概率大于预设分类阈值时,则可确定该提取的特征向量所对应的非前景区域为黄色禁停线。在本实施例中,提供了一种确定黄色禁停线的方法,可以有效确定哪些非前景区域为黄色禁停线。
可以理解地,在本申请实施中先采用yolo物体检测框架识别前景区域中的黄色禁停线,结合yolo物体检测框架的检测速度快的优点,使得待识别图像中大部分的黄色禁停线在该阶段就被识别出来,对于部分因不利环境因素影响的而没有识别出来的,只包括在 待识别图像中占比较小的非前景区域中,此时虽采用卷积识别的速度会比较慢,但是由于非前景区域占比较小,因此不会消耗很多时间在识别上,且能够准确识别出非前景区域中的黄色禁停线。通过将两个识别阶段相结合,达到既快速又准确识别待识别图像中黄色禁停线的效果。
在本申请实施例中,首先获取待分类图像,以从与地面标线相关的待分类图像中识别黄色禁停线;接着采用yolo物体检测框架从待分类图像中提取前景区域,得到疑似包括黄色禁停线的候选框,能够快速、精确地确定待分类图像中有较大概率包括黄色禁停线的前景区域;然后基于颜色空间对前景区域中的黄色禁停线进行识别,得到前景区域中的黄色禁停线,能够快速地在待分类图像中确定一部分黄色禁停线;最后,采用卷积神经网络模型对待分类图像中非前景区域的黄色禁停线进行识别,得到非前景区域中的黄色禁停线,能够对初次采用yolo物体检测框架检测不出的黄色禁停线也识别出来。本申请实施例在保证识别准确率的前提下,能够实现黄色禁停线的近实时识别。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
基于实施例中所提供的黄色禁停线识别方法,本申请实施例进一步给出实现上述方法实施例中各步骤及方法的装置实施例。
图2示出与实施例中黄色禁停线识别方法一一对应的黄色禁停线识别装置的原理框图。如图2所示,该黄色禁停线识别装置包括第一获取模块10、第二获取模块20、第三获取模块30和第四获取模块40。其中,第一获取模块10、第二获取模块20、第三获取模块30和第四获取模块40的实现功能与实施例中黄色禁停线识别方法对应的步骤一一对应,为避免赘述,本实施例不一一详述。
第一获取模块10,用于获取待分类图像,待分类图像与地面标线相关。
第二获取模块20,用于采用yolo物体检测框架从待分类图像中提取前景区域,其中,yolo物体检测框架的检测结果以候选框的形式表示,检测结果中类别为黄色禁停线的候选框为前景区域,黄色禁停线为地面标线中的一种。
第三获取模块30,用于基于颜色空间对前景区域中的黄色禁停线进行识别,得到前景区域中的黄色禁停线。
第四获取模块40,用于采用卷积神经网络模型对待分类图像中非前景区域的黄色禁停线进行识别,得到非前景区域中的黄色禁停线,其中,非前景区域是指待分类图像中前景区域以外的图像区域。
可选地,第二获取模块20具体用于:
将yolo物体检测框架设置为单一检测黄色禁停线的检测模式。
采用检测模式的yolo物体检测框架从待分类图像中计算检测区域的目标置信度,其中,检测区域为待分类图像预先分割得到的图像小块,每一个图像小块代表一检测区域。
将目标置信度与预设的置信度阈值进行比较,根据目标置信度高于置信度阈值的检测区域得到前景区域。
可选地,第三获取模块30具体用于:
将前景区域进行HSV颜色空间的转换,确定前景区域所在的颜色空间。
判断前景区域所在的颜色空间是否存在目标颜色,若存在,则基于前景区域中存在的目标颜色,采用最小二乘法在前景区域进行直线拟合,其中,目标颜色为黄色。
根据直线拟合的结果得到前景区域中的黄色禁停线。
可选地,黄色禁停线识别装置还具体用于:
获取训练样本,训练样本包括黄色禁停线的训练图片。
初始化卷积神经网络。
将训练样本输入到初始化后的卷积神经网络中进行训练,得到卷积神经网络模型,卷积神经网络模型用于识别黄色禁停线。
可选地,第四获取模块40还具体用于:
采用卷积神经网络模型提取非前景区域的特征向量。
基于特征向量,在卷积神经网络模型中计算得到黄色禁停线的分类概率。
将黄色禁停线的分类概率大于预设分类阈值的非前景区域确定为黄色禁停线。
在本申请实施例中,首先获取待分类图像,以从与地面标线相关的待分类图像中识别黄色禁停线;接着采用yolo物体检测框架从待分类图像中提取前景区域,得到疑似包括黄色禁停线的候选框,能够快速、精确地确定待分类图像中有较大概率包括黄色禁停线的前景区域;然后基于颜色空间对前景区域中的黄色禁停线进行识别,得到前景区域中的黄色禁停线,能够快速地在待分类图像中确定一部分黄色禁停线;最后,采用卷积神经网络模型对待分类图像中非前景区域的黄色禁停线进行识别,得到非前景区域中的黄色禁停线,能够对初次采用yolo物体检测框架检测不出的黄色禁停线也识别出来。本申请实施例在保证识别准确率的前提下,能够实现黄色禁停线的近实时识别。
本实施例提供一计算机非易失性可读存储介质,该计算机非易失性可读存储介质上存储有计算机可读指令,该计算机可读指令被处理器执行时实现实施例中黄色禁停线识别方 法,为避免重复,此处不一一赘述。或者,该计算机可读指令被处理器执行时实现实施例中黄色禁停线识别装置中各模块/单元的功能,为避免重复,此处不一一赘述。
图3是本申请一实施例提供的计算机设备的示意图。如图3所示,该实施例的计算机设备50包括:处理器51、存储器52以及存储在存储器52中并可在处理器51上运行的计算机可读指令53,该计算机可读指令53被处理器51执行时实现实施例中的黄色禁停线识别方法,为避免重复,此处不一一赘述。或者,该计算机可读指令53被处理器51执行时实现实施例中黄色禁停线识别装置中各模型/单元的功能,为避免重复,此处不一一赘述。
计算机设备50可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。计算机设备50可包括,但不仅限于,处理器51、存储器52。本领域技术人员可以理解,图3仅仅是计算机设备50的示例,并不构成对计算机设备50的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如计算机设备还可以包括输入输出设备、网络接入设备、总线等。
所称处理器51可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
存储器52可以是计算机设备50的内部存储单元,例如计算机设备50的硬盘或内存。存储器52也可以是计算机设备50的外部存储设备,例如计算机设备50上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,存储器52还可以既包括计算机设备50的内部存储单元也包括外部存储设备。存储器52用于存储计算机可读指令以及计算机设备所需的其他程序和数据。存储器52还可以用于暂时地存储已经输出或者将要输出的数据。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。
以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所 记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种黄色禁停线识别方法,其特征在于,所述方法包括:
    获取待分类图像,所述待分类图像与地面标线相关;
    采用yolo物体检测框架从所述待分类图像中提取前景区域,其中,所述yolo物体检测框架的检测结果以候选框的形式表示,检测结果中类别为黄色禁停线的所述候选框为所述前景区域,所述黄色禁停线为所述地面标线中的一种;
    基于颜色空间对所述前景区域中的黄色禁停线进行识别,得到所述前景区域中的黄色禁停线;
    采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别,得到所述非前景区域中的黄色禁停线,其中,所述非前景区域是指所述待分类图像中所述前景区域以外的图像区域。
  2. 根据权利要求1所述的方法,其特征在于,所述采用yolo物体检测框架从所述待分类图像中提取前景区域,包括:
    将所述yolo物体检测框架设置为单一检测所述黄色禁停线的检测模式;
    采用所述检测模式的所述yolo物体检测框架从所述待分类图像中计算检测区域的目标置信度,其中,所述检测区域为所述待分类图像预先分割得到的图像小块,每一个所述图像小块代表一所述检测区域;
    将所述目标置信度与预设的置信度阈值进行比较,根据所述目标置信度高于所述置信度阈值的所述检测区域得到所述前景区域。
  3. 根据权利要求1所述的方法,其特征在于,所述基于颜色空间对所述前景区域中的黄色禁停线进行识别,得到所述前景区域中的黄色禁停线,包括:
    将所述前景区域进行HSV颜色空间的转换,确定所述前景区域所在的颜色空间;
    判断所述前景区域所在的颜色空间是否存在目标颜色,若存在,则基于所述前景区域中存在的所述目标颜色,采用最小二乘法在所述前景区域进行直线拟合,其中,所述目标颜色为黄色;
    根据直线拟合的结果得到所述前景区域中的黄色禁停线。
  4. 根据权利要求1所述的方法,其特征在于,在所述采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别的步骤之前,所述方法还包括:
    获取训练样本,所述训练样本包括黄色禁停线的训练图片;
    初始化卷积神经网络;
    将所述训练样本输入到初始化后的卷积神经网络中进行训练,得到所述卷积神经网络模型,所述卷积神经网络模型用于识别所述黄色禁停线。
  5. 根据权利要求1至4任意一项所述的方法,其特征在于,所述采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别,包括:
    采用所述卷积神经网络模型提取所述非前景区域的特征向量;
    基于所述特征向量,在所述卷积神经网络模型中计算得到黄色禁停线的分类概率;
    将所述黄色禁停线的分类概率大于预设分类阈值的非前景区域确定为所述黄色禁停线。
  6. 一种黄色禁停线识别装置,其特征在于,所述装置包括:
    第一获取模块,用于获取待分类图像,所述待分类图像与地面标线相关;
    第二获取模块,用于采用yolo物体检测框架从所述待分类图像中提取前景区域,其中,所述yolo物体检测框架的检测结果以候选框的形式表示,检测结果中类别为黄色禁停线的所述候选框为所述前景区域,所述黄色禁停线为所述地面标线中的一种;
    第三获取模块,用于基于颜色空间对所述前景区域中的黄色禁停线进行识别,得到所述前景区域中的黄色禁停线;
    第四获取模块,用于采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别,得到所述非前景区域中的黄色禁停线,其中,所述非前景区域是指所述待分类图像中所述前景区域以外的图像区域。
  7. 根据权利要求6所述的装置,其特征在于,所述第二获取模块,具体用于:
    将所述yolo物体检测框架设置为单一检测所述黄色禁停线的检测模式;
    采用所述检测模式的所述yolo物体检测框架从所述待分类图像中计算检测区域的目标置信度,其中,所述检测区域为所述待分类图像预先分割得到的图像小块,每一个所述图像小块代表一所述检测区域;
    将所述目标置信度与预设的置信度阈值进行比较,根据所述目标置信度高于所述置信度阈值的所述检测区域得到所述前景区域。
  8. 根据权利要求6所述的装置,其特征在于,所述第三获取模块,具体用于:
    将所述前景区域进行HSV颜色空间的转换,确定所述前景区域所在的颜色空间;
    判断所述前景区域所在的颜色空间是否存在目标颜色,若存在,则基于所述前景区域中存在的所述目标颜色,采用最小二乘法在所述前景区域进行直线拟合,其中,所述目标颜色为黄色;
    根据直线拟合的结果得到所述前景区域中的黄色禁停线。
  9. 根据权利要求6所述的装置,其特征在于,所述装置还具体用于:
    获取训练样本,所述训练样本包括黄色禁停线的训练图片;
    初始化卷积神经网络;
    将所述训练样本输入到初始化后的卷积神经网络中进行训练,得到所述卷积神经网络模型,所述卷积神经网络模型用于识别所述黄色禁停线。
  10. 根据权利要求6-9任意一项所述的装置,其特征在于,所述第四获取模块具体用于:
    采用所述卷积神经网络模型提取所述非前景区域的特征向量;
    基于所述特征向量,在所述卷积神经网络模型中计算得到黄色禁停线的分类概率;
    将所述黄色禁停线的分类概率大于预设分类阈值的非前景区域确定为所述黄色禁停线。
  11. 一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,其特征在于,所述处理器执行所述计算机可读指令时实现如下步骤:
    获取待分类图像,所述待分类图像与地面标线相关;
    采用yolo物体检测框架从所述待分类图像中提取前景区域,其中,所述yolo物体检测框架的检测结果以候选框的形式表示,检测结果中类别为黄色禁停线的所述候选框为所述前景区域,所述黄色禁停线为所述地面标线中的一种;
    基于颜色空间对所述前景区域中的黄色禁停线进行识别,得到所述前景区域中的黄色禁停线;
    采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别,得到所述非前景区域中的黄色禁停线,其中,所述非前景区域是指所述待分类图像中所述前景区域以外的图像区域。
  12. 根据权利要求11所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令实现采用yolo物体检测框架从所述待分类图像中提取前景区域时,包括如下步骤:
    将所述yolo物体检测框架设置为单一检测所述黄色禁停线的检测模式;
    采用所述检测模式的所述yolo物体检测框架从所述待分类图像中计算检测区域的目标置信度,其中,所述检测区域为所述待分类图像预先分割得到的图像小块,每一个所述图像小块代表一所述检测区域;
    将所述目标置信度与预设的置信度阈值进行比较,根据所述目标置信度高于所述置信度阈值的所述检测区域得到所述前景区域。
  13. 根据权利要求11所述的计算机设备,其特征在于,所述处理器执行所述计算机可读 指令实现基于颜色空间对所述前景区域中的黄色禁停线进行识别,得到所述前景区域中的黄色禁停线时,包括如下步骤:
    将所述前景区域进行HSV颜色空间的转换,确定所述前景区域所在的颜色空间;
    判断所述前景区域所在的颜色空间是否存在目标颜色,若存在,则基于所述前景区域中存在的所述目标颜色,采用最小二乘法在所述前景区域进行直线拟合,其中,所述目标颜色为黄色;
    根据直线拟合的结果得到所述前景区域中的黄色禁停线。
  14. 根据权利要求11所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令还实现如下步骤:
    在所述采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别的步骤之前,获取训练样本,所述训练样本包括黄色禁停线的训练图片;
    初始化卷积神经网络;
    将所述训练样本输入到初始化后的卷积神经网络中进行训练,得到所述卷积神经网络模型,所述卷积神经网络模型用于识别所述黄色禁停线。
  15. 根据权利要求11-14任意一项所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令实现采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别时,包括如下步骤:
    采用所述卷积神经网络模型提取所述非前景区域的特征向量;
    基于所述特征向量,在所述卷积神经网络模型中计算得到黄色禁停线的分类概率;
    将所述黄色禁停线的分类概率大于预设分类阈值的非前景区域确定为所述黄色禁停线。
  16. 一种计算机非易失性可读存储介质,所述计算机非易失性可读存储介质存储有计算机可读指令,其特征在于,所述计算机可读指令被处理器执行时实现如下步骤:获取待分类图像,所述待分类图像与地面标线相关;
    采用yolo物体检测框架从所述待分类图像中提取前景区域,其中,所述yolo物体检测框架的检测结果以候选框的形式表示,检测结果中类别为黄色禁停线的所述候选框为所述前景区域,所述黄色禁停线为所述地面标线中的一种;
    基于颜色空间对所述前景区域中的黄色禁停线进行识别,得到所述前景区域中的黄色禁停线;
    采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别,得到所述非前景区域中的黄色禁停线,其中,所述非前景区域是指所述待分类图像中所述前景区域以外 的图像区域。
  17. 根据权利要求16所述的计算机非易失性可读存储介质,其特征在于,所述计算机可读指令被一个或多个处理器执行实现采用yolo物体检测框架从所述待分类图像中提取前景区域时,包括如下步骤:
    将所述yolo物体检测框架设置为单一检测所述黄色禁停线的检测模式;
    采用所述检测模式的所述yolo物体检测框架从所述待分类图像中计算检测区域的目标置信度,其中,所述检测区域为所述待分类图像预先分割得到的图像小块,每一个所述图像小块代表一所述检测区域;
    将所述目标置信度与预设的置信度阈值进行比较,根据所述目标置信度高于所述置信度阈值的所述检测区域得到所述前景区域。
  18. 根据权利要求16所述的计算机非易失性可读存储介质,其特征在于,所述计算机可读指令被一个或多个处理器执行实现基于颜色空间对所述前景区域中的黄色禁停线进行识别,得到所述前景区域中的黄色禁停线时,包括如下步骤:
    将所述前景区域进行HSV颜色空间的转换,确定所述前景区域所在的颜色空间;
    判断所述前景区域所在的颜色空间是否存在目标颜色,若存在,则基于所述前景区域中存在的所述目标颜色,采用最小二乘法在所述前景区域进行直线拟合,其中,所述目标颜色为黄色;
    根据直线拟合的结果得到所述前景区域中的黄色禁停线。
  19. 根据权利要求16所述的计算机非易失性可读存储介质,其特征在于,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还实现如下步骤:
    在采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别的步骤之前,获取训练样本,所述训练样本包括黄色禁停线的训练图片;
    初始化卷积神经网络;
    将所述训练样本输入到初始化后的卷积神经网络中进行训练,得到所述卷积神经网络模型,所述卷积神经网络模型用于识别所述黄色禁停线。
  20. 根据权利要求16-19任意一项所述的计算机非易失性可读存储介质,其特征在于,所述计算机可读指令被一个或多个处理器执行实现采用卷积神经网络模型对所述待分类图像中非前景区域的黄色禁停线进行识别时,包括如下步骤:
    采用所述卷积神经网络模型提取所述非前景区域的特征向量;
    基于所述特征向量,在所述卷积神经网络模型中计算得到黄色禁停线的分类概率;
    将所述黄色禁停线的分类概率大于预设分类阈值的非前景区域确定为所述黄色禁停线。
PCT/CN2019/115947 2019-01-23 2019-11-06 黄色禁停线识别方法、装置、计算机设备及存储介质 WO2020151299A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910062723.7 2019-01-23
CN201910062723.7A CN109919002B (zh) 2019-01-23 2019-01-23 黄色禁停线识别方法、装置、计算机设备及存储介质

Publications (1)

Publication Number Publication Date
WO2020151299A1 true WO2020151299A1 (zh) 2020-07-30

Family

ID=66960527

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/115947 WO2020151299A1 (zh) 2019-01-23 2019-11-06 黄色禁停线识别方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN109919002B (zh)
WO (1) WO2020151299A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919002B (zh) * 2019-01-23 2024-02-27 平安科技(深圳)有限公司 黄色禁停线识别方法、装置、计算机设备及存储介质
CN111008672B (zh) * 2019-12-23 2022-06-10 腾讯科技(深圳)有限公司 样本提取方法、装置、计算机可读存储介质和计算机设备
CN111325716B (zh) * 2020-01-21 2023-09-01 上海万物新生环保科技集团有限公司 屏幕划痕碎裂检测方法及设备
CN112418061B (zh) * 2020-11-19 2024-01-23 城云科技(中国)有限公司 一种车辆禁停区域确定方法及系统


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116985B (zh) * 2013-01-21 2015-04-01 信帧电子技术(北京)有限公司 一种违章停车检测方法和装置
JP6773540B2 (ja) * 2016-12-07 2020-10-21 クラリオン株式会社 車載用画像処理装置
JP6702849B2 (ja) * 2016-12-22 2020-06-03 株式会社Soken 区画線認識装置
CN107122776A (zh) * 2017-04-14 2017-09-01 重庆邮电大学 一种基于卷积神经网络的交通标志检测与识别方法
CN107358242B (zh) * 2017-07-11 2020-09-01 浙江宇视科技有限公司 目标区域颜色识别方法、装置及监控终端
CN107480730A (zh) * 2017-09-05 2017-12-15 广州供电局有限公司 电力设备识别模型构建方法和系统、电力设备的识别方法
CN107633229A (zh) * 2017-09-21 2018-01-26 北京智芯原动科技有限公司 基于卷积神经网络的人脸检测方法及装置
CN107679508A (zh) * 2017-10-17 2018-02-09 广州汽车集团股份有限公司 交通标志检测识别方法、装置及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180268240A1 (en) * 2017-03-20 2018-09-20 Conduent Business Services, Llc Video redaction method and system
CN107134144A (zh) * 2017-04-27 2017-09-05 武汉理工大学 一种用于交通监控的车辆检测方法
CN108229421A (zh) * 2018-01-24 2018-06-29 华中科技大学 一种基于深度视频信息的坠床行为实时检测方法
CN109241904A (zh) * 2018-08-31 2019-01-18 平安科技(深圳)有限公司 文字识别模型训练、文字识别方法、装置、设备及介质
CN109919002A (zh) * 2019-01-23 2019-06-21 平安科技(深圳)有限公司 黄色禁停线识别方法、装置、计算机设备及存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348905A (zh) * 2020-10-30 2021-02-09 深圳市优必选科技股份有限公司 一种颜色识别方法、装置、终端设备及存储介质
CN112348905B (zh) * 2020-10-30 2023-12-19 深圳市优必选科技股份有限公司 一种颜色识别方法、装置、终端设备及存储介质
CN112613344A (zh) * 2020-12-01 2021-04-06 浙江大华汽车技术有限公司 车辆占道检测方法、装置、计算机设备和可读存储介质
CN112613344B (zh) * 2020-12-01 2024-04-16 浙江华锐捷技术有限公司 车辆占道检测方法、装置、计算机设备和可读存储介质

Also Published As

Publication number Publication date
CN109919002B (zh) 2024-02-27
CN109919002A (zh) 2019-06-21

Similar Documents

Publication Publication Date Title
WO2020151299A1 (zh) 黄色禁停线识别方法、装置、计算机设备及存储介质
WO2021164228A1 (zh) 一种图像数据的增广策略选取方法及系统
TWI744283B (zh) 一種單詞的分割方法和裝置
WO2020155518A1 (zh) 物体检测方法、装置、计算机设备及存储介质
CN108229509B (zh) 用于识别物体类别的方法及装置、电子设备
WO2018103608A1 (zh) 一种文字检测方法、装置及存储介质
CN110163076B (zh) 一种图像数据处理方法和相关装置
CN109165589B (zh) 基于深度学习的车辆重识别方法和装置
WO2019232866A1 (zh) 人眼模型训练方法、人眼识别方法、装置、设备及介质
TW202208832A (zh) 缺陷檢測方法、電子設備以及電腦可讀儲存介質
CN108121991B (zh) 一种基于边缘候选区域提取的深度学习舰船目标检测方法
WO2018233038A1 (zh) 基于深度学习的车牌识别方法、装置、设备及存储介质
WO2021136528A1 (zh) 一种实例分割的方法及装置
WO2022021029A1 (zh) 检测模型训练方法、装置、检测模型使用方法及存储介质
WO2020147410A1 (zh) 行人检测方法、系统、计算机设备及计算机可存储介质
CN109522807B (zh) 基于自生成特征的卫星影像识别系统、方法及电子设备
WO2020151148A1 (zh) 基于神经网络的黑白照片色彩恢复方法、装置及存储介质
CN112651953B (zh) 图片相似度计算方法、装置、计算机设备及存储介质
US11893773B2 (en) Finger vein comparison method, computer equipment, and storage medium
WO2023185118A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2020010620A1 (zh) 波浪识别方法、装置、计算机可读存储介质和无人飞行器
CN110807404A (zh) 基于深度学习的表格线检测方法、装置、终端、存储介质
CN114359932B (zh) 文本检测方法、文本识别方法及装置
CN111177811A (zh) 一种应用于云平台的消防点位自动布图的方法
Agunbiade et al. Enhancement performance of road recognition system of autonomous robots in shadow scenario

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19911153; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19911153; Country of ref document: EP; Kind code of ref document: A1)