WO2021142944A1 - Vehicle behavior recognition method and device - Google Patents

Vehicle behavior recognition method and device

Info

Publication number
WO2021142944A1
WO2021142944A1 · PCT/CN2020/082253 · CN2020082253W
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
monitoring
algorithm
driving situation
Prior art date
Application number
PCT/CN2020/082253
Other languages
English (en)
French (fr)
Inventor
张萌
董晓飞
梅舒欢
曹峰
余犀
Original Assignee
南京新一代人工智能研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京新一代人工智能研究院有限公司 filed Critical 南京新一代人工智能研究院有限公司
Publication of WO2021142944A1 publication Critical patent/WO2021142944A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the invention relates to the field of vehicle monitoring and management, and in particular to a method and device for vehicle behavior recognition.
  • in the prior art, traffic supervision, i.e. determining vehicle driving conditions, determining vehicle behavior categories and determining whether regulations are violated, relies mainly on real-time supervision by traffic police or on monitoring by surveillance cameras.
  • the problem is that relying on traffic police to determine vehicle behavior categories and then carry out supervision consumes a large amount of human resources; it is also difficult for traffic police to provide all-weather supervision, while surveillance cameras can usually only make simple logical judgments, for example whether a vehicle is speeding or running a red light.
  • the present invention aims to provide a method and device for vehicle behavior recognition.
  • a vehicle behavior recognition method, which includes: acquiring a monitoring image of the road on which a vehicle is driving; recognizing the monitoring image with an image recognition model constructed with a convolutional neural network to obtain the image features of the monitoring image; obtaining the driving situation of the vehicle according to the image features of the monitoring image; and determining the vehicle behavior category from the driving situation obtained from the monitoring image.
  • monitoring images of the road on which the vehicle is driving, including photos and/or videos, are acquired in real time.
  • training images labeled with vehicle driving situations are used as a training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to the lane lines in the surveillance image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the surveillance image.
  • training videos labeled with vehicle driving situations are used as a training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer, and the KCF algorithm; the SCNN algorithm is used to extract the image features of the video frames, the depth of the convolution kernels applied to the lane lines in the surveillance image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the surveillance image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
  • the convolutional neural network in the image recognition model determines at least one of the following traffic relationships based on the image features of the monitoring image: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between the vehicle and other vehicles, and the speed of the vehicle; the vehicle driving situation is then determined from these traffic relationships and the preset vehicle traffic regulations.
  • according to the preset correspondence between vehicle behavior categories and vehicle driving situations, the vehicle behavior category to which the driving situation obtained from the monitoring image belongs is determined.
  • An embodiment of the present invention provides a vehicle behavior recognition device, which includes a monitoring unit, a first recognition unit, a second recognition unit and a behavior determination unit, wherein: the monitoring unit is used to acquire a monitoring image of the road on which a vehicle is driving; the first recognition unit is used to recognize the monitoring image with the image recognition model constructed with a convolutional neural network to obtain the image features of the monitoring image; the second recognition unit is used to obtain the driving situation of the vehicle according to the image features of the monitoring image; and the behavior determination unit determines the vehicle behavior category from the driving situation obtained from the monitoring image.
  • the monitoring unit is also used to acquire, in real time, monitoring images of the vehicle driving road, including photos and/or videos.
  • the first recognition unit is further configured to use training images labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to lane lines in the surveillance image is greater than the depth of the convolution kernels applied to traffic lights and traffic signs in the surveillance image.
  • the first recognition unit is also used to take training videos labeled with vehicle driving situations as a training set and train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer, and the KCF algorithm; the SCNN algorithm is used to extract the image features of the video frames, the depth of the convolution kernels applied to lane lines in the surveillance image is greater than the depth of the convolution kernels applied to traffic lights and traffic signs in the surveillance image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
  • compared with the prior art, the present invention has the following significant advantages: the image recognition model constructed with a neural network recognizes the acquired monitoring images, determines the driving situation and behavior category of the vehicle, and determines whether the vehicle violates regulations and the type of violation, enabling all-weather, all-round, efficient and accurate automatic traffic supervision.
  • FIG. 1 is a schematic flowchart of a vehicle behavior recognition method provided in an embodiment of the present invention
  • Figures 2 and 3 are the vehicle behavior category tables provided in the embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of the vehicle behavior recognition method provided in an embodiment of the present invention; the method is described in detail below with reference to its specific steps.
  • Step S101 Obtain a monitoring image of the vehicle driving road.
  • monitoring images of the vehicle driving road, including photos and/or videos, are acquired in real time.
  • the surveillance video may be video or images captured from outside the vehicle, showing the state of the vehicle while it is moving and reflecting the positional relationship between the vehicle and its surroundings during movement.
  • step S102 the monitoring image is recognized by using the image recognition model constructed by the convolutional neural network to obtain the image characteristics of the monitoring image.
  • training images labeled with vehicle driving situations are used as a training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to the lane lines in the surveillance image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the surveillance image.
  • training videos labeled with vehicle driving situations are used as a training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer, and the KCF algorithm; the SCNN algorithm is used to extract the image features of the video frames, the depth of the convolution kernels applied to the lane lines in the surveillance image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the surveillance image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
  • the tags in the training set can be marked on objects that need to be identified, such as vehicles, traffic lights, traffic signs, and lane lines.
  • the image recognition algorithm based on a convolutional neural network can carry out deep learning and training, can locate dynamic targets and other targets that require image feature extraction more accurately, and can perform accurate image feature extraction, thereby ensuring a high accuracy of the recognition result.
  • the SCNN (Sequential Convolutional Neural Network) algorithm is an excellent image detection and recognition algorithm; unlike the prior art, in which SCNN is applied to autonomous driving, that is, SCNN is used to analyze the surrounding images obtained from the driving perspective of the vehicle itself, the embodiment of the present invention applies SCNN to monitoring images to obtain image features, so that the behavior category of the vehicle can be determined more completely.
  • for relatively obvious objects such as traffic lights and traffic signs, convolution kernels of lower depth can be used for feature extraction, while for less obvious objects and markings such as lane lines, convolution kernels of higher depth can be used; this improves the efficiency of behavior recognition.
  • the SCNN algorithm can identify lane lines, traffic lights, traffic signs, and moving vehicles after training. After the image features are recognized, the output image features can be used as the input of the KCF algorithm, and the KCF can be used to track the vehicle. Keeping track of the same target vehicle can improve the accuracy of the recognition result.
  • the KCF (Kernel Correlation Filter) algorithm is a tracking algorithm that can be applied to high-precision tracking of targets; it avoids losing a tracked dynamic target when no dynamic target appears in certain video frames, and improves the accuracy of the recognition result.
  • the KCF tracking algorithm is used to track the image features of the dynamic target extracted from the video frame, so as to avoid the loss of the dynamic target and ensure the accuracy of the recognition result.
  • the video frames used for recognition can be determined by the user according to the actual application. When all the video frames included in the video are used for recognition, it can be ensured that each behavior of the target vehicle is analyzed and classified. When the extracted video frames are used for recognition, the recognition efficiency can be improved.
  • the information transfer of the SCNN algorithm uses a formula in which X'_{i,j,k} denotes the input three-dimensional tensor, X_{i,j,k} denotes the output three-dimensional tensor, i represents the number of channels, j the number of rows and k the number of columns, m represents the accumulation over channels and n the accumulation over the height, X'_{m,j-1,k+n-1} represents the tensor updated in the previous step, K_{m,i,n} represents its corresponding weight, and f() is the ReLU function.
  • the residual information transmission method is adopted, which is easier to train and learn, and the information transmission effect is better.
  • Step S103 Obtain the driving situation of the vehicle according to the image characteristics of the monitored image.
  • when the monitoring image is a picture, the static driving situation of the vehicle in the image can be obtained through the image features; when the monitoring image is a video, the continuous driving situation of the vehicle can be obtained through the successive video frames.
  • the convolutional neural network in the image recognition model determines at least one of the following traffic relationships based on the image characteristics of the monitored image: the positional relationship between the vehicle and the lane line, the relationship between the vehicle and the traffic light, The relationship between the vehicle and the traffic sign, the position relationship between the vehicle and the vehicle, and the speed of the vehicle;
  • lane lines, traffic lights, and traffic signs correspond to different traffic regulations.
  • when the targeted surveillance image is a photo, the traffic regulations that the vehicle currently needs to follow are determined from the lane lines, traffic lights and traffic signs in the photo; when the targeted surveillance image is a video, the traffic regulations that the vehicle needs to follow during the corresponding period of time or stretch of road are determined from the lane lines, traffic lights and traffic signs in the video.
  • the preset vehicle traffic regulations include generally applicable rules such as the positional relationship between the target vehicle and other vehicles, for example that the distance between vehicles must be greater than a certain value and that contact with another vehicle constitutes a rear-end or scraping collision.
  • step S104 the vehicle behavior category is determined based on the vehicle driving situation obtained from the monitoring image.
  • the vehicle behavior category to which the vehicle driving situation obtained from the monitoring image belongs is determined.
  • the behavior category of the vehicle can be determined correspondingly.
  • the behavior category includes the behavior label, whether it violates the regulations and the level of violation.
  • Vehicle behavior category can be used for vehicle traffic management. It can realize all-weather, all-round, efficient and accurate traffic supervision.
  • the advantage of using the SCNN algorithm and/or the KCF algorithm is that the algorithms can be applied flexibly, avoiding situations of poor recognition performance and target loss; the driving situation of the vehicle is determined from the traffic relationships and the traffic behavior regulations, and the vehicle behavior category is then determined, so that target vehicles and dynamic target vehicles can be monitored in real time completely and efficiently while the accuracy of the recognition result is ensured.
  • the embodiment of the present invention also provides a vehicle behavior recognition device, which includes a monitoring unit, a first recognition unit, a second recognition unit, and a behavior determination unit, wherein:
  • the monitoring unit is used to obtain a monitoring image of a vehicle driving road
  • the first recognition unit is configured to recognize the monitoring image by using an image recognition model constructed by a convolutional neural network to obtain image characteristics of the monitoring image;
  • the second recognition unit is used to obtain the driving situation of the vehicle according to the image characteristics of the monitoring image
  • the behavior determination unit determines the type of vehicle behavior based on the driving situation of the vehicle obtained from the monitoring image.
  • the monitoring unit is also used to acquire, in real time, monitoring images of the vehicle driving road, including photos and/or videos.
  • the first recognition unit is also used to take training images labeled with vehicle driving situations as a training set and train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to lane lines in the surveillance image is greater than the depth of the convolution kernels applied to traffic lights and traffic signs in the surveillance image.
  • the first recognition unit is also used to take training videos labeled with vehicle driving situations as a training set and train a convolutional neural network based on the SCNN algorithm and the KCF algorithm; the SCNN algorithm is used to extract the image features of the video frames, the depth of the convolution kernels applied to lane lines in the surveillance image is greater than the depth of the convolution kernels applied to traffic lights and traffic signs in the surveillance image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
  • the second recognition unit is also used so that the convolutional neural network in the image recognition model determines at least one of the following traffic relationships based on the image features of the monitoring image: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between the vehicle and other vehicles, and the speed of the vehicle; the vehicle driving situation is determined from the traffic relationships and the preset vehicle traffic regulations.
  • the behavior determination unit is further configured to determine the vehicle behavior category to which the vehicle behavior obtained from the monitoring image belongs based on the preset correspondence between the vehicle behavior category and the vehicle driving situation.

Abstract

The present invention discloses a vehicle behavior recognition method and device. The method includes: acquiring a monitoring image of the road on which a vehicle is driving; recognizing the monitoring image with an image recognition model constructed with a convolutional neural network to obtain the image features of the monitoring image; obtaining the driving situation of the vehicle according to the image features of the monitoring image; and determining the vehicle behavior category from the driving situation obtained from the monitoring image. With this scheme, the image recognition model constructed with a neural network recognizes the acquired monitoring images, judges the driving situation and behavior category of the vehicle, and determines whether the vehicle violates regulations and the type of violation, so that all-weather, all-round, efficient and accurate automatic traffic supervision can be realized.

Description

Vehicle behavior recognition method and device
Technical Field
The present invention relates to the field of vehicle monitoring and management, and in particular to a vehicle behavior recognition method and device.
Background Art
With the rapid growth in the number of motor vehicles, the demand for traffic supervision is also increasing.
In the prior art, traffic supervision, determining the driving conditions of vehicles, determining the behavior categories of vehicles and determining whether regulations are violated rely mainly on real-time supervision by traffic police or on monitoring by surveillance cameras. The problem is that relying on traffic police to determine vehicle behavior categories and then carry out supervision consumes enormous human resources; at the same time, it is difficult for traffic police to provide all-weather, all-round supervision, while surveillance cameras can usually only make simple logical judgments, such as whether a vehicle is speeding or running a red light.
Summary of the Invention
Purpose of the invention: the present invention aims to provide a vehicle behavior recognition method and device.
Technical solution: an embodiment of the present invention provides a vehicle behavior recognition method, including: acquiring a monitoring image of the road on which a vehicle is driving; recognizing the monitoring image with an image recognition model constructed with a convolutional neural network to obtain the image features of the monitoring image; obtaining the driving situation of the vehicle according to the image features of the monitoring image; and determining the vehicle behavior category from the driving situation obtained from the monitoring image.
Specifically, monitoring images of the vehicle driving road, including photos and/or videos, are acquired in real time.
Specifically, training images labeled with vehicle driving situations are used as the training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to the lane lines in the monitoring image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the monitoring image.
Specifically, training videos labeled with vehicle driving situations are used as the training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer, and the KCF algorithm; the SCNN algorithm is used to extract the image features of the video frames, the depth of the convolution kernels applied to the lane lines in the monitoring image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the monitoring image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
Specifically, the convolutional neural network in the image recognition model determines at least one of the following traffic relationships according to the image features of the monitoring image: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between the vehicle and other vehicles, and the driving speed of the vehicle; the vehicle driving situation is determined from the traffic relationships and the preset vehicle traffic regulations.
Specifically, according to the preset correspondence between vehicle behavior categories and vehicle driving situations, the vehicle behavior category to which the driving situation obtained from the monitoring image belongs is determined.
An embodiment of the present invention provides a vehicle behavior recognition device, including a monitoring unit, a first recognition unit, a second recognition unit and a behavior determination unit, wherein: the monitoring unit is used to acquire a monitoring image of the road on which a vehicle is driving; the first recognition unit is used to recognize the monitoring image with an image recognition model constructed with a convolutional neural network to obtain the image features of the monitoring image; the second recognition unit is used to obtain the driving situation of the vehicle according to the image features of the monitoring image; and the behavior determination unit determines the vehicle behavior category from the driving situation obtained from the monitoring image.
Specifically, the monitoring unit is also used to acquire, in real time, monitoring images of the vehicle driving road, including photos and/or videos.
Specifically, the first recognition unit is also used to take training images labeled with vehicle driving situations as the training set and train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to the lane lines in the monitoring image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the monitoring image.
Specifically, the first recognition unit is also used to take training videos labeled with vehicle driving situations as the training set and train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer, and the KCF algorithm; the SCNN algorithm is used to extract the image features of the video frames, the depth of the convolution kernels applied to the lane lines in the monitoring image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the monitoring image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
Beneficial effects: compared with the prior art, the present invention has the following significant advantages: the image recognition model constructed with a neural network recognizes the acquired monitoring images, judges the driving situation and behavior category of the vehicle, and determines whether the vehicle violates regulations and the type of violation, so that all-weather, all-round, efficient and accurate automatic traffic supervision can be realized.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the vehicle behavior recognition method provided in an embodiment of the present invention;
FIG. 2 and FIG. 3 are vehicle behavior category tables provided in an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solution of the present invention is further described below with reference to the drawings.
Referring to FIG. 1, which is a schematic flowchart of the vehicle behavior recognition method provided in an embodiment of the present invention, the method includes specific steps that are described in detail below.
Step S101: acquire a monitoring image of the road on which the vehicle is driving.
In an embodiment of the present invention, monitoring images of the vehicle driving road, including photos and/or videos, are acquired in real time.
In a specific implementation, the surveillance video may be video or images captured from outside the vehicle that show the state of the vehicle while it is moving and that reflect the positional relationship between the vehicle and its surroundings during movement.
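For illustration only, the following sketch shows one way such real-time acquisition could be done with OpenCV; the stream URL and the generator-style interface are assumptions made for the example and are not specified by the patent.

```python
import cv2

# Hypothetical road-facing camera stream; the URL is an illustrative
# assumption, not a value specified in the patent.
STREAM_URL = "rtsp://camera.example/road_section_01"

def acquire_monitoring_images(stream_url=STREAM_URL):
    """Yield monitoring frames (BGR arrays) from a roadside camera in real time."""
    cap = cv2.VideoCapture(stream_url)
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break          # stream ended or temporarily unavailable
            yield frame        # a single frame also serves as a "photo"
    finally:
        cap.release()
```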
Step S102: recognize the monitoring image with the image recognition model constructed with a convolutional neural network to obtain the image features of the monitoring image.
In an embodiment of the present invention, training images labeled with vehicle driving situations are used as the training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to the lane lines in the monitoring image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the monitoring image.
In an embodiment of the present invention, training videos labeled with vehicle driving situations are used as the training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer, and the KCF algorithm; the SCNN algorithm is used to extract the image features of the video frames, the depth of the convolution kernels applied to the lane lines in the monitoring image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the monitoring image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
In a specific implementation, the labels in the training set can be marked on the objects that need to be recognized, such as vehicles, traffic lights, traffic signs and lane lines.
In a specific implementation, an image recognition algorithm based on a convolutional neural network can carry out deep learning and training, can locate dynamic targets and other targets requiring image feature extraction more accurately, and can perform accurate image feature extraction, thereby ensuring a high accuracy of the recognition result.
In a specific implementation, the SCNN (Sequential Convolutional Neural Network) algorithm is an excellent image detection and recognition algorithm. Unlike the prior art, in which SCNN is applied to autonomous driving, that is, SCNN is used to analyze the surrounding images obtained from the driving perspective of the vehicle itself, the embodiment of the present invention applies SCNN to monitoring images to obtain image features, so that the behavior category of the vehicle can be determined more completely.
In a specific implementation, for relatively obvious objects such as traffic lights and traffic signs, convolution kernels of lower depth can be used for feature extraction, while for less obvious objects and markings such as lane lines, convolution kernels of higher depth can be used; this improves the efficiency of behavior recognition.
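For illustration only, the sketch below expresses this idea as a toy PyTorch feature extractor with a deeper convolutional branch for lane lines and a shallower branch for traffic lights and signs; "kernel depth" is interpreted loosely here as the depth of the convolutional stack, and all layer counts, channel sizes and class names are assumptions rather than a configuration given in the patent.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class DepthSplitExtractor(nn.Module):
    """Toy extractor: deeper stack for lane lines, shallower stack for lights/signs."""
    def __init__(self):
        super().__init__()
        self.stem = conv_block(3, 32)
        # Deeper branch for the less salient lane lines (four extra conv layers).
        self.lane_branch = nn.Sequential(*[conv_block(32, 32) for _ in range(4)],
                                         nn.Conv2d(32, 1, 1))
        # Shallower branch for the more salient traffic lights / signs (one extra layer).
        self.light_sign_branch = nn.Sequential(conv_block(32, 32),
                                               nn.Conv2d(32, 2, 1))

    def forward(self, x):
        feats = self.stem(x)
        lane_map = self.lane_branch(feats)              # per-pixel lane-line score
        light_sign_map = self.light_sign_branch(feats)  # traffic light / sign scores
        return lane_map, light_sign_map

# Example: one 3-channel monitoring image of size 288x800.
lane_map, light_sign_map = DepthSplitExtractor()(torch.randn(1, 3, 288, 800))
```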
In a specific implementation, when the monitoring image is a video, the SCNN algorithm can, after training, recognize lane lines, traffic lights, traffic signs and moving vehicles in practical applications. After the image features are recognized, the output image features can be used as the input of the KCF algorithm, and KCF is used to track the vehicle; keeping track of the same target vehicle improves the accuracy of the recognition result.
In a specific implementation, the KCF (Kernel Correlation Filter) algorithm is a tracking algorithm that can be applied to high-precision tracking of targets; it avoids losing a tracked dynamic target when no dynamic target appears in some video frames, and improves the accuracy of the recognition result.
In a specific implementation, the KCF tracking algorithm is used to track the image features of dynamic targets extracted from the video frames, avoiding the loss of dynamic targets and ensuring the accuracy of the recognition result.
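For illustration only, a minimal tracking sketch using the OpenCV KCF tracker is given below; it assumes that a bounding box for the target vehicle has already been produced by the upstream feature extraction (the box is passed in by the caller), and the exact constructor name for the KCF tracker varies between OpenCV builds, hence the fallback.

```python
import cv2

def track_vehicle(frames, first_box):
    """Track one vehicle across video frames with KCF.

    frames:    iterable of BGR images (video frames)
    first_box: (x, y, w, h) of the target vehicle in the first frame,
               assumed to come from the upstream detection/feature extraction.
    """
    frames = iter(frames)
    first = next(frames)
    # In recent OpenCV builds the KCF tracker lives under cv2.legacy.
    tracker = (cv2.TrackerKCF_create() if hasattr(cv2, "TrackerKCF_create")
               else cv2.legacy.TrackerKCF_create())
    tracker.init(first, first_box)
    boxes = [first_box]
    for frame in frames:
        ok, box = tracker.update(frame)
        if not ok:
            break                      # target lost; caller may re-detect and re-init
        boxes.append(tuple(int(v) for v in box))
    return boxes
```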
In a specific implementation, the video frames used for recognition can be determined by the user according to the actual application: when all the video frames included in the video are used for recognition, it can be ensured that every behavior of the target vehicle is analyzed and classified; when only extracted video frames are used for recognition, the recognition efficiency can be improved.
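This frame-selection choice reduces to a small helper such as the one below; the stride parameter and its default value are illustrative assumptions.

```python
def select_frames(frames, stride=1):
    """stride=1 analyses every frame so that every behaviour of the target
    vehicle is covered; stride>1 analyses every stride-th frame for speed."""
    return [frame for i, frame in enumerate(frames) if i % stride == 0]
```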
In a specific implementation, the information transfer of the SCNN algorithm uses the following formula:
Figure PCTCN2020082253-appb-000001
where X'_{i,j,k} denotes the input three-dimensional tensor, X_{i,j,k} denotes the output three-dimensional tensor, i represents the number of channels, j the number of rows and k the number of columns, m represents the accumulation over channels and n the accumulation over the height, X'_{m,j-1,k+n-1} represents the tensor updated in the previous step, K_{m,i,n} represents its corresponding weight, and f() is the ReLU function.
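The formula itself is only referenced as a figure. A plausible reconstruction, written in LaTeX and assuming the standard SCNN slice-wise message passing with a residual term, is given below; it is an assumption based on the variable definitions above (in which the labelling of X and X' as input and output may be stated in the opposite order) and is not a reproduction of the original figure.

```latex
X'_{i,j,k} =
\begin{cases}
X_{i,j,k}, & j = 1,\\[4pt]
X_{i,j,k} + f\!\left(\displaystyle\sum_{m}\sum_{n} X'_{m,\,j-1,\,k+n-1}\, K_{m,i,n}\right), & j > 1.
\end{cases}
```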
In a specific implementation, the residual-style information transfer is easier to train and learn, and the information transfer effect is better.
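For illustration only, a small NumPy sketch of this residual, row-by-row information transfer is given below; it follows the reconstructed formula above, so the exact indexing is likewise an assumption, while the overall structure (per-row propagation, cross-channel weighting over a local window, ReLU, residual addition) is taken from the description.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def scnn_downward_pass(X, K):
    """Residual slice-wise message passing over rows.

    X: feature tensor of shape (C, H, W)  (channels, rows, columns)
    K: weight tensor of shape (C, C, N)   (source channel, target channel, window width)
    Returns the updated tensor of the same shape.
    """
    C, H, W = X.shape
    N = K.shape[2]
    pad = N // 2
    Xp = X.copy()                       # the first row is passed through unchanged
    for j in range(1, H):               # propagate information downward, row by row
        prev = np.pad(Xp[:, j - 1, :], ((0, 0), (pad, pad)))
        for i in range(C):
            # sum over channels m and window offsets n of X'[m, j-1, k+n] * K[m, i, n]
            msg = sum(np.convolve(prev[m], K[m, i, ::-1], mode="valid")
                      for m in range(C))
            Xp[i, j, :] = X[i, j, :] + relu(msg[:W])   # residual addition
    return Xp
```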
Step S103: obtain the driving situation of the vehicle according to the image features of the monitoring image.
In a specific implementation, after the training of the image recognition model constructed with the convolutional neural network is completed, in actual application, when the monitoring image is a picture, the static driving situation of the vehicle in the picture can be obtained through the image features; when the monitoring image is a video, the continuous driving situation of the vehicle can be obtained through the successive video frames.
In an embodiment of the present invention, the convolutional neural network in the image recognition model determines at least one of the following traffic relationships according to the image features of the monitoring image: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between the vehicle and other vehicles, and the driving speed of the vehicle;
the vehicle driving situation is determined from the traffic relationships and the preset vehicle traffic regulations.
In a specific implementation, lane lines, traffic lights and traffic signs each correspond to different traffic regulations. When the targeted monitoring image is a photo, the traffic regulations that the vehicle currently needs to follow are determined from the lane lines, traffic lights and traffic signs in the photo; when the targeted monitoring image is a video, the traffic regulations that the vehicle needs to follow during the corresponding period of time or stretch of road are determined from the lane lines, traffic lights and traffic signs in the video. The preset vehicle traffic regulations also include generally applicable rules such as the positional relationship between the target vehicle and other vehicles, for example that the distance between vehicles must be greater than a certain value and that contact with another vehicle constitutes a rear-end or scraping collision.
In a specific implementation, from the traffic regulations determined by the lane lines, traffic lights, traffic signs and so on, it can be determined whether the target vehicle complies with the traffic regulations during driving, whether it violates them, and which regulation is violated.
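For illustration only, the sketch below combines detected traffic relationships with preset regulations to produce a driving situation; the field names, thresholds and the specific rules checked are invented examples, not values prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class TrafficRelations:
    crossed_solid_lane_line: bool   # vehicle vs. lane line
    light_state: str                # "red", "yellow" or "green" at the stop line
    entered_intersection: bool
    gap_to_front_vehicle_m: float   # vehicle vs. vehicle
    speed_kmh: float

# Hypothetical preset regulations (example values only).
MIN_GAP_M = 10.0
SPEED_LIMIT_KMH = 120.0

def driving_situation(r: TrafficRelations):
    """Return the list of violated rules; an empty list means compliant driving."""
    violations = []
    if r.crossed_solid_lane_line:
        violations.append("crossed solid lane line")
    if r.light_state == "red" and r.entered_intersection:
        violations.append("ran red light")
    if r.gap_to_front_vehicle_m < MIN_GAP_M:
        violations.append("following distance too small")
    if r.speed_kmh > SPEED_LIMIT_KMH:
        violations.append("speeding")
    return violations
```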
Step S104: determine the vehicle behavior category from the driving situation obtained from the monitoring image.
In an embodiment of the present invention, according to the preset correspondence between vehicle behavior categories and vehicle driving situations, the vehicle behavior category to which the driving situation obtained from the monitoring image belongs is determined.
Refer to FIG. 2 and FIG. 3, which are the vehicle behavior category tables provided in an embodiment of the present invention.
In a specific implementation, once it has been determined whether the vehicle complies with or violates the traffic regulations during driving and which regulation is violated, the behavior category of the vehicle can be determined correspondingly. The behavior category includes the behavior label, whether it is a violation, and the violation level; the vehicle behavior category can be used for vehicle traffic management, enabling all-weather, all-round, efficient and accurate traffic supervision.
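For illustration only, the correspondence between driving situations and behavior categories (behavior label, whether it is a violation, and the violation level, as tabulated in FIG. 2 and FIG. 3) can be held in a lookup table like the one below; the entries are invented examples, since the actual tables are only available as figures.

```python
# Hypothetical category table: driving situation -> (behavior label, violation?, level)
BEHAVIOR_CATEGORIES = {
    "compliant":                    ("normal driving",      False, None),
    "ran red light":                ("red-light running",   True,  "severe"),
    "crossed solid lane line":      ("illegal lane change", True,  "moderate"),
    "speeding":                     ("speeding",            True,  "severe"),
    "following distance too small": ("tailgating",          True,  "minor"),
}

def behavior_category(violations):
    """Map the driving situation (list of violated rules) to behavior categories."""
    if not violations:
        return [BEHAVIOR_CATEGORIES["compliant"]]
    return [BEHAVIOR_CATEGORIES[v] for v in violations if v in BEHAVIOR_CATEGORIES]
```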
In a specific implementation, the advantage of adopting the SCNN algorithm and/or the KCF algorithm is that the algorithms can be applied flexibly, avoiding situations of poor recognition performance and target loss; the driving situation of the vehicle is determined from the traffic relationships and the traffic behavior regulations, and the vehicle behavior category is then determined, so that target vehicles and dynamic target vehicles can be monitored in real time completely and efficiently, and the accuracy of the recognition result is ensured.
An embodiment of the present invention further provides a vehicle behavior recognition device, including a monitoring unit, a first recognition unit, a second recognition unit and a behavior determination unit, wherein:
the monitoring unit is used to acquire a monitoring image of the road on which a vehicle is driving;
the first recognition unit is used to recognize the monitoring image with an image recognition model constructed with a convolutional neural network to obtain the image features of the monitoring image;
the second recognition unit is used to obtain the driving situation of the vehicle according to the image features of the monitoring image;
the behavior determination unit determines the vehicle behavior category from the driving situation obtained from the monitoring image.
In an embodiment of the present invention, the monitoring unit is also used to acquire, in real time, monitoring images of the vehicle driving road, including photos and/or videos.
In an embodiment of the present invention, the first recognition unit is also used to take training images labeled with vehicle driving situations as the training set and train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to the lane lines in the monitoring image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the monitoring image.
In an embodiment of the present invention, the first recognition unit is also used to take training videos labeled with vehicle driving situations as the training set and train a convolutional neural network based on the SCNN algorithm and the KCF algorithm; the SCNN algorithm is used to extract the image features of the video frames, the depth of the convolution kernels applied to the lane lines in the monitoring image is greater than the depth of the convolution kernels applied to the traffic lights and traffic signs in the monitoring image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
In an embodiment of the present invention, the second recognition unit is also used so that the convolutional neural network in the image recognition model determines at least one of the following traffic relationships according to the image features of the monitoring image: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between the vehicle and other vehicles, and the driving speed of the vehicle; the vehicle driving situation is determined from the traffic relationships and the preset vehicle traffic regulations.
In an embodiment of the present invention, the behavior determination unit is also used to determine, according to the preset correspondence between vehicle behavior categories and vehicle driving situations, the vehicle behavior category to which the driving situation obtained from the monitoring image belongs.
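For illustration only, the sketch below shows how the four units of the device could be composed into a single pipeline; the class and method names are hypothetical, and the individual stages correspond to the illustrative helpers sketched earlier in this description.

```python
class VehicleBehaviorRecognizer:
    """Toy composition of the four units described above."""

    def __init__(self, monitoring_unit, first_recognition_unit,
                 second_recognition_unit, behavior_determination_unit):
        self.monitor = monitoring_unit                  # acquires monitoring images
        self.extract = first_recognition_unit           # CNN image-feature extraction
        self.situate = second_recognition_unit          # image features -> driving situation
        self.categorize = behavior_determination_unit   # driving situation -> behavior category

    def run(self):
        results = []
        for image in self.monitor():                    # e.g. acquire_monitoring_images()
            features = self.extract(image)              # e.g. DepthSplitExtractor / SCNN
            situation = self.situate(features)          # e.g. driving_situation(...)
            results.append(self.categorize(situation))  # e.g. behavior_category(...)
        return results
```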

Claims (10)

  1. A vehicle behavior recognition method, characterized by comprising:
    acquiring a monitoring image of a road on which a vehicle is driving;
    recognizing the monitoring image with an image recognition model constructed with a convolutional neural network to obtain image features of the monitoring image;
    obtaining a driving situation of the vehicle according to the image features of the monitoring image;
    determining a vehicle behavior category from the driving situation obtained from the monitoring image.
  2. The vehicle behavior recognition method according to claim 1, characterized in that acquiring the monitoring image of the road on which the vehicle is driving comprises:
    acquiring, in real time, monitoring images of the vehicle driving road, including photos and/or videos.
  3. The vehicle behavior recognition method according to claim 2, characterized in that, before using the image recognition model constructed with the convolutional neural network, the method comprises:
    using training images labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; wherein the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to lane lines in the monitoring image is greater than the depth of the convolution kernels applied to traffic lights and traffic signs in the monitoring image.
  4. The vehicle behavior recognition method according to claim 2, characterized in that, before using the image recognition model constructed with the convolutional neural network, the method comprises:
    using training videos labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer, and the KCF algorithm; wherein the SCNN algorithm is used to extract image features of video frames, the depth of the convolution kernels applied to lane lines in the monitoring image is greater than the depth of the convolution kernels applied to traffic lights and traffic signs in the monitoring image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
  5. The vehicle behavior recognition method according to claim 3 or 4, characterized in that obtaining the driving situation of the vehicle according to the image features of the monitoring image comprises:
    the convolutional neural network in the image recognition model determining, according to the image features of the monitoring image, at least one of the following traffic relationships: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between the vehicle and other vehicles, and the driving speed of the vehicle;
    determining the vehicle driving situation from the traffic relationships and preset vehicle traffic regulations.
  6. The vehicle behavior recognition method according to claim 5, characterized in that determining the vehicle behavior category from the driving situation obtained from the monitoring image comprises:
    determining, according to a preset correspondence between vehicle behavior categories and vehicle driving situations, the vehicle behavior category to which the driving situation obtained from the monitoring image belongs.
  7. A vehicle behavior recognition device, characterized by comprising a monitoring unit, a first recognition unit, a second recognition unit and a behavior determination unit, wherein:
    the monitoring unit is used to acquire a monitoring image of a road on which a vehicle is driving;
    the first recognition unit is used to recognize the monitoring image with an image recognition model constructed with a convolutional neural network to obtain image features of the monitoring image;
    the second recognition unit is used to obtain a driving situation of the vehicle according to the image features of the monitoring image;
    the behavior determination unit determines a vehicle behavior category from the driving situation obtained from the monitoring image.
  8. The vehicle behavior recognition device according to claim 7, characterized in that the monitoring unit is also used to acquire, in real time, monitoring images of the vehicle driving road, including photos and/or videos.
  9. The vehicle behavior recognition device according to claim 8, characterized in that the first recognition unit is also used to take training images labeled with vehicle driving situations as a training set and train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer; wherein the SCNN algorithm is used to extract image features, and the depth of the convolution kernels applied to lane lines in the monitoring image is greater than the depth of the convolution kernels applied to traffic lights and traffic signs in the monitoring image.
  10. The vehicle behavior recognition device according to claim 8, characterized in that the first recognition unit is also used to take training videos labeled with vehicle driving situations as a training set and train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transfer, and the KCF algorithm; wherein the SCNN algorithm is used to extract image features of video frames, the depth of the convolution kernels applied to lane lines in the monitoring image is greater than the depth of the convolution kernels applied to traffic lights and traffic signs in the monitoring image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
PCT/CN2020/082253 2020-01-13 2020-03-31 Vehicle behavior recognition method and device WO2021142944A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010030247.3 2020-01-13
CN202010030247.3A CN111209880A (zh) 2020-01-13 2020-01-13 Vehicle behavior recognition method and device

Publications (1)

Publication Number Publication Date
WO2021142944A1 true WO2021142944A1 (zh) 2021-07-22

Family

ID=70788807

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082253 WO2021142944A1 (zh) 2020-01-13 2020-03-31 Vehicle behavior recognition method and device

Country Status (2)

Country Link
CN (1) CN111209880A (zh)
WO (1) WO2021142944A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210334550A1 (en) * 2020-04-22 2021-10-28 Pixord Corporation Control system of traffic lights and method thereof
CN114299414A (zh) * 2021-11-30 2022-04-08 无锡数据湖信息技术有限公司 一种基于深度学习的车辆闯红灯识别判定方法
CN116863711A (zh) * 2023-07-29 2023-10-10 广东省交通运输规划研究中心 基于公路监控的车道流量检测方法、装置、设备及介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814766B (zh) * 2020-09-01 2020-12-15 中国人民解放军国防科技大学 车辆行为预警方法、装置、计算机设备和存储介质
CN116168370B (zh) * 2023-04-24 2023-07-18 北京数字政通科技股份有限公司 一种自动驾驶数据识别方法及其系统


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874863B (zh) * 2017-01-24 2020-02-07 南京大学 基于深度卷积神经网络的车辆违停逆行检测方法
CN109784254B (zh) * 2019-01-07 2021-06-25 中兴飞流信息科技有限公司 一种车辆违规事件检测的方法、装置和电子设备
CN109887281B (zh) * 2019-03-01 2021-03-26 北京云星宇交通科技股份有限公司 一种监控交通事件的方法及系统
CN110379172A (zh) * 2019-07-17 2019-10-25 浙江大华技术股份有限公司 交通规则的生成方法及装置、存储介质、电子装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902955A (zh) * 2012-08-30 2013-01-30 中国科学技术大学 一种车辆行为的智能分析方法及系统
CN106355884A (zh) * 2016-11-18 2017-01-25 成都通甲优博科技有限责任公司 一种基于车型分类的高速公路车辆引导系统及方法
CN106886755A (zh) * 2017-01-19 2017-06-23 北京航空航天大学 一种基于交通标志识别的交叉口车辆违章检测系统
CN109637151A (zh) * 2018-12-31 2019-04-16 上海眼控科技股份有限公司 一种高速公路应急车道违章行驶的识别方法
CN110032947A (zh) * 2019-03-22 2019-07-19 深兰科技(上海)有限公司 一种监控事件发生的方法及装置
CN110298300A (zh) * 2019-06-27 2019-10-01 上海工程技术大学 一种检测车辆违章压线的方法
CN111259760A (zh) * 2020-01-13 2020-06-09 南京新一代人工智能研究院有限公司 动态目标行为识别方法、装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210334550A1 (en) * 2020-04-22 2021-10-28 Pixord Corporation Control system of traffic lights and method thereof
US11776259B2 (en) * 2020-04-22 2023-10-03 Pixord Corporation Control system of traffic lights and method thereof
CN114299414A (zh) * 2021-11-30 2022-04-08 无锡数据湖信息技术有限公司 一种基于深度学习的车辆闯红灯识别判定方法
CN114299414B (zh) * 2021-11-30 2023-09-15 无锡数据湖信息技术有限公司 一种基于深度学习的车辆闯红灯识别判定方法
CN116863711A (zh) * 2023-07-29 2023-10-10 广东省交通运输规划研究中心 基于公路监控的车道流量检测方法、装置、设备及介质
CN116863711B (zh) * 2023-07-29 2024-03-29 广东省交通运输规划研究中心 基于公路监控的车道流量检测方法、装置、设备及介质

Also Published As

Publication number Publication date
CN111209880A (zh) 2020-05-29

Similar Documents

Publication Publication Date Title
WO2021142944A1 (zh) 车辆行为识别方法、装置
CN110619279B (zh) 一种基于跟踪的路面交通标志实例分割方法
CN105574543B (zh) 一种基于深度学习的车辆品牌型号识别方法和系统
CN104200207A (zh) 一种基于隐马尔可夫模型的车牌识别方法
Wang et al. Detection and recognition of stationary vehicles and seat belts in intelligent Internet of Things traffic management system
CN114120289A (zh) 一种行车区域与车道线识别方法及系统
Imaduddin et al. Indonesian vehicle license plate number detection using deep convolutional neural network
CN106504542A (zh) 车速智能监控方法和系统
CN111339834B (zh) 车辆行驶方向的识别方法、计算机设备及存储介质
Bichkar et al. Traffic sign classification and detection of Indian traffic signs using deep learning
Jakob et al. Traffic scenarios and vision use cases for the visually impaired
CN112633163B (zh) 一种基于机器学习算法实现非法运营车辆检测的检测方法
CN110148105B (zh) 基于迁移学习和视频帧关联学习的视频分析方法
CN114463755A (zh) 基于高精度地图采集图片中敏感信息自动检测脱敏方法
CN111259760A (zh) 动态目标行为识别方法、装置
Pan et al. A Hybrid Deep Learning Algorithm for the License Plate Detection and Recognition in Vehicle-to-Vehicle Communications
Sari et al. Traffic sign detection and recognition system for autonomous RC cars
Venkatesh et al. An intelligent traffic management system based on the Internet of Things for detecting rule violations
Dutta et al. Smart usage of open source license plate detection and using iot tools for private garage and parking solutions
Umar et al. Traffic violation detection system using image processing
Chhajro et al. Pedestrian Detection Approach for Driver Assisted System using Haar based Cascade Classifiers
Stepanyants et al. A Pipeline for Traffic Accident Dataset Development
US20230281424A1 (en) Method for Extracting Features from Data of Traffic Scenario Based on Graph Neural Network
Kavitha et al. Smart Traffic Violation Detection System Using Artificial Intelligence
Raju et al. Self-Driving Car using Neural Network with Raspberry Pi

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20914380

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20914380

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20914380

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.03.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20914380

Country of ref document: EP

Kind code of ref document: A1