WO2021142944A1 - Vehicle behavior recognition method and device - Google Patents
Vehicle behavior recognition method and device
- Publication number
- WO2021142944A1 (PCT/CN2020/082253)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- image
- monitoring
- algorithm
- driving situation
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Definitions
- the invention relates to the field of vehicle monitoring and management, and in particular to a method and device for vehicle behavior recognition.
- At present, traffic supervision, including determining vehicle driving conditions, classifying vehicle behavior, and detecting violations, relies mainly on real-time supervision by traffic police or on surveillance cameras.
- The problem is that using traffic police to determine vehicle behavior categories and supervise accordingly consumes substantial human resources.
- In addition, it is difficult for traffic police to provide all-weather supervision, and surveillance cameras can usually perform only simple logical checks, for example whether a vehicle is speeding or running a red light.
- the present invention aims to provide a method and device for vehicle behavior recognition.
- An embodiment of the present invention provides a vehicle behavior recognition method, which includes: acquiring a monitoring image of the road on which a vehicle is driving; recognizing the monitoring image with an image recognition model built on a convolutional neural network to obtain the image features of the monitoring image; obtaining the driving situation of the vehicle from the image features of the monitoring image; and determining the vehicle behavior category from the driving situation obtained from the monitoring image.
- Monitoring images of the vehicle's driving road, including photos and/or videos, are acquired in real time.
- A training set of images labeled with vehicle driving situations is used to train a convolutional neural network based on the SCNN algorithm, which uses residual-style information transmission; the SCNN algorithm is used to extract image features.
- The depth of the convolution kernel applied to lane lines in the surveillance image is greater than the depth of the convolution kernel applied to traffic lights and traffic signs in the surveillance image.
- A training set of videos labeled with vehicle driving situations is used to train a convolutional neural network based on the SCNN algorithm (with residual-style information transmission) and the KCF algorithm; the SCNN algorithm is used to extract the image features of each video frame.
- The depth of the convolution kernel applied to lane lines in the surveillance image is greater than the depth of the convolution kernel applied to traffic lights and traffic signs in the surveillance image.
- The KCF algorithm is used to track, across video frames, the image features of a specific vehicle extracted by the SCNN algorithm.
- The convolutional neural network in the image recognition model determines at least one of the following traffic relationships from the image features of the monitoring image: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between vehicles, and the speed of the vehicle; the vehicle driving situation is then determined from these traffic relationships and the preset vehicle traffic regulations.
- Based on the preset correspondence between vehicle behavior categories and driving situations, the vehicle behavior category to which the driving situation obtained from the monitoring image belongs is determined.
- An embodiment of the present invention provides a vehicle behavior recognition device, which includes a monitoring unit, a first recognition unit, a second recognition unit, and a behavior determination unit. The monitoring unit is used to obtain a monitoring image of the vehicle's driving road; the first recognition unit is used to recognize the monitoring image with an image recognition model built on a convolutional neural network and obtain the image features of the monitoring image; the second recognition unit is used to obtain the driving situation of the vehicle from those image features; and the behavior determination unit determines the vehicle behavior category from the driving situation obtained from the monitoring image.
- The monitoring unit is also used to obtain, in real time, monitoring images of the vehicle's driving road, including photos and/or videos.
- The first recognition unit is further configured to use training images labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm with residual-style information transmission; the SCNN algorithm is used to extract image features.
- The depth of the convolution kernel applied to lane lines in the surveillance image is greater than the depth of the convolution kernel applied to traffic lights and traffic signs in the surveillance image.
- The first recognition unit is also used to take training videos labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm (with residual-style information transmission) and the KCF algorithm;
- the SCNN algorithm is used to extract the image features of each video frame, with a deeper convolution kernel applied to lane lines than to traffic lights and traffic signs;
- the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
- The present invention has the following significant advantages: the image recognition model built on a neural network recognizes the acquired monitoring images, determines the driving situation and behavior category of the vehicle, and determines whether and how the vehicle violates regulations, enabling all-weather, all-round, efficient, and accurate automatic traffic supervision.
- FIG. 1 is a schematic flowchart of a vehicle behavior recognition method provided in an embodiment of the present invention
- Figures 2 and 3 are the vehicle behavior category tables provided in the embodiment of the present invention.
- FIG. 1 is a schematic flowchart of the vehicle behavior recognition method provided in an embodiment of the present invention; its steps are described in detail below.
- Step S101: Obtain a monitoring image of the vehicle's driving road.
- monitoring images including photos or/and videos of the vehicle driving road are acquired in real time.
- The surveillance video may consist of video and images captured from outside the vehicle that show the vehicle in motion and reflect the positional relationship between the vehicle and its surroundings.
- Step S102: The monitoring image is recognized using the image recognition model built on a convolutional neural network to obtain the image features of the monitoring image.
- Before this step, a training set of images labeled with vehicle driving situations is used to train a convolutional neural network based on the SCNN algorithm with residual-style information transmission; the SCNN algorithm is used to extract image features, and the convolution kernel applied to lane lines in the surveillance image is deeper than the kernel applied to traffic lights and traffic signs.
- Alternatively, a training set of videos labeled with vehicle driving situations is used to train a convolutional neural network based on the SCNN algorithm (with residual-style information transmission) and the KCF algorithm;
- the SCNN algorithm extracts the image features of each video frame, with a deeper convolution kernel applied to lane lines in the surveillance image than to traffic lights and traffic signs, and the KCF algorithm tracks the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
- The labels in the training set can be applied to the objects that need to be identified, such as vehicles, traffic lights, traffic signs, and lane lines.
- An image recognition algorithm based on a convolutional neural network can be trained in depth, can locate dynamic targets and other targets that require image feature extraction more precisely, and can extract image features accurately, ensuring highly accurate recognition results.
- SCNN (Sequential Convolutional Neural Network) algorithms are conventionally used to analyze images of the surroundings from the driving perspective of the vehicle itself.
- The embodiment of the present invention instead applies SCNN to monitoring images to obtain image features, which allows the behavior category of the vehicle to be determined more completely.
- For more conspicuous targets, a low-depth convolution kernel can be used for feature extraction, while for less obvious objects and markings such as lane lines a high-depth convolution kernel is used; this improves the efficiency of behavior recognition.
- After training, the SCNN algorithm can identify lane lines, traffic lights, traffic signs, and moving vehicles. Once the image features are recognized, they can be fed to the KCF algorithm as input, and KCF can be used to keep tracking the same target vehicle, which improves the accuracy of the recognition results.
- The KCF (Kernelized Correlation Filter) algorithm is a tracking algorithm suitable for high-precision target tracking; it avoids losing a dynamic target in video segments where the target is momentarily absent and improves the accuracy of the recognition results.
- The KCF tracking algorithm tracks the image features of the dynamic target extracted from the video frames, preventing loss of the dynamic target and ensuring accurate recognition results.
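To make the kernelized-correlation idea concrete, the following minimal NumPy sketch shows a toy KCF-style filter (an illustration of the general technique, not the patent's implementation; all function names and parameter values here are invented): it trains a Gaussian-kernel correlation filter on one patch and then locates a circularly shifted copy of that patch.

```python
import numpy as np

def gaussian_kernel_correlation(x1, x2, sigma=0.5):
    # Kernel correlation of two patches, computed efficiently via the FFT
    c = np.fft.ifft2(np.fft.fft2(x1) * np.conj(np.fft.fft2(x2))).real
    d = (x1 ** 2).sum() + (x2 ** 2).sum() - 2.0 * c
    return np.exp(-np.maximum(d, 0.0) / (sigma ** 2 * x1.size))

def train_filter(patch, sigma_target=2.0, lam=1e-4):
    # Desired response: a Gaussian peak at zero shift (rolled to position (0, 0))
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    y = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma_target ** 2))
    y = np.roll(y, (-(h // 2), -(w // 2)), axis=(0, 1))
    k = gaussian_kernel_correlation(patch, patch)
    # Kernel ridge regression solved in the Fourier domain
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def detect(alpha, model_patch, new_patch):
    # Response map; the argmax gives the translation of the target
    k = gaussian_kernel_correlation(new_patch, model_patch)
    return np.fft.ifft2(alpha * np.fft.fft2(k)).real
```

For a patch shifted circularly by (3, 5), the argmax of the response map lands at (3, 5), which is exactly the displacement a tracker would apply to follow the target between frames.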
- The video frames used for recognition can be chosen by the user according to the application: using every frame in the video ensures that each behavior of the target vehicle is analyzed and classified, while using only a subset of extracted frames improves recognition efficiency.
- The information transmission of the SCNN algorithm uses the following formula:

  X'_{i,j,k} = X_{i,j,k}, for j = 1;
  X'_{i,j,k} = X_{i,j,k} + f( Σ_m Σ_n X'_{m, j-1, k+n-1} · K_{m,i,n} ), for j > 1;

  where X_{i,j,k} is the input three-dimensional tensor, X'_{i,j,k} is the output (last-updated) three-dimensional tensor, i is the channel index, j is the row index, k is the column index, m is the accumulation index over channels, n is the accumulation index over the kernel width, X'_{m,j-1,k+n-1} is the last-updated tensor at the previous row, K_{m,i,n} is its corresponding weight, and f(·) is the ReLU function.
- Because the residual-style information transmission method is adopted, the network is easier to train and the information transmission is more effective.
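As an illustration only, the row-by-row residual update above can be rendered literally in NumPy; the tensor layout (channels, rows, columns) and the kernel shape are assumptions made for this sketch, not details given in the patent.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def scnn_downward_pass(X, K):
    """Top-to-bottom SCNN-style message passing.

    X: input tensor of shape (C, H, W)  -- channels, rows, columns
    K: weights of shape (C, C, w)       -- K[m, i, n]: input channel m,
                                           output channel i, kernel column n
    Returns the updated tensor X'.
    """
    C, H, W = X.shape
    _, _, w = K.shape
    pad = w // 2
    Xp = X.astype(float).copy()  # first row: X'_{i,1,k} = X_{i,1,k}
    for j in range(1, H):
        # the "last updated" row, zero-padded at both ends of the column axis
        prev = np.pad(Xp[:, j - 1, :], ((0, 0), (pad, pad)))
        for i in range(C):
            for k in range(W):
                # sum over channels m and kernel offsets n, then residual add
                s = float((prev[:, k:k + w] * K[:, i, :]).sum())
                Xp[i, j, k] = X[i, j, k] + relu(s)
    return Xp
```

With a 1x1 identity-like kernel on an all-ones input, each row accumulates the ReLU of the previous updated row, which makes the residual accumulation easy to check by hand.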
- Step S103: Obtain the driving situation of the vehicle from the image features of the monitoring image.
- When the monitoring image is a photo, the static driving situation of the vehicle can be obtained through the image features; when the monitoring image is a video, the continuous driving situation of the vehicle can be obtained from the successive video frames.
- The convolutional neural network in the image recognition model determines at least one of the following traffic relationships from the image features of the monitoring image: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between vehicles, and the speed of the vehicle;
- lane lines, traffic lights, and traffic signs each correspond to different traffic regulations.
- When the targeted surveillance image is a photo, the traffic regulations the vehicle must follow at that moment are determined from the lane lines, traffic lights, and traffic signs in the photo; when the targeted surveillance image is a video, the traffic regulations the vehicle must follow during the corresponding period of time or distance are determined from the lane lines, traffic lights, and traffic signs in the video.
- The preset vehicle traffic regulations include the positional relationship between the target vehicle and other vehicles as well as other generally applicable traffic rules; for example, the distance between vehicles must exceed a certain value, and contact with another vehicle constitutes a rear-end collision or crash.
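To illustrate how the determined traffic relationships and the preset regulations could be combined into a driving situation, here is a pure-Python sketch; every relation name, threshold, and rule below is invented for illustration and is not taken from the patent.

```python
def classify_driving_situation(relations, min_gap_m=10.0):
    """Map hypothetical traffic relationships to driving-situation labels.

    relations: dict of relation names (all names here are illustrative).
    Returns a list of violation labels, or ["normal driving"] if none apply.
    """
    violations = []
    # relationship between the vehicle and the traffic light
    if relations.get("light") == "red" and relations.get("crossed_stop_line"):
        violations.append("ran red light")
    # positional relationship between the vehicle and the lane line
    if relations.get("on_lane_line") and relations.get("lane_line_type") == "solid":
        violations.append("crossed solid lane line")
    # vehicle speed versus the speed limit from a traffic sign
    speed = relations.get("speed_kmh")
    limit = relations.get("speed_limit_kmh")
    if speed is not None and limit is not None and speed > limit:
        violations.append("speeding")
    # positional relationship between vehicles (gap rule from the regulations)
    gap = relations.get("gap_to_vehicle_ahead_m", float("inf"))
    if gap <= 0:
        violations.append("rear-end collision or contact")
    elif gap < min_gap_m:
        violations.append("following too closely")
    return violations or ["normal driving"]
```

A vehicle with no flagged relations falls through every rule and is reported as normal driving, mirroring the idea that the driving situation is the joint outcome of all the determined traffic relationships.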
- Step S104: The vehicle behavior category is determined from the vehicle driving situation obtained from the monitoring image.
- Based on the preset correspondence between vehicle behavior categories and driving situations, the behavior category to which the driving situation obtained from the monitoring image belongs is determined.
- That is, once the driving situation is known, the behavior category of the vehicle can be determined accordingly.
- A behavior category includes the behavior label, whether the behavior violates regulations, and the violation level.
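The preset correspondence between driving situations and behavior categories (behavior label, violation flag, violation level) can be pictured as a simple lookup table; the entries below are illustrative placeholders, not the actual tables of Figures 2 and 3.

```python
# Illustrative placeholder entries; the real correspondence tables are the
# vehicle behavior category tables of Figures 2 and 3.
BEHAVIOR_CATEGORIES = {
    "ran red light":  {"label": "red-light running", "violation": True,  "level": "serious"},
    "speeding":       {"label": "speeding",          "violation": True,  "level": "serious"},
    "normal driving": {"label": "normal",            "violation": False, "level": None},
}

def behavior_category(driving_situation):
    # Unknown situations fall back to a non-violation placeholder category
    return BEHAVIOR_CATEGORIES.get(
        driving_situation,
        {"label": "unclassified", "violation": False, "level": None},
    )
```

The lookup returns the full category record at once, so downstream traffic management can read the label, the violation flag, and the violation level from a single result.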
- The vehicle behavior category can be used for vehicle traffic management, enabling all-weather, all-round, efficient, and accurate traffic supervision.
- The advantage is that the method can be applied flexibly, avoiding poor recognition quality and target loss; it determines the driving situation of the vehicle from the traffic relationships and traffic regulations and then determines the behavior category, enabling complete and efficient real-time monitoring of target vehicles, including dynamic targets, while ensuring accurate recognition results.
- the embodiment of the present invention also provides a vehicle behavior recognition device, which includes a monitoring unit, a first recognition unit, a second recognition unit, and a behavior determination unit, wherein:
- the monitoring unit is used to obtain a monitoring image of a vehicle driving road
- the first recognition unit is configured to recognize the monitoring image by using an image recognition model constructed by a convolutional neural network to obtain image characteristics of the monitoring image;
- the second recognition unit is used to obtain the driving situation of the vehicle according to the image characteristics of the monitoring image
- the behavior determination unit determines the type of vehicle behavior based on the driving situation of the vehicle obtained from the monitoring image.
- the monitoring unit is also used to obtain real-time monitoring images including photos or/and videos of the driving road of the vehicle.
- the first recognition unit is also used to use a training image with a label of a vehicle driving situation as a training set to train a convolutional neural network based on the SCNN algorithm that uses residual information to transfer information
- the SCNN algorithm is used to extract image features, and the depth of the convolution kernel used for the lane lines in the surveillance image is higher than the depth of the convolution kernel used for the traffic lights and traffic signs in the surveillance image.
- The first recognition unit is also used to take training videos labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm and the KCF algorithm; the SCNN algorithm is used to extract the image features of each video frame.
- the depth of the convolution kernel used for the lane lines in the surveillance image is higher than the depth of the convolution kernel used for the traffic lights and traffic signs in the surveillance image.
- The KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
- The second recognition unit is also used so that the convolutional neural network in the image recognition model determines at least one of the following traffic relationships from the image features of the monitoring image: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between vehicles, and the speed of the vehicle; the vehicle driving situation is then determined from the traffic relationships and the preset vehicle traffic regulations.
- The behavior determination unit is further configured to determine, based on the preset correspondence between vehicle behavior categories and vehicle driving situations, the vehicle behavior category to which the driving situation obtained from the monitoring image belongs.
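The four units of the device can be sketched as a small pipeline class; the callables passed in are placeholders standing in for the real monitoring and recognition components described above.

```python
class VehicleBehaviorRecognizer:
    """Sketch of the four-unit device; the unit implementations are placeholders."""

    def __init__(self, monitor, recognize_features, infer_situation, judge_category):
        self.monitor = monitor                        # monitoring unit
        self.recognize_features = recognize_features  # first recognition unit (CNN model)
        self.infer_situation = infer_situation        # second recognition unit
        self.judge_category = judge_category          # behavior determination unit

    def run(self):
        # Mirror the four claimed steps: image -> features -> situation -> category
        image = self.monitor()
        features = self.recognize_features(image)
        situation = self.infer_situation(features)
        return self.judge_category(situation)
```

Wiring the units through a constructor keeps each one independently replaceable, which matches the patent's decomposition into a monitoring unit, two recognition units, and a behavior determination unit.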
Abstract
Description
Claims (10)
- A vehicle behavior recognition method, characterized by comprising: acquiring a monitoring image of the road on which a vehicle is driving; recognizing the monitoring image with an image recognition model built on a convolutional neural network to obtain the image features of the monitoring image; obtaining the driving situation of the vehicle from the image features of the monitoring image; and determining the vehicle behavior category from the driving situation obtained from the monitoring image.
- The vehicle behavior recognition method according to claim 1, characterized in that acquiring a monitoring image of the vehicle's driving road comprises: acquiring, in real time, monitoring images of the vehicle's driving road including photos and/or videos.
- The vehicle behavior recognition method according to claim 2, characterized in that, before the recognition with the image recognition model built on a convolutional neural network, the method comprises: using training images labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm with residual-style information transmission; wherein the SCNN algorithm is used to extract image features, and the depth of the convolution kernel applied to lane lines in the monitoring image is greater than the depth of the convolution kernel applied to traffic lights and traffic signs in the monitoring image.
- The vehicle behavior recognition method according to claim 2, characterized in that, before the recognition with the image recognition model built on a convolutional neural network, the method comprises: using training videos labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm with residual-style information transmission and the KCF algorithm; wherein the SCNN algorithm is used to extract the image features of video frames, the depth of the convolution kernel applied to lane lines in the monitoring image is greater than the depth of the convolution kernel applied to traffic lights and traffic signs in the monitoring image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
- The vehicle behavior recognition method according to claim 3 or 4, characterized in that obtaining the driving situation of the vehicle from the image features of the monitoring image comprises: the convolutional neural network in the image recognition model determining, from the image features of the monitoring image, at least one of the following traffic relationships: the positional relationship between the vehicle and the lane lines, the relationship between the vehicle and the traffic lights, the relationship between the vehicle and the traffic signs, the positional relationship between vehicles, and the driving speed of the vehicle; and determining the vehicle driving situation from the traffic relationships and the preset vehicle traffic regulations.
- The vehicle behavior recognition method according to claim 5, characterized in that determining the vehicle behavior category from the driving situation obtained from the monitoring image comprises: determining, according to the preset correspondence between vehicle behavior categories and vehicle driving situations, the vehicle behavior category to which the driving situation obtained from the monitoring image belongs.
- A vehicle behavior recognition device, characterized by comprising a monitoring unit, a first recognition unit, a second recognition unit, and a behavior determination unit, wherein: the monitoring unit is used to acquire a monitoring image of the road on which a vehicle is driving; the first recognition unit is used to recognize the monitoring image with an image recognition model built on a convolutional neural network to obtain the image features of the monitoring image; the second recognition unit is used to obtain the driving situation of the vehicle from the image features of the monitoring image; and the behavior determination unit determines the vehicle behavior category from the driving situation obtained from the monitoring image.
- The vehicle behavior recognition device according to claim 7, characterized in that the monitoring unit is further configured to acquire, in real time, monitoring images of the vehicle's driving road including photos and/or videos.
- The vehicle behavior recognition device according to claim 8, characterized in that the first recognition unit is further configured to use training images labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm with residual-style information transmission; wherein the SCNN algorithm is used to extract image features, and the depth of the convolution kernel applied to lane lines in the monitoring image is greater than the depth of the convolution kernel applied to traffic lights and traffic signs in the monitoring image.
- The vehicle behavior recognition device according to claim 8, characterized in that the first recognition unit is further configured to use training videos labeled with vehicle driving situations as a training set to train a convolutional neural network based on the SCNN algorithm with residual-style information transmission and the KCF algorithm; wherein the SCNN algorithm is used to extract the image features of video frames, the depth of the convolution kernel applied to lane lines in the monitoring image is greater than the depth of the convolution kernel applied to traffic lights and traffic signs in the monitoring image, and the KCF algorithm is used to track the image features of a specific vehicle in the video frames extracted by the SCNN algorithm.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010030247.3 | 2020-01-13 | ||
CN202010030247.3A CN111209880A (zh) | 2020-01-13 | 2020-01-13 | Vehicle behavior recognition method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021142944A1 true WO2021142944A1 (zh) | 2021-07-22 |
Family
ID=70788807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/082253 WO2021142944A1 (zh) | 2020-01-13 | 2020-03-31 | Vehicle behavior recognition method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111209880A (zh) |
WO (1) | WO2021142944A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111814766B (zh) * | 2020-09-01 | 2020-12-15 | 中国人民解放军国防科技大学 | Vehicle behavior early-warning method and device, computer equipment and storage medium |
CN116168370B (zh) * | 2023-04-24 | 2023-07-18 | 北京数字政通科技股份有限公司 | Automatic driving data recognition method and system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102902955A (zh) * | 2012-08-30 | 2013-01-30 | 中国科学技术大学 | Intelligent analysis method and system for vehicle behavior |
CN106355884A (zh) * | 2016-11-18 | 2017-01-25 | 成都通甲优博科技有限责任公司 | Highway vehicle guidance system and method based on vehicle type classification |
CN106886755A (zh) * | 2017-01-19 | 2017-06-23 | 北京航空航天大学 | Intersection vehicle violation detection system based on traffic sign recognition |
CN109637151A (zh) * | 2018-12-31 | 2019-04-16 | 上海眼控科技股份有限公司 | Recognition method for illegal driving in highway emergency lanes |
CN110032947A (zh) * | 2019-03-22 | 2019-07-19 | 深兰科技(上海)有限公司 | Method and device for monitoring the occurrence of an event |
CN110298300A (zh) * | 2019-06-27 | 2019-10-01 | 上海工程技术大学 | Method for detecting vehicles illegally crossing lane lines |
CN111259760A (zh) * | 2020-01-13 | 2020-06-09 | 南京新一代人工智能研究院有限公司 | Dynamic target behavior recognition method and device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874863B (zh) * | 2017-01-24 | 2020-02-07 | 南京大学 | Vehicle illegal-parking and wrong-way-driving detection method based on a deep convolutional neural network |
CN109784254B (zh) * | 2019-01-07 | 2021-06-25 | 中兴飞流信息科技有限公司 | Method, device and electronic equipment for vehicle violation event detection |
CN109887281B (zh) * | 2019-03-01 | 2021-03-26 | 北京云星宇交通科技股份有限公司 | Method and system for monitoring traffic events |
CN110379172A (zh) * | 2019-07-17 | 2019-10-25 | 浙江大华技术股份有限公司 | Traffic rule generation method and device, storage medium, and electronic device |
- 2020-01-13: CN application CN202010030247.3A filed (published as CN111209880A, status: pending)
- 2020-03-31: PCT application PCT/CN2020/082253 filed (published as WO2021142944A1, application filing)
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210334550A1 (en) * | 2020-04-22 | 2021-10-28 | Pixord Corporation | Control system of traffic lights and method thereof |
US11776259B2 (en) * | 2020-04-22 | 2023-10-03 | Pixord Corporation | Control system of traffic lights and method thereof |
CN114299414A (zh) * | 2021-11-30 | 2022-04-08 | 无锡数据湖信息技术有限公司 | Deep-learning-based method for recognizing and determining red-light running by vehicles |
CN114299414B (zh) * | 2021-11-30 | 2023-09-15 | 无锡数据湖信息技术有限公司 | Deep-learning-based method for recognizing and determining red-light running by vehicles |
CN116863711A (zh) * | 2023-07-29 | 2023-10-10 | 广东省交通运输规划研究中心 | Lane traffic flow detection method, apparatus, device and medium based on highway monitoring |
CN116863711B (zh) * | 2023-07-29 | 2024-03-29 | 广东省交通运输规划研究中心 | Lane traffic flow detection method, apparatus, device and medium based on highway monitoring |
Also Published As
Publication number | Publication date |
---|---|
CN111209880A (zh) | 2020-05-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021142944A1 (zh) | Vehicle behavior recognition method and device | |
CN110619279A (zh) | Tracking-based instance segmentation method for road traffic signs | |
CN105574543B (zh) | Deep-learning-based vehicle make and model recognition method and system | |
CN104200207A (zh) | License plate recognition method based on hidden Markov models | |
Wang et al. | Detection and recognition of stationary vehicles and seat belts in intelligent Internet of Things traffic management system | |
CN114120289A (zh) | Driving area and lane line recognition method and system | |
Imaduddin et al. | Indonesian vehicle license plate number detection using deep convolutional neural network | |
CN106504542A (zh) | Intelligent vehicle speed monitoring method and system | |
CN111339834B (zh) | Method for recognizing vehicle driving direction, computer equipment and storage medium | |
Bichkar et al. | Traffic sign classification and detection of Indian traffic signs using deep learning | |
Jakob et al. | Traffic scenarios and vision use cases for the visually impaired | |
CN112633163B (zh) | Detection method for illegal operating vehicles based on machine learning algorithms | |
CN110148105B (zh) | Video analysis method based on transfer learning and video frame association learning | |
CN114463755A (zh) | Automatic detection and desensitization method for sensitive information in images collected for high-precision maps | |
CN111259760A (zh) | Dynamic target behavior recognition method and device | |
Pan et al. | A Hybrid Deep Learning Algorithm for the License Plate Detection and Recognition in Vehicle-to-Vehicle Communications | |
Sari et al. | Traffic sign detection and recognition system for autonomous RC cars | |
Venkatesh et al. | An intelligent traffic management system based on the Internet of Things for detecting rule violations | |
Dutta et al. | Smart usage of open source license plate detection and using iot tools for private garage and parking solutions | |
Umar et al. | Traffic violation detection system using image processing | |
Chhajro et al. | Pedestrian Detection Approach for Driver Assisted System using Haar based Cascade Classifiers | |
Stepanyants et al. | A Pipeline for Traffic Accident Dataset Development | |
US20230281424A1 (en) | Method for Extracting Features from Data of Traffic Scenario Based on Graph Neural Network | |
Kavitha et al. | Smart Traffic Violation Detection System Using Artificial Intelligence | |
Raju et al. | Self-Driving Car using Neural Network with Raspberry Pi |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20914380; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20914380; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.03.2023) |