CN111126261B - Video data analysis method and device, Raspberry Pi device and readable storage medium - Google Patents

Video data analysis method and device, Raspberry Pi device and readable storage medium

Info

Publication number
CN111126261B
CN111126261B
Authority
CN
China
Prior art keywords
road
vehicle
characteristic points
video
speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911339888.0A
Other languages
Chinese (zh)
Other versions
CN111126261A (en)
Inventor
柯锐珉
蒲自源
庄一帆
史传辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute Tsinghua University
Original Assignee
Shenzhen Research Institute Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute Tsinghua University
Priority to CN201911339888.0A
Publication of CN111126261A
Application granted
Publication of CN111126261B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G08G1/0133 Traffic data processing for classifying traffic situation
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/056 Detecting movement of traffic to be counted or controlled with provision for distinguishing direction of travel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Analytical Chemistry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Chemical & Material Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a video data analysis method, which comprises the following steps: receiving video information for monitoring road conditions; extracting and tracking feature points in the video information; filtering feature points in the video background to obtain feature points of the vehicle; obtaining a road contour according to the feature points of the vehicle; extracting a target vehicle in the video information based on the road contour; and identifying a category of the target vehicle. The application also provides a video data analysis device, a Raspberry Pi device, and a storage medium. With the method and the device, real-time, high-precision vehicle and road contour detection can be performed on a Raspberry Pi device with limited computing power.

Description

Video data analysis method and device, Raspberry Pi device and readable storage medium
Technical Field
The invention relates to the technical field of computers, and in particular to a video data analysis method and device, a Raspberry Pi device, and a readable storage medium.
Background
In an intelligent traffic system, accurately extracting road and traffic flow information in real time from videos captured by cameras is one basis for guaranteeing various system functions. In present-day cities, however, most cameras only record road traffic conditions and cannot detect traffic conditions in real time.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a video data analysis method and apparatus, a Raspberry Pi apparatus, and a readable storage medium that can perform real-time, high-precision vehicle and road contour detection on a Raspberry Pi device with limited computing power.
A first aspect of the present application provides a video data analysis method, the method comprising:
receiving video information for monitoring road conditions;
extracting and tracking feature points in the video information;
filtering feature points in the video background to obtain feature points of the vehicle;
obtaining a road contour according to the feature points of the vehicle;
extracting a target vehicle in the video information based on the road contour; and
identifying a category of the target vehicle.
Further, filtering the feature points in the video background to obtain the feature points of the vehicle includes:
acquiring the moving speed of the feature points;
comparing the speed of the feature points with a speed threshold;
filtering out a feature point when its speed is less than or equal to the speed threshold; and
retaining a feature point when its speed is greater than the speed threshold.
Further, obtaining the road contour according to the feature points of the vehicle includes:
acquiring the moving direction and position information of the feature points of the vehicle;
clustering according to the moving direction and the position information of the feature points to obtain a plurality of clusters; and
obtaining the road contour according to the shape of each cluster.
Further, identifying the category of the target vehicle includes:
inputting the target vehicle into a pre-trained vehicle type recognition model;
acquiring a recognition result output by the vehicle type recognition model; and
determining the category of the target vehicle according to the recognition result.
Further, the method further comprises:
extracting a road area in the video information according to the road contour to obtain a road area image;
acquiring parameter values of the road area image, wherein the parameter values comprise gray values and reflection values; and
analyzing the road state according to the parameter values and received temperature and humidity information.
A second aspect of the present application provides a video data analysis apparatus, the apparatus comprising:
the receiving module is used for receiving video information of the monitored road condition;
the extraction module is used for extracting and tracking feature points in the video information;
the filtering module is used for filtering feature points in the video background to obtain feature points of the vehicle;
the processing module is used for obtaining a road contour according to the feature points of the vehicle;
the extraction module is also used for extracting a target vehicle in the video information based on the road contour; and
the identification module is used for identifying the category of the target vehicle.
Further, the filtering module is specifically configured to perform the following operations:
acquiring the moving speed of the feature points;
comparing the speed of the feature points with a speed threshold;
filtering out a feature point when its speed is less than or equal to the speed threshold; and
retaining a feature point when its speed is greater than the speed threshold.
Further, the processing module is specifically configured to perform the following operations:
acquiring the moving direction and position information of the feature points of the vehicle;
clustering according to the moving direction and the position information of the feature points to obtain a plurality of clusters; and
obtaining the road contour according to the shape of each cluster.
A third aspect of the present application provides a Raspberry Pi apparatus comprising a processor, the processor being configured to implement the video data analysis method when executing a computer program stored in a memory.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the video data analysis method.
According to the method, the feature points of the vehicles in the video information are extracted, the road contour is obtained according to the feature points of the vehicles, the target vehicles in the video information are extracted based on the road contour, and the category of the target vehicle is identified. Real-time, high-precision vehicle and road contour detection can thus be performed on a Raspberry Pi device with limited computing power, so that edge computing greatly reduces the amount of data that must be transmitted.
Drawings
Fig. 1 is a flowchart of a video data analysis method according to an embodiment of the present invention.
Fig. 2 is a block diagram of a video data analysis device according to a second embodiment of the present invention.
Fig. 3 is a schematic diagram of a Raspberry Pi device according to a third embodiment of the invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, and the described embodiments are merely some, rather than all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Preferably, the traffic monitoring method of the present invention is applied to one or more Raspberry Pi devices. In this application, the following description is given by way of example. The Raspberry Pi (abbreviated RPi) was designed for computer programming education and is a credit-card-sized, Linux-based microcomputer.
The Raspberry Pi is an ARM-based microcomputer motherboard with strong data processing capability. It uses an SD or MicroSD card as its storage disk, provides USB interfaces for connecting a keyboard, a mouse, and a network cable, and has a television output interface for analog video signals as well as an HDMI high-definition video output interface. All components are integrated on a motherboard only slightly larger than a credit card, yet it has the basic functions of a full computer: with only a display and a keyboard connected, it can run spreadsheets, word processing, games, high-definition video playback, and other functions. The storage card holds the operating system, software tools, and processing data of the Raspberry Pi so that it can operate normally, and the installed software tools extend its functions and make data processing more convenient. The Raspberry Pi operating system is Linux-based, such as Ubuntu, Fedora, or Debian; the software tools may be Python tools, Java tools, and the like.
Example 1
Fig. 1 is a flowchart of a video data analysis method according to an embodiment of the present invention. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs. For convenience of explanation, only portions relevant to the embodiments of the present invention are shown.
As shown in fig. 1, the video data analysis method specifically includes the following steps:
step S1: video information is received that monitors the condition of the road.
In this embodiment, the video information is video captured by a camera fixedly installed on a road, for example, a camera installed on a highway or a national road. The camera sends the video information of the road condition captured in real time to the Raspberry Pi computer.
Step S2: feature points in the video information are extracted and tracked.
In the embodiment of the invention, a feature point is defined as follows: in a video image, the gray value of a point S is compared with the gray values of its neighboring pixels; if there are N consecutive pixels l_k (k = 1, 2, …, N) on a circle around S such that the absolute difference between the gray value of each l_k and the gray value of S is greater than a set threshold, then S is the desired feature point.
In the present embodiment, the feature points in the video information are extracted by a sparse optical flow method. The sparse optical flow method has the advantages of a small computational load and accurate optical flow estimation, and is therefore widely used in real-time systems.
In other embodiments of the present invention, the feature points in the video information may be extracted by other existing or future algorithms or methods for extracting feature points in the video, and the method for extracting feature points is not limited in the present invention.
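For illustration, one possible realization of this step on a Raspberry Pi is sketched below, assuming OpenCV's FAST corner detector (which implements the circle-based gray-value criterion described above) and the pyramidal Lucas-Kanade sparse optical flow; the specific functions and parameter values are assumptions of this sketch rather than requirements of the embodiment.

```python
import cv2
import numpy as np

# Assumed building blocks: FAST corners + pyramidal Lucas-Kanade sparse optical flow.
fast = cv2.FastFeatureDetector_create(threshold=25)  # gray-difference threshold of the corner test
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def extract_points(gray):
    """Detect feature points in a grayscale frame (step S2, extraction)."""
    keypoints = fast.detect(gray, None)
    return np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)

def track_points(prev_gray, gray, prev_pts):
    """Track feature points from the previous frame into the current one (step S2, tracking)."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None, **lk_params)
    good = status.reshape(-1) == 1
    return prev_pts[good], next_pts[good]
```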
Step S3: and filtering the characteristic points in the video background to obtain the characteristic points of the vehicle.
In the present embodiment, since the camera is fixedly installed on the road, the background in the video information captured by the camera is stationary while the vehicles are moving. Therefore, the moving speed of the feature points can be used to extract the feature points of the vehicles in the video: feature points whose speed is small or zero are taken to belong to the video background, so they are filtered out and the feature points of the vehicles are obtained.
Specifically, filtering the feature points in the video background to obtain the feature points of the vehicle includes:
acquiring the moving speed of the feature points;
comparing the speed of the feature points with a speed threshold;
filtering out a feature point when its speed is less than or equal to the speed threshold; and
retaining a feature point when its speed is greater than the speed threshold.
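A minimal sketch of this speed-based filtering is shown below, continuing the tracking sketch above; using the per-frame displacement in pixels as the speed and a threshold of 2.0 pixels are assumptions for illustration only.

```python
import numpy as np

def filter_background_points(prev_pts, next_pts, speed_threshold=2.0):
    """Keep only feature points whose per-frame displacement exceeds the threshold (step S3)."""
    disp = np.linalg.norm(next_pts.reshape(-1, 2) - prev_pts.reshape(-1, 2), axis=1)
    moving = disp > speed_threshold          # background points move slowly or not at all
    return next_pts.reshape(-1, 2)[moving], disp[moving]
```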
Step S4: and obtaining the road contour according to the characteristic points of the vehicle.
Because vehicles on the road travel in particular directions, after the feature points belonging to the video background have been filtered out over a period of time, the accumulated feature points of the vehicles cover most of the road area. Cluster analysis is then performed on the positions and moving directions of these feature points, and the contour of each resulting cluster is the contour of the road in one direction of travel.
Specifically, obtaining the road contour from the feature points of the vehicle includes:
acquiring the moving direction and position information of the feature points of the vehicle;
clustering according to the moving direction and the position information of the feature points to obtain a plurality of clusters; and
obtaining the road contour according to the shape of each cluster.
In this embodiment, the positions and directions of the feature points are clustered by the density-based clustering algorithm DBSCAN (Density-Based Spatial Clustering of Applications with Noise), which can find clusters of arbitrary shape. A fundamental difference between DBSCAN and many other methods is that it groups points by density rather than by distance to preset cluster centers, so the number of clusters does not need to be set in advance but is obtained from the data itself. Since the number of roads and travel directions differs from one video scene to another, the DBSCAN clustering algorithm can still accurately obtain the road contour.
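An illustrative sketch of this clustering step follows, assuming scikit-learn's DBSCAN and a convex hull as the "shape" of each cluster; the eps, min_samples, and direction-weighting values are assumptions, not values taken from the embodiment.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def road_contours(points, directions, eps=30.0, min_samples=20, dir_weight=50.0):
    """Cluster accumulated vehicle feature points by position and moving direction (step S4).

    points:     (N, 2) array of pixel positions
    directions: (N,) array of motion angles in radians
    The direction is embedded as a scaled unit vector so that points moving in
    opposite directions fall into different clusters.
    """
    features = np.hstack([points,
                          dir_weight * np.cos(directions)[:, None],
                          dir_weight * np.sin(directions)[:, None]])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    contours = []
    for label in set(labels) - {-1}:                     # label -1 marks noise points
        cluster = points[labels == label].astype(np.float32)
        contours.append(cv2.convexHull(cluster))         # contour of the road in one direction
    return contours
```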
Preferably, the method further comprises: analyzing the road state according to the road contour and received temperature and humidity information.
Specifically, a road area in the video information is extracted according to the road contour to obtain a road area image; parameter values of the road area image are acquired, wherein the parameter values comprise gray values and reflection values; and the road state is analyzed according to the parameter values and the received temperature and humidity information.
In this embodiment, a temperature sensor and/or a humidity sensor may be mounted on the camera. The temperature sensor and/or the humidity sensor detect the temperature and humidity of the current road environment and send the measured temperature and/or humidity values to the Raspberry Pi, which can then judge the road surface condition from the image parameter values together with the received temperature and humidity information. For example, a higher gray value of the road area image corresponds to a lower ambient temperature of the current road.
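One possible way to combine these signals is sketched below; the rules, thresholds, and state labels are illustrative assumptions and do not reproduce the analysis used in the embodiment.

```python
import numpy as np

def analyze_road_state(road_gray, temperature_c, humidity_pct,
                       bright_thresh=170, wet_reflect_thresh=0.6):
    """Toy rule-based road-state analysis from a grayscale road-region image.

    The mean gray level stands in for the "gray value" and the fraction of
    bright pixels stands in for the "reflection value"; thresholds and labels
    are assumed for illustration.
    """
    gray_mean = float(np.mean(road_gray))
    reflection = float(np.mean(road_gray > bright_thresh))
    if temperature_c <= 0 and gray_mean > bright_thresh:
        return "possible snow or ice"
    if humidity_pct > 85 and reflection > wet_reflect_thresh:
        return "possibly wet"
    return "likely dry"
```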
Step S5: and extracting the target vehicle in the video information based on the road contour.
In this embodiment, the video foreground is obtained by processing the video information with a background subtraction algorithm. Because only the vehicles on the road need to be extracted, the video foreground is filtered by the road contour so that only target objects on the road remain. The target vehicle in the video information is then extracted according to whether the size of each remaining target object matches a vehicle size.
Specifically, extracting the target vehicle in the video information based on the road contour includes:
processing each frame image in the video information by a background subtraction method to obtain a foreground image;
filtering the foreground image based on the road contour to obtain a target object image;
comparing the size of the target object image with a preset size; and
when the size of the target object image is greater than or equal to the preset size, extracting the target object to obtain the target vehicle.
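A sketch of this step is given below, assuming OpenCV's MOG2 background subtractor, a mask built from the road contours obtained above, and a minimum contour area as the "preset size"; these specific choices are assumptions of the sketch.

```python
import cv2
import numpy as np

bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def extract_vehicles(frame, road_contours, min_area=800):
    """Return bounding boxes of vehicle-sized moving objects on the road (step S5)."""
    fg_mask = bg_subtractor.apply(frame)                              # foreground by background subtraction
    road_mask = np.zeros(fg_mask.shape, dtype=np.uint8)
    cv2.drawContours(road_mask, [c.astype(np.int32) for c in road_contours], -1, 255, -1)
    fg_on_road = cv2.bitwise_and(fg_mask, road_mask)                  # keep foreground inside the road only
    contours, _ = cv2.findContours(fg_on_road, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```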
Step S6: a category of the target vehicle is identified.
In this embodiment, the target vehicle is input into a pre-trained vehicle type recognition model; a recognition result output by the vehicle type recognition model is acquired; and the category of the target vehicle is determined according to the recognition result.
In this embodiment, the vehicle type recognition model is pre-trained, and the training process may include: acquiring a plurality of vehicle images of different categories in advance; dividing the vehicle images and their vehicle categories into a training set of a first proportion and a test set of a second proportion, wherein the first proportion is far greater than the second proportion; inputting the training set into a preset deep neural network for supervised learning and training to obtain the vehicle type recognition model; inputting the test set into the vehicle type recognition model for testing to obtain a test passing rate; ending the training of the vehicle type recognition model when the test passing rate is greater than or equal to a preset passing-rate threshold; and, when the test passing rate is less than the preset passing-rate threshold, re-dividing the training set and the test set, learning and training the vehicle type recognition model based on the new training set, and testing the passing rate of the newly trained model based on the new test set. Since the vehicle type recognition model is not the focus of the present invention, its specific training process is not described in detail here. The training process of the vehicle type recognition model may be performed offline; when the category of the target vehicle needs to be recognized, an image of the target vehicle is input online into the vehicle type recognition model, and the category of the target vehicle is output by the model.
The category of the target vehicle may include bicycles, motorcycles, cars, buses, trucks, and the like.
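For illustration, inference with such a model might look like the sketch below, assuming a MobileNetV2 backbone fine-tuned on the five example categories and a hypothetical weights file vehicle_type_model.pt; the architecture, file name, and preprocessing are assumptions, not part of the described embodiment.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

CATEGORIES = ["bicycle", "motorcycle", "car", "bus", "truck"]  # example categories from the text

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def load_model(weights_path="vehicle_type_model.pt"):
    """Load an assumed MobileNetV2 classifier fine-tuned for the vehicle categories."""
    model = models.mobilenet_v2(weights=None)
    model.classifier[1] = nn.Linear(model.last_channel, len(CATEGORIES))
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    return model.eval()

def classify_vehicle(model, bgr_crop):
    """Return the predicted category for a cropped vehicle image (BGR, uint8)."""
    rgb = bgr_crop[:, :, ::-1].copy()                 # OpenCV crops are BGR; the model expects RGB
    with torch.no_grad():
        logits = model(preprocess(rgb).unsqueeze(0))
    return CATEGORIES[int(logits.argmax(dim=1))]
```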
Preferably, the method further comprises: sending the category of the target vehicle and the road state to a server. The server is in communication connection with the Raspberry Pi. According to the received vehicle categories and road states, the server can track vehicles, count vehicles, extract traffic flow, and the like, so as to support other intelligent traffic monitoring tasks.
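A minimal sketch of this reporting step is shown below; the endpoint URL and the JSON field names are hypothetical, and only the small analysis result, rather than the raw video, is transmitted.

```python
import time
import requests

def report_to_server(vehicle_category, road_state,
                     url="http://example-server.local/api/traffic"):
    """Send one detection record to the upstream server (URL and schema are assumed)."""
    payload = {
        "timestamp": time.time(),
        "vehicle_category": vehicle_category,
        "road_state": road_state,
    }
    resp = requests.post(url, json=payload, timeout=5)
    resp.raise_for_status()
```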
According to the video data analysis method provided by the invention, the feature points of the vehicles in the video information are extracted, the road contour is obtained from the feature points of the vehicles, the target vehicles in the video information are extracted based on the road contour, and the category of the target vehicle is identified. Real-time, high-precision vehicle and road contour detection can thus be performed on a Raspberry Pi device with limited computing power, so that edge computing greatly reduces the amount of data to be transmitted. The final goals of traffic flow detection and road surface condition detection can be achieved with only a small amount of data transmission.
Example two
Fig. 2 is a block diagram of a video data analysis device according to a second embodiment of the present invention, and for convenience of explanation, only the portions related to the second embodiment of the present invention are shown in detail below.
Referring to fig. 2, the video data analysis apparatus 10 may be divided into a plurality of functional modules according to the functions it performs, and each functional module is configured to perform the steps in the embodiment corresponding to fig. 1 so as to implement the video data analysis functions described above. In an embodiment of the present invention, the functional modules of the video data analysis apparatus 10 may include a receiving module 101, an extracting module 102, a filtering module 103, a processing module 104, and an identifying module 105. The functions of the respective functional modules will be described in detail in the following embodiments.
The receiving module 101 is configured to receive video information for monitoring a road condition.
In this embodiment, the video information is video captured by a camera fixedly installed on a road, for example, a camera installed on a highway or a national road. The camera sends the video information of the road condition captured in real time to the Raspberry Pi computer.
The extracting module 102 is configured to extract and track feature points in the video information.
In the embodiment of the invention, a feature point is defined as follows: in a video image, the gray value of a point S is compared with the gray values of its neighboring pixels; if there are N consecutive pixels l_k (k = 1, 2, …, N) on a circle around S such that the absolute difference between the gray value of each l_k and the gray value of S is greater than a set threshold, then S is the desired feature point.
In the present embodiment, the feature points in the video information are extracted by a sparse optical flow method. The sparse optical flow method has the advantages of a small computational load and accurate optical flow estimation, and is therefore widely used in real-time systems.
In other embodiments of the present invention, the feature points in the video information may be extracted by other existing or future algorithms or methods for extracting feature points in the video, and the method for extracting feature points is not limited in the present invention.
The filtering module 103 is configured to filter feature points in the video background to obtain feature points of the vehicle.
In the present embodiment, since the camera is fixedly installed on the road, the background in the video information captured by the camera is stationary while the vehicles are moving. Therefore, the moving speed of the feature points can be used to extract the feature points of the vehicles in the video: feature points whose speed is small or zero are taken to belong to the video background, so they are filtered out and the feature points of the vehicles are obtained.
Specifically, filtering the feature points in the video background to obtain the feature points of the vehicle includes:
acquiring the moving speed of the feature points;
comparing the speed of the feature points with a speed threshold;
filtering out a feature point when its speed is less than or equal to the speed threshold; and
retaining a feature point when its speed is greater than the speed threshold.
The processing module 104 is configured to obtain a road contour according to the feature points of the vehicle.
Because vehicles on the road travel in particular directions, after the feature points belonging to the video background have been filtered out over a period of time, the accumulated feature points of the vehicles cover most of the road area. Cluster analysis is then performed on the positions and moving directions of these feature points, and the contour of each resulting cluster is the contour of the road in one direction of travel.
Specifically, obtaining the road contour from the feature points of the vehicle includes:
acquiring the moving direction and position information of the feature points of the vehicle;
clustering according to the moving direction and the position information of the feature points to obtain a plurality of clusters; and
obtaining the road contour according to the shape of each cluster.
In this embodiment, the positions and directions of the feature points are clustered by the density-based clustering algorithm DBSCAN (Density-Based Spatial Clustering of Applications with Noise), which can find clusters of arbitrary shape. A fundamental difference between DBSCAN and many other methods is that it groups points by density rather than by distance to preset cluster centers, so the number of clusters does not need to be set in advance but is obtained from the data itself. Since the number of roads and travel directions differs from one video scene to another, the DBSCAN clustering algorithm can still accurately obtain the road contour.
Preferably, the video data analysis device 10 may further analyze the road state according to the road contour and received temperature and humidity information.
Specifically, extracting a road area in the video information according to the road contour to obtain a road area image; acquiring parameter values of the road area image, wherein the parameter values comprise gray values and reflection values; and analyzing the road state according to the parameter value and the received temperature and humidity information.
In this embodiment, a temperature sensor and/or a humidity sensor may be mounted on the camera. The temperature sensor and/or the humidity sensor detect the temperature and humidity of the current road environment and send the measured temperature and/or humidity values to the Raspberry Pi, which can then judge the road surface condition from the image parameter values together with the received temperature and humidity information. For example, a higher gray value of the road area image corresponds to a lower ambient temperature of the current road.
The extraction module 102 is further configured to extract a target vehicle in the video information based on the road contour.
In this embodiment, the video foreground is obtained by processing the video information with a background subtraction algorithm. Because only the vehicles on the road need to be extracted, the video foreground is filtered by the road contour so that only target objects on the road remain. The target vehicle in the video information is then extracted according to whether the size of each remaining target object matches a vehicle size.
Specifically, extracting the target vehicle in the video information based on the road contour includes:
processing each frame image in the video information by a background subtraction method to obtain a foreground image;
filtering the foreground image based on the road contour to obtain a target object image;
comparing the size of the target object image with a preset size; and
when the size of the target object image is greater than or equal to the preset size, extracting the target object to obtain the target vehicle.
The identification module 105 is configured to identify a category of the target vehicle.
In this embodiment, the target vehicle is input into a pre-trained vehicle type recognition model; a recognition result output by the vehicle type recognition model is acquired; and the category of the target vehicle is determined according to the recognition result.
In this embodiment, the vehicle type recognition model is pre-trained, and the training process may include: acquiring a plurality of vehicle images of different categories in advance; dividing the vehicle images and their vehicle categories into a training set of a first proportion and a test set of a second proportion, wherein the first proportion is far greater than the second proportion; inputting the training set into a preset deep neural network for supervised learning and training to obtain the vehicle type recognition model; inputting the test set into the vehicle type recognition model for testing to obtain a test passing rate; ending the training of the vehicle type recognition model when the test passing rate is greater than or equal to a preset passing-rate threshold; and, when the test passing rate is less than the preset passing-rate threshold, re-dividing the training set and the test set, learning and training the vehicle type recognition model based on the new training set, and testing the passing rate of the newly trained model based on the new test set. Since the vehicle type recognition model is not the focus of the present invention, its specific training process is not described in detail here. The training process of the vehicle type recognition model may be performed offline; when the category of the target vehicle needs to be recognized, an image of the target vehicle is input online into the vehicle type recognition model, and the category of the target vehicle is output by the model.
The category of the target vehicle may include bicycles, motorcycles, cars, buses, trucks, and the like.
Preferably, the video data analysis device 10 may further send the category of the target vehicle and the road state to a server. The server is in communication connection with the Raspberry Pi. According to the received vehicle categories and road states, the server can track vehicles, count vehicles, extract traffic flow, and the like, so as to support other intelligent traffic monitoring tasks.
Example III
Fig. 3 is a schematic diagram of a Raspberry Pi device according to a third embodiment of the present invention. The Raspberry Pi device 1 includes a memory 20, a processor 30, and a computer program 40, such as a video data analysis program, stored in the memory 20 and executable on the processor 30. The steps of the above-described embodiment of the video data analysis method, such as steps S1 to S6 shown in fig. 1, are implemented when the processor 30 executes the computer program 40. Alternatively, when executing the computer program 40, the processor 30 performs the functions of the modules/units of the apparatus embodiment described above, such as modules 101 to 105 in fig. 2.
Illustratively, the computer program 40 may be partitioned into one or more modules/units that are stored in the memory 20 and executed by the processor 30 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 40 in the Raspberry Pi device 1. For example, the computer program 40 may be divided into the receiving module 101, the extracting module 102, the filtering module 103, the processing module 104, and the identifying module 105 in fig. 2, each module having the specific functions described in the second embodiment.
The Raspberry Pi device 1 is an ARM-based microcomputer motherboard with strong data processing capability. It uses an SD or MicroSD card as its storage disk, provides USB interfaces for connecting a keyboard, a mouse, and a network cable, and has a television output interface for analog video signals as well as an HDMI high-definition video output interface. All components are integrated on a motherboard only slightly larger than a credit card, yet it has the basic functions of a full computer: with only a display and a keyboard connected, it can run spreadsheets, word processing, games, high-definition video playback, and other functions. The storage card holds the operating system, software tools, and processing data of the Raspberry Pi so that it can operate normally, and the installed software tools extend its functions and make data processing more convenient.
The processor 30 may be a central processing unit (Central Processing Unit, CPU), and may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), or another programmable logic device. The general-purpose processor may be a microprocessor or any conventional processor. The processor 30 is the control center of the Raspberry Pi device 1 and connects the various parts of the entire Raspberry Pi device 1 using various interfaces and lines.
The memory 20 may be used to store the computer program 40 and/or the modules/units, and the processor 30 implements the various functions of the Raspberry Pi device 1 by running or executing the computer program and/or modules/units stored in the memory 20 and invoking data stored in the memory 20. The memory 20 may be an SD memory card or a MicroSD card.
The modules/units integrated in the Raspberry Pi device 1 may be stored in a computer-readable storage medium if implemented in the form of software functional units and sold or used as standalone products. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment through a computer program that instructs related hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc.
In the several embodiments provided herein, it should be understood that the disclosed Raspberry Pi apparatus and method may be implemented in other ways. For example, the Raspberry Pi apparatus embodiment described above is merely illustrative; e.g., the division of the units is merely a logical functional division, and there may be other ways of dividing them in practice.
In addition, each functional unit in the embodiments of the present invention may be integrated in the same processing unit, or each unit may exist alone physically, or two or more units may be integrated in the same unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. Multiple units or means recited in the apparatus claims may also be implemented by the same unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (4)

1. A method of video data analysis, the method comprising:
receiving video information for monitoring road conditions;
extracting and tracking feature points in the video information;
filtering feature points in a video background to obtain feature points of a vehicle, comprising: acquiring a moving speed of the feature points; comparing the speed of the feature points with a speed threshold; filtering out a feature point when its speed is less than or equal to the speed threshold; and retaining a feature point when its speed is greater than the speed threshold;
obtaining a road contour according to the feature points of the vehicle, comprising: acquiring moving direction and position information of the feature points of the vehicle; clustering according to the moving direction and the position information of the feature points to obtain a plurality of clusters; and obtaining the road contour according to the shape of each cluster;
extracting a target vehicle in the video information based on the road contour, comprising: extracting a road area in the video information according to the road contour to obtain a road area image, acquiring parameter values of the road area image, wherein the parameter values comprise a gray value and a reflection value, and analyzing a road state according to the parameter values and received temperature and humidity information; and
identifying a category of the target vehicle, comprising: inputting the target vehicle into a pre-trained vehicle type recognition model; acquiring a recognition result output by the vehicle type recognition model; and determining the category of the target vehicle according to the recognition result.
2. A video data analysis apparatus, comprising:
the receiving module is used for receiving video information of the monitored road condition;
an extraction module, used for extracting and tracking feature points in the video information;
a filtering module, used for filtering feature points in a video background to obtain feature points of a vehicle, comprising: acquiring a moving speed of the feature points; comparing the speed of the feature points with a speed threshold; filtering out a feature point when its speed is less than or equal to the speed threshold; and retaining a feature point when its speed is greater than the speed threshold;
a processing module, used for obtaining a road contour according to the feature points of the vehicle, comprising: acquiring moving direction and position information of the feature points of the vehicle; clustering according to the moving direction and the position information of the feature points to obtain a plurality of clusters; and obtaining the road contour according to the shape of each cluster;
the extraction module being further used for extracting a target vehicle in the video information based on the road contour, comprising: extracting a road area in the video information according to the road contour to obtain a road area image, acquiring parameter values of the road area image, wherein the parameter values comprise a gray value and a reflection value, and analyzing a road state according to the parameter values and received temperature and humidity information; and
an identification module for identifying a category of the target vehicle, comprising: inputting the target vehicle into a pre-trained vehicle type recognition model; acquiring a recognition result output by the vehicle type recognition model; and determining the category of the target vehicle according to the recognition result.
3. A Raspberry Pi apparatus, comprising a processor configured to implement the video data analysis method of claim 1 when executing a computer program stored in a memory.
4. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the video data analysis method according to claim 1.
CN201911339888.0A 2019-12-23 2019-12-23 Video data analysis method and device, Raspberry Pi device and readable storage medium Active CN111126261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911339888.0A CN111126261B (en) 2019-12-23 2019-12-23 Video data analysis method and device, Raspberry Pi device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911339888.0A CN111126261B (en) 2019-12-23 2019-12-23 Video data analysis method and device, Raspberry Pi device and readable storage medium

Publications (2)

Publication Number Publication Date
CN111126261A CN111126261A (en) 2020-05-08
CN111126261B true CN111126261B (en) 2023-05-26

Family

ID=70501294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911339888.0A Active CN111126261B (en) Video data analysis method and device, Raspberry Pi device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111126261B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07210795A (en) * 1994-01-24 1995-08-11 Babcock Hitachi Kk Method and instrument for image type traffic flow measurement
JP2009217832A (en) * 2009-04-27 2009-09-24 Asia Air Survey Co Ltd Method and device for automatically recognizing road sign in video image, and storage medium which stores program of road sign automatic recognition
JP2012160165A (en) * 2011-01-31 2012-08-23 Nec (China) Co Ltd Baseline band video monitoring system and method
CN102426785A (en) * 2011-11-18 2012-04-25 东南大学 Traffic flow information perception method based on contour and local characteristic point and system thereof
CN106652465A (en) * 2016-11-15 2017-05-10 成都通甲优博科技有限责任公司 Method and system for identifying abnormal driving behavior on road
WO2018153211A1 (en) * 2017-02-22 2018-08-30 中兴通讯股份有限公司 Method and apparatus for obtaining traffic condition information, and computer storage medium
WO2019183751A1 (en) * 2018-03-26 2019-10-03 深圳市锐明技术股份有限公司 Detection and warning method for snow and ice in front of vehicle, storage medium, and server
CN109191841A (en) * 2018-09-17 2019-01-11 天津中德应用技术大学 A kind of urban transportation intelligent management system based on Raspberry Pi
CN109919027A (en) * 2019-01-30 2019-06-21 合肥特尔卡机器人科技股份有限公司 A kind of Feature Extraction System of road vehicles
CN110324583A (en) * 2019-07-15 2019-10-11 深圳中兴网信科技有限公司 A kind of video monitoring method, video monitoring apparatus and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王慧英; 林灿奕; 陈燕春. 基于树莓派的在线车牌处理系统 [Online license plate processing system based on Raspberry Pi]. 科技创新导报 (Science and Technology Innovation Herald), 2018, (18), full text. *

Also Published As

Publication number Publication date
CN111126261A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
Bansod et al. Crowd anomaly detection and localization using histogram of magnitude and momentum
KR101935010B1 (en) Apparatus and method for recognizing license plate of car based on image
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN112883948B (en) Semantic segmentation and edge detection model building and guardrail abnormity monitoring method
CN111783665A (en) Action recognition method and device, storage medium and electronic equipment
CN112562406B (en) Method and device for identifying off-line driving
CN114387591A (en) License plate recognition method, system, equipment and storage medium
CN111582032A (en) Pedestrian detection method and device, terminal equipment and storage medium
CN111079621A (en) Method and device for detecting object, electronic equipment and storage medium
CN114037834B (en) Semantic segmentation method and device based on fusion of vibration signal and RGB image
CN113869258A (en) Traffic incident detection method and device, electronic equipment and readable storage medium
Nguyen et al. Lane detection and tracking based on fully convolutional networks and probabilistic graphical models
CN113255500A (en) Method and device for detecting random lane change of vehicle
CN111339834B (en) Method for identifying vehicle driving direction, computer device and storage medium
CN113297939A (en) Obstacle detection method, system, terminal device and storage medium
CN111126261B (en) Video data analysis method and device, raspberry group device and readable storage medium
Sala et al. Measuring traffic lane‐changing by converting video into space–time still images
CN114724128B (en) License plate recognition method, device, equipment and medium
CN113591543B (en) Traffic sign recognition method, device, electronic equipment and computer storage medium
Thakare et al. Object interaction-based localization and description of road accident events using deep learning
CN112950961B (en) Traffic flow statistical method, device, equipment and storage medium
CN113902999A (en) Tracking method, device, equipment and medium
CN111639640A (en) License plate recognition method, device and equipment based on artificial intelligence
CN112270257A (en) Motion trajectory determination method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200508

Assignee: Jiangsu Huada Smart High tech Co.,Ltd.

Assignor: SHENZHEN RESEARCH INSTITUTE OF TSINGHUA University INNOVATION CENTER IN ZHUHAI

Contract record no.: X2023980048677

Denomination of invention: Video data analysis methods and devices, Raspberry Pi devices and readable storage media

Granted publication date: 20230526

License type: Common License

Record date: 20231128
