CN115797897A - Vehicle collision recognition method and system based on image processing - Google Patents
- Publication number
- CN115797897A (application CN202310052945.7A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- collision
- image
- features
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a vehicle collision recognition method and system based on image processing. The method comprises: acquiring video information around a vehicle through video equipment, and capturing one frame from the corresponding video stream at preset time intervals to obtain a plurality of images to be recognized; extracting features from the images to be recognized to obtain integral features and gradient features; calculating a similarity score based on the integral features and gradient features through a pre-trained classification model; and comparing the result with a preset threshold, judging that a collision exists when the result is greater than the threshold, and that no collision exists when it is less than or equal to the threshold. Compared with the prior art, the similarity score is calculated from the integral and gradient features and compared against the threshold to decide whether a collision event appears in the image; no hardware such as distance sensors needs to be installed, which reduces economic, labor and material costs while enabling efficient image analysis and accurate vehicle collision judgment.
Description
Technical Field
The invention relates to the field of vehicle collision recognition analysis, in particular to a vehicle collision recognition method and system based on image processing.
Background
With the rise of Internet of Things technology, its application to automobiles keeps expanding; at the same time, vehicle monitoring based on video and image processing is continuously improving, making it possible to provide personalized and diversified services.
In the prior art, vehicle collisions are generally detected by using several distance sensors together with a driving recorder, so as to prevent collision accidents and collect evidence in time. However, this approach requires multiple devices to cooperate, coordination gaps easily arise during recognition, the resulting collision discrimination has low accuracy, the deployment requirements are high (specific installation and material conditions are needed), and certain labor and material costs are incurred.
Disclosure of Invention
The invention provides a vehicle collision recognition method and system based on image processing, and aims to solve the technical problem of improving the accuracy of vehicle collision judgment.
In order to solve the technical problem, an embodiment of the present invention provides a vehicle collision recognition method based on image processing, including:
acquiring video information around a vehicle through video equipment, acquiring a frame of image at preset time intervals from a video stream corresponding to the video information, and acquiring a plurality of images to be identified;
extracting the features of all images to be identified to obtain integral features and gradient features of the images to be identified;
calculating a similarity score based on the integral features and gradient features of all the images to be recognized through a pre-trained classification model;
comparing the calculation result with a preset threshold value, and judging that the image to be recognized has a collision situation when the calculation result is greater than the preset threshold value; and when the calculation result is less than or equal to the preset threshold value, judging that the image to be recognized does not have a collision situation.
Preferably, before the calculating the similarity score based on the integral features and the gradient features of all the images to be recognized, the method further includes:
acquiring a vehicle image dataset for training;
performing feature extraction on the vehicle image data set to obtain integral features, haar features and gradient features of the vehicle image data set; the vehicle image dataset comprises a number of colliding and non-colliding vehicle sample images;
constructing a plurality of first classifiers based on integral features, haar features and gradient features of the vehicle image dataset;
screening a plurality of second classifiers from the plurality of first classifiers by an Adaboost algorithm under the condition of meeting preset conditions about the training detection rate and the misjudgment rate;
and combining the plurality of second classifiers to form a cascade classifier, and obtaining the pre-trained classification model.
As a preferred scheme, the acquiring a vehicle image dataset for training specifically includes:
acquiring a plurality of collision videos;
cutting a plurality of collision vehicle sample images and non-collision vehicle sample images from the plurality of collision videos;
and carrying out normalization processing and graying processing on the cut out plurality of collision vehicle sample images and non-collision vehicle sample images to obtain the vehicle image data set for training.
As a preferred scheme, the video stream is an RTP protocol video stream; before the acquiring of the plurality of images to be recognized, the method further comprises:
extracting a picture in a video file from the RTP protocol video stream every 0.5 second;
and carrying out digital noise reduction processing on each extracted picture, converting nonstandard pixel points of each picture subjected to digital noise reduction processing into standard pixel points, and obtaining the plurality of images to be identified.
Preferably, the vehicle collision recognition method further includes: determining the moment corresponding to the image in which a collision situation exists, and storing the video within a preset time period before and after that moment.
Correspondingly, an embodiment of the invention also provides a vehicle collision recognition system based on image processing, which comprises an acquisition module, a feature extraction module, a similarity calculation module and a judgment module; wherein,
the acquisition module is used for acquiring video information around the vehicle through video equipment, acquiring a frame of image every preset time from a video stream corresponding to the video information, and acquiring a plurality of images to be identified;
the feature extraction module is used for extracting features of all images to be identified to obtain integral features and gradient features of the images to be identified;
the similarity calculation module is used for calculating a similarity score based on the integral characteristic and the gradient characteristic of all the images to be recognized through a pre-trained classification model;
the judging module is used for comparing the calculation result with a preset threshold value, and judging that the image to be recognized has a collision situation when the calculation result is greater than the preset threshold value; and when the calculation result is less than or equal to the preset threshold value, judging that the image to be recognized has no collision condition.
Preferably, the vehicle collision recognition system further comprises a classifier training module, wherein the classifier training module is configured to:
before the similarity calculation module calculates similarity scores based on integral features and gradient features of all the images to be recognized, a vehicle image data set for training is obtained;
carrying out feature extraction on the vehicle image data set to obtain integral features, haar features and gradient features of the vehicle image data set; the vehicle image dataset comprises a number of colliding and non-colliding vehicle sample images;
constructing a plurality of first classifiers based on integral features, haar features and gradient features of the vehicle image dataset;
screening a plurality of second classifiers from the plurality of first classifiers by an Adaboost algorithm under the condition of meeting preset conditions about the training detection rate and the misjudgment rate;
and combining the plurality of second classifiers to form a cascade classifier, and obtaining the pre-trained classification model.
As a preferred scheme, the classifier training module includes a training set obtaining unit, and the training set obtaining unit is configured to obtain a vehicle image data set for training, specifically:
the training set acquisition unit acquires a plurality of collision videos;
cutting a plurality of collision vehicle sample images and non-collision vehicle sample images from the plurality of collision videos;
and carrying out normalization processing and graying processing on the cut out plurality of collision vehicle sample images and non-collision vehicle sample images to obtain the vehicle image data set for training.
As a preferred scheme, the video stream is an RTP protocol video stream; the vehicle collision recognition system further comprises a preprocessing module for:
before the images to be identified are obtained, extracting a picture in a video file every 0.5 second from the RTP protocol video stream;
and carrying out digital noise reduction processing on each extracted picture, converting nonstandard pixel points of each picture subjected to digital noise reduction processing into standard pixel points, and obtaining the plurality of images to be identified.
Preferably, the vehicle collision recognition system further comprises a storage module, and the storage module is configured to determine the moment corresponding to an image in which a collision situation exists, and store the video within a preset time period before and after that moment.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a vehicle collision recognition method and a vehicle collision recognition system based on image processing, wherein the vehicle collision recognition method comprises the following steps: acquiring video information around a vehicle through video equipment, acquiring a frame of image at preset time intervals from a video stream corresponding to the video information, and acquiring a plurality of images to be identified; extracting the features of all images to be identified to obtain integral features and gradient features of the images to be identified; calculating a similarity score based on the integral features and gradient features of all the images to be recognized through a pre-trained classification model; comparing the calculation result with a preset threshold value, and judging that the image to be recognized has a collision situation when the calculation result is greater than the preset threshold value; and when the calculation result is less than or equal to the preset threshold value, judging that the image to be recognized does not have a collision situation. Compared with the prior art, the method has the advantages that the similarity score is calculated based on the integral characteristic and the gradient characteristic of the image to be recognized and is compared with the preset threshold value, whether a collision event exists in the image or not is judged, hardware devices such as a distance sensor and the like are not required to be arranged, economic cost and manpower and material resource cost are reduced, efficient image analysis and accurate vehicle collision judgment can be achieved, and the purpose of monitoring and preventing various collision accidents is achieved.
Drawings
FIG. 1: the invention provides a flow chart of an embodiment of a vehicle collision recognition method based on image processing.
FIG. 2: the invention provides a schematic diagram of the principle of an embodiment of an integral characteristic acquisition method.
FIG. 3: the invention provides a schematic diagram of another embodiment of an integral characteristic acquisition method.
FIG. 4: the invention provides a principle schematic diagram of an embodiment of a classification model training method.
FIG. 5: the invention provides a structural schematic diagram of an embodiment of a vehicle collision recognition system based on image processing.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
referring to fig. 1, fig. 1 is a diagram illustrating a vehicle collision recognition method based on image processing according to an embodiment of the present invention, including steps S1 to S4, wherein,
the method comprises the steps of S1, collecting video information around a vehicle through video equipment, collecting a frame of image every preset time from a video stream corresponding to the video information, and obtaining a plurality of images to be identified.
In this embodiment, the video device includes, but is not limited to, a general camera and a car recorder. The video stream is a Real-time Transport Protocol (RTP) video stream; before the acquiring of the plurality of images to be identified, the method further comprises:
extracting a picture in a video file from the RTP protocol video stream every 0.5 second;
and carrying out digital noise reduction processing on each extracted picture, converting nonstandard pixel points of each picture subjected to digital noise reduction processing into standard pixel points, and obtaining the plurality of images to be identified.
The Real-time Transport Protocol (RTP) is a transport-layer protocol for multimedia data streams over the Internet. RTP specifies a standard packet format for delivering audio and video over the Internet, and is commonly used in streaming media systems.
The RTP video stream is uploaded to a server, and the server application can determine the specific type of the video stream by parsing the protocol header and protocol body.
At present, video streams mainly use two encoding formats, H.264 and H.265. The H.265 standard retains part of the original H.264 content and improves some related techniques so as to optimize the trade-off among bit rate, encoding quality, delay and algorithm complexity. A large number of devices on the market use H.265 encoding by default. The H.265-encoded video is processed as follows:
extracting a picture in the video file every 0.5 second from the obtained video stream, then carrying out digital noise reduction processing on the extracted picture data, and converting the non-standard pixel points into standard pixel points to obtain the plurality of images to be identified.
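The frame-sampling step above can be sketched as follows. This is an illustrative sketch only — the patent does not specify an implementation; the use of OpenCV (`cv2`), the `fastNlMeansDenoisingColored` call as the "digital noise reduction", and all function names are our assumptions:

```python
def sample_indices(fps: float, total_frames: int, interval_s: float = 0.5):
    """Frame indices to grab: one frame every `interval_s` seconds."""
    step = max(1, round(fps * interval_s))
    return list(range(0, total_frames, step))

def extract_frames(path: str, interval_s: float = 0.5):
    """Grab one frame every interval_s seconds from a video and denoise it."""
    import cv2  # assumed available; OpenCV's VideoCapture reads files/streams
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in sample_indices(fps, total, interval_s):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            break
        # "digital noise reduction": a standard denoiser stands in here
        frames.append(cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21))
    cap.release()
    return frames
```

For a 30 fps stream, `sample_indices` selects every 15th frame, matching the 0.5 s cadence described above.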
And S2, extracting the features of all the images to be recognized to obtain integral features and gradient features of the images to be recognized.
Specifically, referring to fig. 2, the integral feature means that the value of any point in the image equals the sum of all pixels in the rectangular area from the top-left corner (0, 0) of the image to that point. As shown in fig. 2, the integral value at point (x, y) is the sum of all pixels in that rectangular area (including the point itself).
Further, to improve the efficiency of computing the integral map, the integral values of adjacent points can be reused for fast calculation. Referring to fig. 3, the integral value at point (x, y) can be obtained by adding the integral values at (x-1, y) and (x, y-1), subtracting the overlapping area, i.e. the integral value at (x-1, y-1), and finally adding the pixel value at (x, y).
The above integration principle can be expressed by the formula:
I(x, y) = I(x-1, y) + I(x, y-1) - I(x-1, y-1) + pixel(x, y);
further, taking the boundary into account, i.e. the first row and the first column, for the first row:
I(0, 0) = pixel(0, 0), x = 0, y = 0;
I(x, 0) = I(x-1, 0) + pixel(x, 0), x > 0, y = 0;
and for the first column:
I(0, y) = I(0, y-1) + pixel(0, y), x = 0, y > 0;
In the embodiments of the present application, using integral features has the advantage that once the integral image is computed, the sum over a rectangular region of any size in the image can be obtained in constant time. In application scenarios such as image blurring, edge extraction and object detection, this greatly reduces the amount of computation and improves speed.
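As a concrete sketch of the recurrence and the constant-time rectangle sum described above (NumPy is used for illustration; the function names are ours, not the patent's):

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """I(x, y) = sum of all pixels in the rectangle from (0, 0) to (x, y)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(I: np.ndarray, x0: int, y0: int, x1: int, y1: int) -> int:
    """Pixel sum over rows x0..x1, cols y0..y1 via at most four lookups."""
    total = I[x1, y1]
    if x0 > 0:
        total -= I[x0 - 1, y1]
    if y0 > 0:
        total -= I[x1, y0 - 1]
    if x0 > 0 and y0 > 0:
        total += I[x0 - 1, y0 - 1]
    return int(total)
```

Note that `rect_sum` costs the same four lookups regardless of the rectangle's size, which is exactly the constant-time property exploited above.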
The gradient features describe local variations such as edges and corners of the image, and are robust to illumination changes.
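A minimal illustration of gradient extraction follows; central differences are a simple stand-in we chose (a production system would more likely use Sobel filters or HOG descriptors — the patent does not specify):

```python
import numpy as np

def gradients(img: np.ndarray):
    """Central-difference gradients: per-pixel magnitude and orientation."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal intensity change
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical intensity change
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

Edges and corners show up as large magnitudes, while a constant multiplicative illumination change leaves orientations untouched, which is the robustness property noted above.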
And S3, calculating a similarity score based on the integral characteristic and the gradient characteristic of all the images to be recognized through a pre-trained classification model.
In this embodiment, before the calculating the similarity score based on the integral features and the gradient features of all the images to be recognized, the method further includes:
a vehicle image dataset, i.e. a training set, is acquired for training. Specifically, the acquiring a vehicle image dataset for training specifically includes:
acquiring a plurality of collision videos; cutting a plurality of collision vehicle sample images and non-collision vehicle sample images from the plurality of collision videos; and carrying out normalization processing and graying processing on the cut out plurality of collision vehicle sample images and non-collision vehicle sample images to obtain the vehicle image data set for training.
As a further preferable mode, 8000 collision vehicle sample images and 12000 non-collision vehicle sample images are obtained. The number of images and the selection strategy can be adjusted for the practical application; the more images are used, the higher the precision of the trained algorithm. Normalization is applied to the sample images in this embodiment because each pixel value lies between 0 and 255, a large range for the computer, so each pixel value is divided by 255 to obtain a value between 0 and 1. Graying, i.e. converting the color image into a grayscale image, is performed because a grayscale image occupies less memory than a color image and visually increases the contrast, highlighting the target area of this embodiment.
Through normalization and graying, each sample image is processed into three sizes, 24 × 24, 32 × 32 and 48 × 48. Three sizes are used because the pixel sizes of images extracted at different video resolutions are inconsistent; comparing them at the same size improves the accuracy of the analysis.
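The preprocessing just described (grayscale conversion, [0, 1] normalization, resizing to the three sizes) might look like the sketch below. The luminance weights and the nearest-neighbour resize are standard stand-ins we chose; the patent does not mandate a particular conversion or interpolation:

```python
import numpy as np

SIZES = (24, 32, 48)

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Color -> grayscale with the common ITU-R 601 luminance weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def normalize(img: np.ndarray) -> np.ndarray:
    """Map pixel values from [0, 255] to [0, 1]."""
    return img.astype(np.float32) / 255.0

def resize_nearest(img: np.ndarray, size: int) -> np.ndarray:
    """Hypothetical stand-in for cv2.resize: nearest-neighbour sampling."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

def preprocess(rgb: np.ndarray) -> dict:
    """Gray, normalize, and resize to each of the three target sizes."""
    g = normalize(to_gray(rgb))
    return {s: resize_nearest(g, s) for s in SIZES}
```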
After normalization and graying, the sample images may be classified by collision object and collision direction. The collision object classification refers to the collision target of the vehicle, including but not limited to vehicles, buildings and bicycles. The collision direction classification refers to the four directions front, rear, left and right of the vehicle.
Feature extraction is performed on the vehicle image dataset to obtain its integral features, Haar features and gradient features; the vehicle image dataset comprises a number of collision and non-collision vehicle sample images. In particular, Haar is a feature descriptor that can be used to represent features of an object of interest and thus help find the object. Haar features fall into four categories: edge features, linear features, center features and diagonal features, and feature templates are combined from these four types. A feature template contains a white rectangle and a black rectangle, and the feature value of the template is defined as the sum of the pixels in the white rectangle minus the sum of the pixels in the black rectangle. Once the feature forms are determined, the number of Haar-like features depends on the size of the training sample image matrix; the feature template is placed in the sub-windows, each placement yielding one feature, and enumerating the features of all sub-windows is the basis of the weak-classifier training in the subsequent steps. This embodiment may employ OpenCV to obtain the integral features, Haar features and gradient features. OpenCV (Open Source Computer Vision Library) was established by Intel in 1999 and is now supported by Willow Garage. It is a cross-platform computer vision library released under the BSD license (open source) that runs on Linux, Windows and macOS operating systems.
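To make the "white minus black" definition concrete, here is a sketch of a single two-rectangle (edge) Haar feature evaluated through the integral image. These are our own minimal helpers, not OpenCV's internals:

```python
import numpy as np

def integral(img: np.ndarray) -> np.ndarray:
    """Integral image: cumulative sums along both axes."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box(I: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> float:
    """Pixel sum over rows r0..r1, cols c0..c1 via four integral lookups."""
    s = I[r1, c1]
    if r0 > 0:
        s -= I[r0 - 1, c1]
    if c0 > 0:
        s -= I[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += I[r0 - 1, c0 - 1]
    return float(s)

def haar_edge(img: np.ndarray, r: int, c: int, h: int, w: int) -> float:
    """Edge feature: sum of the white (left) half minus the black (right) half."""
    I = integral(img)
    white = box(I, r, c, r + h - 1, c + w // 2 - 1)
    black = box(I, r, c + w // 2, r + h - 1, c + w - 1)
    return white - black
```

A strong vertical edge under the template yields a large absolute value; a uniform patch yields zero, which is why these responses make usable weak-classifier inputs.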
Constructing a number of first classifiers (weak classifiers) based on integral features, haar features and gradient features of the vehicle image dataset;
screening a plurality of second classifiers (strong classifiers) from the plurality of first classifiers by an Adaboost algorithm under the condition of meeting preset conditions about the training detection rate and the misjudgment rate;
and combining the plurality of second classifiers to form a cascade classifier, and obtaining the pre-trained classification model.
In fact, the pattern recognition training may adopt the Adaboost algorithm or an SVM algorithm. The basic idea of Adaboost is to train the same type of classifier (weak classifier) on different training sets and then combine the classifiers obtained on these sets into a final strong classifier. The different training sets are realized by adjusting the weight of each sample. Initially every sample has the same weight; for the samples misclassified by h1 the weight is increased, and for correctly classified samples the weight is reduced, so that the erroneous samples are emphasized and a new sample distribution U2 is obtained. A weak classifier is trained again under the new distribution, yielding weak classifier h2. Repeating this T times yields T weak classifiers, which are superposed (boosted) with certain weights to obtain the desired strong classifier. The overall training framework consists of a training part and a supplementary part, and may further comprise:
a. taking a sample set as input, and calculating and obtaining a rectangular feature set under a given rectangular feature prototype;
b. determining a threshold value according to a given weak learning algorithm by taking the feature set as input, and corresponding the features to weak classifiers one to obtain a weak classifier set;
c. selecting an optimal weak classifier by using an Adaboost algorithm to form a strong classifier under the condition of conforming to the training detection rate and the misjudgment rate by taking the weak classifier set as input;
d. taking the strong classifier set as input, and combining the strong classifier set into a cascade classifier;
e. taking the non-collision vehicle image set as input, combining the strong classifiers into a temporary cascade classifier, and screening and supplementing the non-collision vehicle image samples; see fig. 4 for details.
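Steps a–e above can be illustrated with a from-scratch AdaBoost over threshold "stumps" standing in for the Haar-feature weak classifiers. This is a didactic sketch only, under the assumption of scalar feature columns; real cascade training in the Viola–Jones style is considerably more involved:

```python
import numpy as np

def train_adaboost(X: np.ndarray, y: np.ndarray, rounds: int = 10):
    """Minimal AdaBoost with threshold stumps. X: (n, d) features, y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # uniform initial sample weights
    stumps = []
    for _ in range(rounds):
        best = None
        for j in range(d):           # pick the stump with lowest weighted error
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] >= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = max(err, 1e-10)        # avoid log(0) for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] >= t, 1, -1)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, t, s))
    return stumps

def predict(stumps, X: np.ndarray) -> np.ndarray:
    """Weighted vote of the selected weak classifiers."""
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in stumps)
    return np.where(score >= 0, 1, -1)
```

A cascade, as in step d, would chain several such boosted classifiers so that clearly negative windows are rejected by the cheap early stages.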
Through the trained classification model, a similarity score can be obtained.
S4, comparing a calculation result with a preset threshold, and judging that the image to be recognized has a collision situation when the calculation result is greater than the preset threshold; and when the calculation result is less than or equal to the preset threshold value, judging that the image to be recognized does not have a collision situation.
In this embodiment, if the score of the calculation result exceeds 85%, a collision is determined, that is, the image to be recognized is judged to contain a collision situation; when the calculation result is less than or equal to 85%, no collision is determined, that is, the image to be recognized contains no collision situation.
Further, the vehicle collision recognition method further includes: determining the moment corresponding to the image in which a collision situation exists, and storing the video within a preset time period before and after that moment. The stored collision video can then be retrieved, viewed or downloaded through a specific app or web page.
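The decision rule and the clip-saving window can be expressed directly. The 85% threshold comes from this embodiment; the window helper, its name, and the 10-second default margin are our illustrative choices, not the patent's:

```python
def judge_collision(score: float, threshold: float = 0.85) -> bool:
    """Collision iff the similarity score strictly exceeds the threshold."""
    return score > threshold

def clip_window(t_collision: float, margin_s: float = 10.0):
    """Start/end times of the video segment to store around a collision,
    clamped so the clip never starts before the stream itself."""
    return (max(0.0, t_collision - margin_s), t_collision + margin_s)
```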
Correspondingly, referring to fig. 5, an embodiment of the present invention further provides a vehicle collision recognition system based on image processing, comprising an acquisition module 101, a feature extraction module 102, a similarity calculation module 103, and a judgment module 104; wherein,
the acquisition module 101 is configured to acquire video information around a vehicle through video equipment, acquire one frame of image every preset time from a video stream corresponding to the video information, and acquire a plurality of images to be identified;
the feature extraction module 102 is configured to perform feature extraction on all images to be identified, so as to obtain integral features and gradient features of the images to be identified;
the similarity calculation module 103 is configured to calculate, through a pre-trained classification model, a similarity score based on the integral features and the gradient features of all the images to be recognized;
the judging module 104 is configured to compare the calculation result with a preset threshold, and when the calculation result is greater than the preset threshold, judge that the image to be recognized has a collision situation; and when the calculation result is less than or equal to the preset threshold value, judging that the image to be recognized does not have a collision situation.
As a preferred embodiment, the vehicle collision recognition system further comprises a classifier training module for:
before the similarity calculation module calculates similarity scores based on integral features and gradient features of all the images to be recognized, a vehicle image data set for training is obtained;
performing feature extraction on the vehicle image data set to obtain integral features, haar features and gradient features of the vehicle image data set; the vehicle image dataset comprises a number of colliding and non-colliding vehicle sample images;
constructing a plurality of first classifiers based on integral features, haar features and gradient features of the vehicle image dataset;
screening a plurality of second classifiers from the plurality of first classifiers by an Adaboost algorithm under the condition of meeting preset conditions about the training detection rate and the misjudgment rate;
and combining the plurality of second classifiers to form a cascade classifier, and obtaining the pre-trained classification model.
As a preferred embodiment, the classifier training module comprises a training set acquisition unit for acquiring a vehicle image dataset for training, in particular:
the training set acquisition unit acquires a plurality of collision videos;
cutting a plurality of collision vehicle sample images and non-collision vehicle sample images from the plurality of collision videos;
and carrying out normalization processing and graying processing on the cut out plurality of collision vehicle sample images and non-collision vehicle sample images to obtain the vehicle image data set for training.
As a preferred embodiment, the video stream is an RTP protocol video stream; the vehicle collision recognition system further comprises a preprocessing module for:
before the images to be identified are obtained, extracting one picture from the video file of the RTP protocol video stream every 0.5 seconds;
and performing digital noise reduction processing on each extracted picture, converting the nonstandard pixel points of each picture subjected to digital noise reduction processing into standard pixel points, and obtaining the plurality of images to be identified.
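The 0.5-second sampling and the noise-reduction step can be illustrated as follows. Decoding a real RTP stream would go through a decoder such as FFmpeg or OpenCV; only the frame-index arithmetic and a simple mean filter (one possible stand-in for "digital noise reduction", not the patent's stated method) are sketched here:

```python
import numpy as np

def sample_indices(n_frames, fps, interval_s=0.5):
    # indices of the frames grabbed from the stream every interval_s seconds
    step = max(1, int(round(fps * interval_s)))
    return list(range(0, n_frames, step))

def denoise(gray, k=3):
    # simple k x k mean filter over a grayscale frame, with edge padding
    pad = k // 2
    p = np.pad(gray.astype(np.float64), pad, mode='edge')
    out = np.zeros(gray.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / (k * k)
```

For a 30 fps stream, `sample_indices` grabs every 15th frame, matching one picture per 0.5 seconds.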
As a preferred embodiment, the vehicle collision recognition system further includes a storage module, and the storage module is configured to determine a time corresponding to an image in which a collision situation exists, and store videos corresponding to a preset time period before and after the time.
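The storage module's before-and-after window can be computed as below. The 10-second margins are assumed for illustration; the patent leaves the preset time period unspecified:

```python
def clip_window(collision_t, pre_s=10.0, post_s=10.0, video_len=None):
    """Return the [start, end] time window (in seconds) of video to store
    around a detected collision, clamped to the bounds of the recording."""
    start = max(0.0, collision_t - pre_s)
    end = collision_t + post_s
    if video_len is not None:
        end = min(end, video_len)
    return start, end
```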
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a vehicle collision recognition method and a vehicle collision recognition system based on image processing, wherein the vehicle collision recognition method comprises the following steps: acquiring video information around a vehicle through video equipment, acquiring a frame of image at preset time intervals from a video stream corresponding to the video information, and acquiring a plurality of images to be identified; extracting the features of all images to be identified to obtain integral features and gradient features of the images to be identified; calculating a similarity score based on the integral features and gradient features of all the images to be recognized through a pre-trained classification model; comparing the calculation result with a preset threshold value, and judging that the image to be recognized has a collision situation when the calculation result is greater than the preset threshold value; and when the calculation result is less than or equal to the preset threshold value, judging that the image to be recognized does not have a collision situation. Compared with the prior art, the method has the advantages that the similarity score is calculated based on the integral characteristic and the gradient characteristic of the image to be recognized and is compared with the preset threshold value, whether a collision event exists in the image is judged, hardware equipment such as a distance sensor is not required to be arranged, economic cost, manpower and material resource cost are reduced, efficient image analysis and accurate vehicle collision judgment can be achieved, and the purposes of monitoring and preventing various collision accidents are achieved.
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail. It should be understood that they are only examples of the present invention and are not intended to limit its scope. Any modifications, equivalent substitutions, improvements and the like made by those skilled in the art within the spirit and principles of the invention are intended to be included within the scope of the invention.
Claims (10)
1. A vehicle collision recognition method based on image processing is characterized by comprising the following steps:
acquiring video information around a vehicle through video equipment, acquiring a frame of image at preset time intervals from a video stream corresponding to the video information, and acquiring a plurality of images to be identified;
extracting the features of all images to be identified to obtain integral features and gradient features of the images to be identified;
calculating a similarity score based on the integral features and gradient features of all the images to be recognized through a pre-trained classification model;
comparing the calculation result with a preset threshold value, and judging that the image to be recognized has a collision situation when the calculation result is greater than the preset threshold value; and when the calculation result is less than or equal to the preset threshold value, judging that the image to be recognized does not have a collision situation.
2. The image-processing-based vehicle collision recognition method according to claim 1, wherein before the calculating of the similarity score based on the integral features and the gradient features of all the images to be recognized, the method further comprises:
acquiring a vehicle image dataset for training;
carrying out feature extraction on the vehicle image data set to obtain integral features, haar features and gradient features of the vehicle image data set; the vehicle image dataset comprises a number of colliding and non-colliding vehicle sample images;
constructing a plurality of first classifiers based on integral features, haar features and gradient features of the vehicle image dataset;
screening a plurality of second classifiers from the plurality of first classifiers by an Adaboost algorithm under the condition of meeting preset conditions about the training detection rate and the misjudgment rate;
and combining the plurality of second classifiers to form a cascade classifier, and obtaining the pre-trained classification model.
3. The image-processing-based vehicle collision recognition method according to claim 2, wherein the acquiring of the vehicle image dataset for training is specifically:
acquiring a plurality of collision videos;
cutting out a plurality of collision vehicle sample images and non-collision vehicle sample images from the plurality of collision videos;
and carrying out normalization processing and graying processing on the cut out plurality of collision vehicle sample images and non-collision vehicle sample images to obtain the vehicle image data set for training.
4. The image processing-based vehicle collision recognition method according to any one of claims 1 to 3, wherein the video stream is an RTP protocol video stream; before the acquiring of the plurality of images to be identified, the method further comprises:
extracting one picture from the video file of the RTP protocol video stream every 0.5 seconds;
and carrying out digital noise reduction processing on each extracted picture, converting nonstandard pixel points of each picture subjected to digital noise reduction processing into standard pixel points, and obtaining the plurality of images to be identified.
5. A vehicle collision recognition method based on image processing according to any one of claims 1 to 3, characterized in that the vehicle collision recognition method further comprises: determining the moment corresponding to the image with the collision condition, and storing the corresponding videos in a preset time period before and after the moment.
6. A vehicle collision recognition system based on image processing is characterized by comprising an acquisition module, a feature extraction module, a similarity calculation module and a judgment module; wherein:
the acquisition module is used for acquiring video information around the vehicle through video equipment, acquiring a frame of image every preset time from a video stream corresponding to the video information, and acquiring a plurality of images to be identified;
the feature extraction module is used for extracting features of all images to be identified to obtain integral features and gradient features of the images to be identified;
the similarity calculation module is used for calculating a similarity score based on the integral characteristic and the gradient characteristic of all the images to be recognized through a pre-trained classification model;
the judging module is used for comparing the calculation result with a preset threshold value, and judging that the image to be recognized has a collision situation when the calculation result is greater than the preset threshold value; and when the calculation result is less than or equal to the preset threshold value, judging that the image to be recognized does not have a collision situation.
7. The image-processing-based vehicle collision recognition system of claim 6, further comprising a classifier training module to:
before the similarity calculation module calculates similarity scores based on integral features and gradient features of all the images to be recognized, a vehicle image data set for training is obtained;
performing feature extraction on the vehicle image data set to obtain integral features, haar features and gradient features of the vehicle image data set; the vehicle image dataset comprises a number of colliding and non-colliding vehicle sample images;
constructing a plurality of first classifiers based on integral features, haar features and gradient features of the vehicle image dataset;
screening a plurality of second classifiers from the plurality of first classifiers by an Adaboost algorithm under the condition of meeting preset conditions about the training detection rate and the misjudgment rate;
and combining the plurality of second classifiers to form a cascade classifier, and obtaining the pre-trained classification model.
8. An image processing based vehicle collision recognition system according to claim 7, characterized in that the classifier training module comprises a training set acquisition unit for acquiring a vehicle image dataset for training, in particular:
the training set acquisition unit acquires a plurality of collision videos;
cutting a plurality of collision vehicle sample images and non-collision vehicle sample images from the plurality of collision videos;
and carrying out normalization processing and graying processing on the cut out plurality of collision vehicle sample images and non-collision vehicle sample images to obtain the vehicle image data set for training.
9. An image processing based vehicle collision recognition system according to any one of claims 6 to 8, wherein the video stream is an RTP protocol video stream; the vehicle collision recognition system further comprises a preprocessing module for:
before the images to be identified are obtained, extracting one picture from the video file of the RTP protocol video stream every 0.5 seconds;
and carrying out digital noise reduction processing on each extracted picture, converting nonstandard pixel points of each picture subjected to digital noise reduction processing into standard pixel points, and obtaining the plurality of images to be identified.
10. The image-processing-based vehicle collision recognition system according to any one of claims 6 to 8, wherein the vehicle collision recognition system further comprises a storage module, the storage module is configured to determine a time corresponding to the image in the collision situation and store a corresponding video within a preset time period before and after the time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310052945.7A CN115797897A (en) | 2023-02-03 | 2023-02-03 | Vehicle collision recognition method and system based on image processing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115797897A true CN115797897A (en) | 2023-03-14 |
Family
ID=85429567
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310052945.7A Pending CN115797897A (en) | 2023-02-03 | 2023-02-03 | Vehicle collision recognition method and system based on image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115797897A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102637257A (en) * | 2012-03-22 | 2012-08-15 | 北京尚易德科技有限公司 | Video-based detection and recognition system and method of vehicles |
CN105574552A (en) * | 2014-10-09 | 2016-05-11 | 东北大学 | Vehicle ranging and collision early warning method based on monocular vision |
CN107657237A (en) * | 2017-09-28 | 2018-02-02 | 东南大学 | Car crash detection method and system based on deep learning |
CN108629963A (en) * | 2017-03-24 | 2018-10-09 | 纵目科技(上海)股份有限公司 | Traffic accident report method based on convolutional neural networks and system, car-mounted terminal |
WO2020119314A1 (en) * | 2018-12-14 | 2020-06-18 | 阿里巴巴集团控股有限公司 | Car accident handling method and apparatus, and electronic device |
CN112070039A (en) * | 2020-09-11 | 2020-12-11 | 广州亚美智造科技有限公司 | Vehicle collision detection method and system based on Hash coding |
CN112744174A (en) * | 2021-01-18 | 2021-05-04 | 深圳广联赛讯股份有限公司 | Vehicle collision monitoring method, device, equipment and computer readable storage medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117173385A (en) * | 2023-10-24 | 2023-12-05 | 四川思极科技有限公司 | Detection method, device, medium and equipment of transformer substation |
CN117173385B (en) * | 2023-10-24 | 2024-01-26 | 四川思极科技有限公司 | Detection method, device, medium and equipment of transformer substation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101937508B (en) | License plate localization and identification method based on high-definition image | |
WO2020173022A1 (en) | Vehicle violation identifying method, server and storage medium | |
JP4942510B2 (en) | Vehicle image recognition apparatus and method | |
CN105809184B (en) | Method for real-time vehicle identification and tracking and parking space occupation judgment suitable for gas station | |
CN110298300B (en) | Method for detecting vehicle illegal line pressing | |
KR101176552B1 (en) | Method and apparatus for recognizing speed limit signs and method for recognizing image | |
CN111382704A (en) | Vehicle line-pressing violation judgment method and device based on deep learning and storage medium | |
EP2662827A1 (en) | Video analysis | |
CN109460722B (en) | Intelligent license plate recognition method | |
KR101727487B1 (en) | Content Based Analyzing Device for Vehicle and Method Using the Same | |
US20220101037A1 (en) | System and Method for License Plate Recognition | |
US6738512B1 (en) | Using shape suppression to identify areas of images that include particular shapes | |
CN115797897A (en) | Vehicle collision recognition method and system based on image processing | |
US11482012B2 (en) | Method for driving assistance and mobile device using the method | |
CN112766046B (en) | Target detection method and related device | |
CN114067282A (en) | End-to-end vehicle pose detection method and device | |
CN113053164A (en) | Parking space identification method using look-around image | |
Persada et al. | Automatic face and VLP’s recognition for smart parking system | |
CN111178204B (en) | Video data editing and identifying method and device, intelligent terminal and storage medium | |
CN111814773A (en) | Lineation parking space identification method and system | |
KR102489884B1 (en) | Image processing apparatus for improving license plate recognition rate and image processing method using the same | |
US20240054795A1 (en) | Automatic Vehicle Verification | |
Nguyen et al. | Real-time license plate localization based on a new scale and rotation invariant texture descriptor | |
KR20200066890A (en) | Non-supervised-learning-based poor vehicle number image recovery learning method and device | |
Kadambari et al. | Deep Learning Based Traffic Surveillance System For Missing and Suspicious Car Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2023-03-14 |