CN110533098B - Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network

Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network

Info

Publication number
CN110533098B
CN110533098B (application CN201910803745.4A)
Authority
CN
China
Prior art keywords
image
green traffic
convolutional neural
classification
neural network
Prior art date
Legal status
Active
Application number
CN201910803745.4A
Other languages
Chinese (zh)
Other versions
CN110533098A
Inventor
王萍
张书颖
靳引利
孙铸
韩万水
王军
杨干
李文杰
马党利
Current Assignee
Changan University
Original Assignee
Changan University
Priority date
Filing date
Publication date
Application filed by Changan University filed Critical Changan University
Priority to CN201910803745.4A priority Critical patent/CN110533098B/en
Publication of CN110533098A publication Critical patent/CN110533098A/en
Application granted granted Critical
Publication of CN110533098B publication Critical patent/CN110533098B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/254 Fusion techniques of classification results, e.g. of results related to same input data

Abstract

A method for identifying the compartment-loading type of green traffic vehicles based on a convolutional neural network comprises the following steps: step 1, acquiring green traffic vehicle images; step 2, formulating a green traffic image validity criterion using a relative evaluation method from image quality assessment; step 3, increasing the number of training samples; step 4, detecting the compartment target; step 5, dividing green traffic vehicles into 8 compartment-loading types; step 6, training the compartment-loading type classification; and step 7, judging the compartment-loading type of the green traffic vehicle to be identified. To address the imbalance in image classes and quantities, the unbalanced data is processed by data oversampling so that the number of samples of each class is balanced. This avoids the problem in undersampling that randomly discarded data may contain key feature information of its class.

Description

Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network
Technical Field
The invention belongs to the technical field of vehicle identification, and particularly relates to a method for identifying a loading type of a green traffic vehicle compartment based on a convolutional neural network.
Background
Image classification algorithms can be divided, by era, into conventional classification algorithms, including the k-nearest neighbor (KNN) algorithm, the support vector machine (SVM) and Bayesian algorithms, and deep learning algorithms, which, with the great improvement in computing power, have gradually become the mainstream. Artificial neural networks simulate the working principle of neurons in the brain to obtain a network model capable of learning autonomously. The convolutional neural network models mainly involved are the AlexNet, VGGNet and ResNet network models. Target detection algorithms based on traditional image processing are insufficient in data processing capability, recognition rate and other respects, and can hardly meet the requirements of practical applications in terms of time efficiency, performance, speed and intelligence. Unbalanced data with a small sample size and uneven distribution leads to overfitting and insufficient generalization capability of the trained network model; data resampling can generally be divided into undersampling and oversampling.
The KNN algorithm requires a huge amount of computation and consumes considerable memory. Classifying green traffic images involves a large amount of data, and the multi-class setting suffers from data imbalance. When classifying unbalanced data, the KNN algorithm may let large-capacity classes dominate the k nearest neighbors of a sample to be classified, ultimately causing classification errors. The SVM algorithm achieves linear separability by mapping to a high-dimensional feature space through a kernel function, so its generalization capability largely depends on the selected kernel function. In addition, the SVM performs poorly on multi-class problems and is limited to small sample sets, whereas the large samples in the green traffic image database subsequently need multi-class classification; its classification efficiency is low and a suitable kernel function is difficult to find. Furthermore, because manual-inspection photography is not standardized, the image quality is poor and target features are unclear, so designing a feature-extraction model for the SVM algorithm is difficult, and the quality of the designed features severely affects the subsequent classification accuracy. The naive Bayes algorithm has the basic limitation that the data attributes must be independent of one another, an assumption difficult to satisfy in green traffic image classification, so naive Bayes is not ideal either. The undersampling method in data resampling also has defects: randomly discarded data may contain key feature information of its class, and the classifier may learn only partial information of the majority-class samples, which degrades its classification performance on the majority classes. For a convolutional neural network, on the other hand, the larger the amount of training data, the better the network performance.
Researchers at home and abroad have continuously advanced vehicle classification and recognition technology, but shortcomings remain. First, most studies do not establish classification criteria for vehicles under a specific scenario and instead rely on nationally issued classification standards, which greatly limits the application scenarios. Second, in studies of vehicle classification algorithms, few researchers fuse multiple methods to improve the accuracy of the classification algorithm.
Disclosure of Invention
The invention aims to provide a method for identifying the loading type of a green traffic vehicle compartment based on a convolutional neural network, so as to solve the problem.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for identifying a loading type of a green traffic vehicle compartment based on a convolutional neural network comprises the following steps:
step 1, acquiring a green traffic image;
step 2, formulating a green traffic image validity criterion using a relative evaluation method from image quality assessment, and selecting images with this criterion as training samples for the convolutional neural network model in the subsequent classification and recognition experiments;
step 3, processing the unbalanced data in image classification by synthesizing minority-class samples, increasing the number of training samples with a data enhancement method;
step 4, after data enhancement, performing compartment target detection with the YOLOv2 target detection framework;
step 5, dividing the green traffic vehicles into 8 compartment-loading types according to the order of compartment types and loading modes, numbering the compartment-loading classes, and using the obtained compartment region images as training data for convolutional neural network classification, with 80% of the images used for network training and 20% used to test the classification accuracy;
step 6, training the compartment-loading type classification on the training set with three convolutional neural networks, AlexNet, VGG-16 and ResNet-152;
and step 7, testing the trained AlexNet, VGG-16 and ResNet-152 networks: running the three networks on the test set, performing classification tests to obtain the average accuracy and the overfitting ratio of each network, evaluating the trained networks by average accuracy and overfitting ratio, and selecting the network with the highest accuracy to judge the compartment-loading type of the green traffic vehicle to be identified.
Further, in step 1, the green traffic images are obtained from the green traffic image database of a green traffic management system; the inspected vehicle is photographed with a photographing tool and the photographs are stored in the management system, the photographed content including the vehicle side, the vehicle front, the vehicle rear and the transported goods.
Further, in step 2, the green traffic image validity criterion grades image quality into five levels according to five aspects: image sharpness, background complexity, contour integrity, shooting angle and occlusion of key parts.
Further, in step 3, data enhancement generates more images from the existing image samples by random transformations while ensuring that, from the classification perspective, the generated images still belong to the same class as the original images; the methods adopted include horizontally flipping the picture and adjusting its saturation or contrast.
Further, in step 4, the resolution of the YOLOv2 network input image is 416 × 416 and the output grid size is 13 × 13; with anchor boxes, each grid cell predicts 9 boxes, giving 13 × 13 × 9 = 1521 prediction boxes in total. After the compartment is detected, the designated target region is identified and the parts of the image irrelevant to the target are removed by image cropping, leaving the compartment region to be identified.
Further, image cropping is a method of separating the object identified in the target detection step from the rest of the image. The required target region and the redundant region in the image are bounded by a red frame line, and the RGB color values of the pixels on this bounding frame differ from those of other regions. The image is converted into a matrix and the RGB value on the red frame line is determined; image cropping first extracts the coordinates of the points with this RGB value in the image, and then finds, among the red points, the two vertex coordinates (x1, y1) of the lower-left vertex and (x2, y2) of the upper-right vertex as the bounding points, as given by the following formulas:
x1 = min{x[RGB(z) = (225, 0, 31)]}, y1 = min{y[RGB(z) = (225, 0, 31)]}
x2 = max{x[RGB(z) = (225, 0, 31)]}, y2 = max{y[RGB(z) = (225, 0, 31)]}
In this way the whole red frame is determined and the image is cropped to it; most of the redundant information in the original image is removed, and only the core compartment region is left for image classification.
Further, in step 5, freight vehicles are divided into four types according to compartment type: breast-board truck, bin-gate truck, tank truck or van; cargo loading modes are divided into 3 types: open, canvas-covered or closed.
Further, in step 6, the training objective of the three networks with initial parameters is first set to minimizing the loss function; after a number of iterations, once the loss function value has dropped to a predefined value, the trained networks are obtained.
Compared with the prior art, the invention has the following technical effects:
the method aims at realizing accurate recognition of the carriage-loading type of the green traffic vehicle, and combines a target detection algorithm, unbalanced data set processing and the like with a convolutional neural network model. The method mainly has the following advantages that,
firstly, the method comprises the following steps: and a green traffic image validity judgment standard is made through a relative evaluation method in the image quality evaluation method. And dividing the image quality into five levels from five aspects of image definition, background complexity, contour integrity, shooting angle and key part shielding. The images selected by using the judgment standard can be used as training samples of a subsequent classification and identification experiment convolutional neural network model, so that the problem of poor level of the training samples is avoided;
secondly, the method comprises the following steps: aiming at the problem of unbalanced image type and quantity, unbalanced data is processed by adopting a data oversampling method, so that the balance of various sample quantities is achieved. The problem that the randomly selected rejected data in the under-sampling method possibly contain the key characteristic information of the type is solved;
thirdly, the method comprises the following steps: aiming at the problems that the image contains complex information and is difficult to grasp key points, image preprocessing operation is carried out to remove a complex background irrelevant to a target in the image, so that the image only leaves a core part, and the classification accuracy is greatly improved;
fourthly: the classification standard of the carriage-loading type under a specific scene is established, so that the application scene has great adaptability;
fifth, the method comprises the following steps: aiming at the problems of huge number of images and slow operation speed which are difficult to solve by the traditional image classification algorithm, a convolutional neural network is used for classification;
sixth: the accuracy of the classification algorithm is improved by fusing a target detection algorithm, unbalanced data set processing and the like with a plurality of methods of a convolutional neural network model.
Drawings
Fig. 1 shows a green traffic vehicle compartment-loading type identification process.
Fig. 2 is a network structure diagram of YOLOv 2.
Fig. 3 is a matrix form of an image.
FIG. 4 is a diagram of the effective manual evaluation criteria of the green traffic image.
Fig. 5 is a diagram of an AlexNet network architecture.
Fig. 6 is a view showing the structure of the VGG-16 network.
Fig. 7 is a diagram of a ResNet quick link structure.
Fig. 8 is a green traffic vehicle compartment-loading type classification diagram.
FIG. 9 is a graph of AlexNet loss rate, accuracy change and accuracy per classification.
FIG. 10 is a graph of VGG-16 loss rate, accuracy change and accuracy per classification.
FIG. 11 is a graph of ResNet-152 loss rate, accuracy change and accuracy per classification.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
referring to fig. 1 to 11, a method for identifying a loading type of a green traffic vehicle compartment based on a convolutional neural network includes the following steps:
step 1, acquiring a green traffic image;
step 2, formulating a green traffic image validity criterion using a relative evaluation method from image quality assessment, and selecting images with this criterion as training samples for the convolutional neural network model in the subsequent classification and recognition experiments;
step 3, processing the unbalanced data in image classification by synthesizing minority-class samples, increasing the number of training samples with a data enhancement method;
step 4, after data enhancement, performing compartment target detection with the YOLOv2 target detection framework;
step 5, dividing the green traffic vehicles into 8 compartment-loading types according to the order of compartment types and loading modes, numbering the compartment-loading classes, and using the obtained compartment region images as training data for convolutional neural network classification, with 80% of the images used for network training and 20% used to test the classification accuracy;
step 6, training the compartment-loading type classification on the training set with three convolutional neural networks, AlexNet, VGG-16 and ResNet-152;
and step 7, testing the trained AlexNet, VGG-16 and ResNet-152 networks: running the three networks on the test set, performing classification tests to obtain the average accuracy and the overfitting ratio of each network, evaluating the trained networks by average accuracy and overfitting ratio, and selecting the network with the highest accuracy to judge the compartment-loading type of the green traffic vehicle to be identified.
In step 1, the green traffic images are obtained from the green traffic image database of a green traffic management system; the inspected vehicle is photographed with a photographing tool and the photographs are stored in the management system, the photographed content including the vehicle side, the vehicle front, the vehicle rear and the transported goods.
In step 2, the green traffic image validity criterion grades image quality into five levels according to five aspects: image sharpness, background complexity, contour integrity, shooting angle and occlusion of key parts.
In step 3, data enhancement generates more images from the existing image samples by random transformations while ensuring that, from the classification perspective, the generated images still belong to the same class as the original images; the methods adopted include horizontally flipping the picture and adjusting its saturation or contrast.
In step 4, the resolution of the YOLOv2 network input image is 416 × 416 and the output grid size is 13 × 13; with anchor boxes, each grid cell predicts 9 boxes, giving 13 × 13 × 9 = 1521 prediction boxes in total. After the compartment is detected, the designated target region is identified and the parts of the image irrelevant to the target are removed by image cropping, leaving the compartment region to be identified.
Image cropping is a method of separating the object identified in the target detection step from the rest of the image. The required target region and the redundant region in the image are bounded by a red frame line, and the RGB color values of the pixels on this bounding frame differ from those of other regions. The image is converted into a matrix and the RGB value on the red frame line is determined; image cropping first extracts the coordinates of the points with this RGB value in the image, and then finds, among the red points, the two vertex coordinates (x1, y1) of the lower-left vertex and (x2, y2) of the upper-right vertex as the bounding points, as given by the following formulas:
x1 = min{x[RGB(z) = (225, 0, 31)]}, y1 = min{y[RGB(z) = (225, 0, 31)]}
x2 = max{x[RGB(z) = (225, 0, 31)]}, y2 = max{y[RGB(z) = (225, 0, 31)]}
In this way the whole red frame is determined and the image is cropped to it; most of the redundant information in the original image is removed, and only the core compartment region is left for image classification.
In step 5, freight vehicles are divided into four types according to compartment type: breast-board truck, bin-gate truck, tank truck or van; cargo loading modes are divided into 3 types: open, canvas-covered or closed.
In step 6, the training objective of the three networks with initial parameters is first set to minimizing the loss function; after a number of iterations, once the loss function value has dropped to a predefined value, the trained networks are obtained.
Example:
the method aims at realizing accurate recognition of the loading type of the green traffic vehicle compartment, and combines a target detection algorithm, unbalanced data set processing and the like with a convolutional neural network model. Each part is detailed as follows:
image importing: selecting car side photographs shot in a time period of 2018.12.1-2018.12.31 from a MySQL database of an expressway green traffic management platform in Shaanxi province, judging the image quality according to a green traffic image validity judgment standard shown in figure 5, manually judging and determining 1373 valid images, 2024 invalid images and 3397 images in total, wherein the total number of the car side photographs is 3397.
Unbalanced data processing: invalid pictures are removed, minority-class samples are synthesized from the valid pictures, and the number of training samples is increased by data enhancement. Available data enhancement methods include horizontally flipping the picture, adjusting saturation, adjusting contrast, and the like. After unbalanced-data processing of the valid images, 5310 valid images are obtained.
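As an illustration of this data enhancement step, the following is a minimal sketch using PIL; the transform parameters and the per-class oversampling target are assumptions for illustration and are not specified in the patent.

```python
# Illustrative sketch of the data enhancement used to balance the classes
# (horizontal flip, saturation and contrast adjustment). The parameter ranges
# and augmentation strategy below are assumptions, not taken from the patent.
import random
from PIL import Image, ImageEnhance, ImageOps

def augment_once(img: Image.Image) -> Image.Image:
    """Apply a random combination of horizontal flip, saturation and contrast changes."""
    if random.random() < 0.5:
        img = ImageOps.mirror(img)                                       # horizontal flip
    img = ImageEnhance.Color(img).enhance(random.uniform(0.7, 1.3))      # saturation
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.7, 1.3))   # contrast
    return img

def oversample_class(images, target_count):
    """Generate augmented copies of a minority class until it reaches target_count."""
    out = list(images)
    while len(out) < target_count:
        out.append(augment_once(random.choice(images)))
    return out
```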
Target detection: after the unbalanced-data processing, the YOLOv2 target detection framework is used to detect the compartment. The compartment is detected and identified as a single whole target; after the compartment region is identified and erroneous data are removed, 5303 valid images remain.
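For reference, a sketch of how the YOLOv2 output described above (416 × 416 input, 13 × 13 grid, 9 anchor boxes per cell, 1521 candidate boxes) can be reduced to a single compartment bounding box; decoding the raw network output into (x1, y1, x2, y2, confidence) tuples is assumed to have been done already, since the patent does not give those details.

```python
# Sketch: selecting the compartment box from YOLOv2-style predictions.
# The 13 x 13 grid with 9 anchor boxes per cell yields 13 * 13 * 9 = 1521 candidates.
GRID = 13
ANCHORS = 9
assert GRID * GRID * ANCHORS == 1521

def pick_compartment_box(boxes, conf_threshold=0.5):
    """boxes: iterable of (x1, y1, x2, y2, confidence) in image coordinates,
    already decoded from the 1521 grid/anchor predictions.
    Returns the highest-confidence box above the threshold, or None."""
    candidates = [b for b in boxes if b[4] >= conf_threshold]
    return max(candidates, key=lambda b: b[4]) if candidates else None
```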
Image cropping: after the compartment contour is identified, the part of the image outside the identified region needs to be cut away. Because the red frame line bounds the required target region and the redundant region in the image, the RGB color values of the pixels on the bounding frame differ from those of other regions. The image is therefore converted into a matrix; the RGB value on the red frame line is (225, 0, 31), so image cropping first extracts the coordinates of the points with RGB value (225, 0, 31) in the image, then finds, among the red points, the two vertex coordinates (x1, y1) of the lower-left vertex and (x2, y2) of the upper-right vertex as bounding points, and finally removes the redundant part of the image outside the compartment by cropping, leaving only the compartment-region image. At this point 5303 valid images remain.
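A minimal numpy/PIL sketch of this red-frame cropping: find all pixels whose RGB value equals (225, 0, 31), take the minimum and maximum of their coordinates as the two bounding vertices, and crop to that rectangle. The exact-match test and the fallback when no frame pixel is found are assumptions; the patent only specifies the RGB value of the frame line.

```python
# Sketch of the red-frame-line image cropping: locate pixels with RGB (225, 0, 31),
# take the min/max coordinates as the bounding vertices, and crop the compartment region.
import numpy as np
from PIL import Image

FRAME_RGB = (225, 0, 31)

def crop_to_red_frame(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    arr = np.asarray(img)                       # shape (H, W, 3)
    mask = np.all(arr == FRAME_RGB, axis=-1)    # True where the pixel matches the frame color
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return img                              # no frame found: keep the original image
    x1, y1 = xs.min(), ys.min()                 # first bounding vertex (minimum coordinates)
    x2, y2 = xs.max(), ys.max()                 # second bounding vertex (maximum coordinates)
    return img.crop((int(x1), int(y1), int(x2) + 1, int(y2) + 1))
```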
Training the convolutional neural networks: the green traffic vehicles are classified into 6 compartment-loading types, see Fig. 8. Of the 5303 valid compartment-region images obtained, 4508 are selected as the training set for convolutional neural network classification and 795 as the test set. The distribution of image samples over the 6 compartment-loading types is shown in the following table:
type (B) Number of samples collected manually Number of samples after data enhancement Number of samples after target detection and image cutting
11 297 891 888
12 87 783 779
21 319 957 957
22 228 912 912
33 147 882 882
43 295 885 885
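For clarity, a sketch of the class numbering used in the table above. Reading the first digit as the compartment type and the second as the loading mode is an interpretation based on the class list in step 5 (compartment types in the order breast-board, bin-gate, tank, van; loading modes in the order open, canvas-covered, closed), not an explicit statement of the patent.

```python
# Assumed interpretation of the 6 compartment-loading class codes:
# first digit = compartment type (1 breast-board, 2 bin-gate, 3 tank, 4 van),
# second digit = loading mode (1 open, 2 canvas-covered, 3 closed).
COMPARTMENT = {1: "breast-board truck", 2: "bin-gate truck", 3: "tank truck", 4: "van"}
LOADING = {1: "open", 2: "canvas-covered", 3: "closed"}

CLASSES = [11, 12, 21, 22, 33, 43]  # the 6 observed compartment-loading types

def describe(code: int) -> str:
    return f"{COMPARTMENT[code // 10]}, {LOADING[code % 10]} loading"

for c in CLASSES:
    print(c, "->", describe(c))
```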
The compartment-loading type classification is trained on the training set with the three convolutional neural networks AlexNet, VGG-16 and ResNet-152. The training objective of the three networks with initial parameters is first set to reducing the loss function; after the iterations are completed, with the number of iterations set to 20, three trained classification networks are obtained.
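A condensed PyTorch-style sketch of this training procedure, run once for each of the three networks. The choice of optimiser, learning rate and cross-entropy loss, and the numeric loss target, are assumptions for illustration; the patent only states that training runs for 20 iterations and stops once the loss falls to a predefined value.

```python
# Sketch of the classification training loop for AlexNet / VGG-16 / ResNet-152.
# Optimiser, learning rate, loss target and the use of cross-entropy are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def train(model, train_loader, epochs=20, loss_target=0.05, device="cuda"):
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    for epoch in range(epochs):
        running = 0.0
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running += loss.item() * images.size(0)
        epoch_loss = running / len(train_loader.dataset)
        if epoch_loss <= loss_target:   # stop once the loss reaches the predefined value
            break
    return model

# e.g. a ResNet-152 with its final layer replaced for the 6 compartment-loading classes
resnet = models.resnet152()
resnet.fc = nn.Linear(resnet.fc.in_features, 6)
```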
Testing the convolutional neural networks: the trained AlexNet, VGG-16 and ResNet-152 networks are used to perform 6-class compartment-loading type classification tests on the test set. The loss rate, the accuracy curves and the per-class accuracy from the confusion matrix for the experiments are shown in Figs. 9, 10 and 11. The following table shows the test results of the different networks for green traffic vehicle compartment-loading type classification:
[Table: average accuracy, overfitting ratio and average training time per iteration for AlexNet, VGG-16 and ResNet-152; key values are discussed below.]
From the above table it can be seen that ResNet-152 has the highest classification accuracy, 98.82%, and the lowest overfitting ratio. In terms of average training time, ResNet-152 takes 110.20 seconds per iteration, while AlexNet takes the least, only 9.85 seconds per iteration. The average time and classification accuracy of the VGG-16 network are at an intermediate level. However, overfitting appears in all three network models in the classification experiment; the reason may be that the number of samples of certain classes is small, the differences between samples after data enhancement are small, and the convolutional neural networks learn the model in too much detail. The trained network is selected according to the overfitting ratio and the average accuracy in the test results, and the selected network is used to judge the compartment-loading type of the green traffic vehicle to be identified.
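To make the reported metrics concrete, a sketch of computing per-class accuracy, average accuracy and an overfitting ratio on the test set. The patent does not define the overfitting ratio explicitly, so treating it as training accuracy divided by test accuracy is an assumption used only for illustration.

```python
# Sketch of the test-time evaluation: per-class accuracy, average accuracy and
# an assumed overfitting ratio (training accuracy / test accuracy).
import torch
from collections import defaultdict

@torch.no_grad()
def accuracy(model, loader, device="cuda"):
    model.eval()
    correct, total = 0, 0
    per_class = defaultdict(lambda: [0, 0])          # label -> [correct, total]
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        for p, y in zip(preds, labels):
            per_class[int(y)][1] += 1
            per_class[int(y)][0] += int(p == y)
        correct += int((preds == labels).sum())
        total += labels.size(0)
    per_class_acc = {c: ok / n for c, (ok, n) in per_class.items()}
    return correct / total, per_class_acc

def overfitting_ratio(train_acc, test_acc):
    """Assumed definition: how much better the model does on training data than on test data."""
    return train_acc / test_acc
```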
The green traffic vehicle compartment-loading type identification method can be used to identify and classify the compartment-loading type of green traffic vehicles. The method combines a target detection algorithm, unbalanced dataset processing and other techniques with convolutional neural network models. Preprocessing the pictures in advance eliminates some of the factors that would reduce classification accuracy and establishes a library of valid pictures, while the data enhancement method solves the sample-imbalance problem. A user only needs to build the training picture library once and carry out each step to complete the training of the neural network; in subsequent identification the trained network can be used directly to identify the compartment-loading type of green traffic vehicles, which greatly improves identification efficiency.
Overfitting appears in all three network models in the classification experiment, probably because the number of samples of certain classes is very small, the differences between samples after data enhancement are small, and the convolutional neural networks learn the model in too much detail; therefore, in practical applications a large number of samples should be collected to ensure a sufficient sample quantity. All three networks showed good classification performance in the experiment, and ResNet-152 performed more strongly than the other two in the multi-class experiment: although its training time is longer, its accuracy reaches 98.82%, an excellent result.
Therefore, the invention can smoothly complete the task of identifying and classifying the loading type of the green traffic vehicle.

Claims (7)

1. A method for identifying a loading type of a green traffic vehicle compartment based on a convolutional neural network is characterized by comprising the following steps:
step 1, acquiring a green traffic image;
step 2, formulating a green traffic image validity criterion using a relative evaluation method from image quality assessment, and selecting images with this criterion as training samples for the convolutional neural network model in the subsequent classification and recognition experiments;
step 3, processing the unbalanced data in image classification by synthesizing minority-class samples, and increasing the number of training samples with a data enhancement method;
step 4, after data enhancement, performing compartment target detection with the YOLOv2 target detection framework to obtain compartment region images;
step 5, dividing the green traffic vehicles into 6 compartment-loading types according to the order of compartment types and loading modes, numbering the compartment-loading classes, and using the obtained compartment region images as training data for convolutional neural network classification, with 80% of the images used for network training and 20% used to test the classification accuracy;
step 6, training the compartment-loading type classification on the training set with three convolutional neural networks, AlexNet, VGG-16 and ResNet-152;
step 7, testing the trained AlexNet, VGG-16 and ResNet-152 networks: running the three networks on the test set, performing classification tests to obtain the average accuracy and the overfitting ratio of each network, evaluating the trained networks by average accuracy and overfitting ratio, and selecting the network with the highest accuracy to judge the compartment-loading type of the green traffic vehicle to be identified;
wherein, in step 3, data enhancement generates more images from the existing image samples by random transformations while ensuring that, from the classification perspective, the generated images still belong to the same class as the original images; the methods adopted include horizontally flipping the picture and adjusting its saturation or contrast.
2. The method for identifying the compartment-loading type of a green traffic vehicle based on a convolutional neural network as claimed in claim 1, wherein in step 1 the green traffic images are obtained from the green traffic image database of a green traffic management system; the inspected vehicle is photographed with a photographing tool and the photographs are stored in the management system, the photographed content including the vehicle side, the vehicle front, the vehicle rear and the transported goods.
3. The method for identifying the compartment-loading type of a green traffic vehicle based on a convolutional neural network as claimed in claim 1, wherein in step 2 the green traffic image validity criterion grades image quality into five levels according to five aspects: image sharpness, background complexity, contour integrity, shooting angle and occlusion of key parts.
4. The method for identifying the compartment-loading type of a green traffic vehicle based on a convolutional neural network as claimed in claim 1, wherein in step 4 the resolution of the YOLOv2 network input image is 416 × 416 and the output grid size is 13 × 13; with anchor boxes, each grid cell predicts 9 boxes, giving 13 × 13 × 9 = 1521 prediction boxes in total; after the compartment is detected, the designated target region is identified and the parts of the image irrelevant to the target are removed by image cropping, leaving the compartment region to be identified.
5. The method as claimed in claim 4, wherein image cropping is a method of separating the object identified in the target detection step from the rest of the image; the required target region and the redundant region in the image are bounded by a red frame line, the RGB color values of the pixels on this bounding frame differ from those of other regions, the image is converted into a matrix and the RGB value on the red frame line is determined; image cropping first extracts the coordinates of the points with this RGB value in the image, then finds, among the red points, the two vertex coordinates (x1, y1) of the lower-left vertex and (x2, y2) of the upper-right vertex as the bounding points, as given by:
x1 = min{x[RGB(z) = (225, 0, 31)]}, y1 = min{y[RGB(z) = (225, 0, 31)]}
x2 = max{x[RGB(z) = (225, 0, 31)]}, y2 = max{y[RGB(z) = (225, 0, 31)]}
In this way the whole red frame is determined and the image is cropped to it; most of the redundant information in the original image is removed, and only the core compartment region is left for image classification.
6. The method for identifying the compartment-loading type of a green traffic vehicle based on a convolutional neural network as claimed in claim 1, wherein in step 5 freight vehicles are divided into four types according to compartment type: breast-board truck, bin-gate truck, tank truck or van; cargo loading modes are divided into 3 types: open, canvas-covered or closed.
7. The method for identifying the compartment-loading type of a green traffic vehicle based on a convolutional neural network as claimed in claim 1, wherein in step 6 the training objective of the three networks with initial parameters is first set to minimizing the loss function, and after a number of iterations, once the loss function value has dropped to a predefined value, the trained networks are obtained.
CN201910803745.4A 2019-08-28 2019-08-28 Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network Active CN110533098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910803745.4A CN110533098B (en) 2019-08-28 2019-08-28 Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910803745.4A CN110533098B (en) 2019-08-28 2019-08-28 Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN110533098A CN110533098A (en) 2019-12-03
CN110533098B true CN110533098B (en) 2022-03-29

Family

ID=68664924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910803745.4A Active CN110533098B (en) 2019-08-28 2019-08-28 Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN110533098B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112432654B (en) * 2020-11-20 2023-04-07 浙江华锐捷技术有限公司 State analysis method and device for muck truck and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096531A * 2016-05-31 2016-11-09 安徽省云力信息技术有限公司 A multi-type vehicle detection method for traffic images based on deep learning
CN107220603A (en) * 2017-05-18 2017-09-29 惠龙易通国际物流股份有限公司 Vehicle checking method and device based on deep learning
CN108898044A (en) * 2018-04-13 2018-11-27 顺丰科技有限公司 Charging ratio acquisition methods, device, system and storage medium
CN109416250A (en) * 2017-10-26 2019-03-01 深圳市锐明技术股份有限公司 Carriage status detection method, carriage status detection device and the terminal of haulage vehicle
CN109934121A (en) * 2019-02-21 2019-06-25 江苏大学 A kind of orchard pedestrian detection method based on YOLOv3 algorithm
CN109993138A (en) * 2019-04-08 2019-07-09 北京易华录信息技术股份有限公司 A kind of car plate detection and recognition methods and device
CN110176143A (en) * 2019-06-10 2019-08-27 长安大学 A kind of highway traffic congestion detection method based on deep learning algorithm

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9665802B2 (en) * 2014-11-13 2017-05-30 Nec Corporation Object-centric fine-grained image classification
US10380438B2 (en) * 2017-03-06 2019-08-13 Honda Motor Co., Ltd. System and method for vehicle control based on red color and green color detection


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ResNet-Based Vehicle Classification and Localization in Traffic Surveillance Systems;Heechul Jung等;《 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)》;20170824;第934-940页 *
Front vehicle target detection based on an improved VGG convolutional neural network; Chen Yi et al.; 《数学制造科学》; 2018-12-31; pp. 282-287 *

Also Published As

Publication number Publication date
CN110533098A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN110532946B (en) Method for identifying axle type of green-traffic vehicle based on convolutional neural network
CN109816024B (en) Real-time vehicle logo detection method based on multi-scale feature fusion and DCNN
CN108345911B (en) Steel plate surface defect detection method based on convolutional neural network multi-stage characteristics
CN109961049B (en) Cigarette brand identification method under complex scene
CN110135503B (en) Deep learning identification method for parts of assembly robot
CN109509187B (en) Efficient inspection algorithm for small defects in large-resolution cloth images
CN109284669A (en) Pedestrian detection method based on Mask RCNN
CN110399884B (en) Feature fusion self-adaptive anchor frame model vehicle detection method
CN108416348A (en) Plate location recognition method based on support vector machines and convolutional neural networks
CN110909800A (en) Vehicle detection method based on fast R-CNN improved algorithm
CN104598885B (en) The detection of word label and localization method in street view image
CN113569667B (en) Inland ship target identification method and system based on lightweight neural network model
CN116310785B (en) Unmanned aerial vehicle image pavement disease detection method based on YOLO v4
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
CN114627383B (en) Small sample defect detection method based on metric learning
CN111738114B (en) Vehicle target detection method based on anchor-free accurate sampling remote sensing image
CN114926407A (en) Steel surface defect detection system based on deep learning
CN106650823A (en) Probability extreme learning machine integration-based foam nickel surface defect classification method
CN113096085A (en) Container surface damage detection method based on two-stage convolutional neural network
CN113256624A (en) Continuous casting round billet defect detection method and device, electronic equipment and readable storage medium
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN111340019A (en) Grain bin pest detection method based on Faster R-CNN
CN115294089A (en) Steel surface defect detection method based on improved YOLOv5
CN116012291A (en) Industrial part image defect detection method and system, electronic equipment and storage medium
CN113205026A (en) Improved vehicle type recognition method based on fast RCNN deep learning network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant