CN106257490A - Method and system for detecting driving vehicle information - Google Patents
Method and system for detecting driving vehicle information
- Publication number
- CN106257490A CN106257490A CN201610575048.4A CN201610575048A CN106257490A CN 106257490 A CN106257490 A CN 106257490A CN 201610575048 A CN201610575048 A CN 201610575048A CN 106257490 A CN106257490 A CN 106257490A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- pictures
- appearance
- training
- angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The present invention provides a method for detecting driving vehicle information, including: classifying vehicle pictures at least according to vehicle angle and vehicle appearance to generate a plurality of vehicle pictures of a plurality of categories; training a deep convolutional neural network based on the plurality of vehicle pictures; using the trained deep convolutional neural network to obtain at least the vehicle angle features and vehicle appearance features of image blocks in a predetermined area of the plurality of vehicle pictures, and training a linear classifier; and using the trained deep convolutional neural network to obtain at least the vehicle angle feature and vehicle appearance feature of the image block in the predetermined area of a vehicle picture to be recognized, importing them into the trained linear classifier for discrimination, and determining at least the angle and the appearance of the vehicle. The present invention also provides a system for detecting driving vehicle information. The method and system can detect information of a driving vehicle, effectively improve traffic management efficiency, improve recognition accuracy, and have a wide application prospect.
Description
Technical Field
The invention relates to the field of computer vision in intelligent transportation, and in particular to a method and a system for detecting information of a running vehicle.
Background
With the advancement of science and technology and the improvement of people's quality of life, vehicles have become an important means of transportation in daily life. In an urban Intelligent Transportation System (ITS for short), a very important application is to quickly find a target vehicle according to various information such as vehicle body angle and vehicle appearance, especially for unlicensed vehicles, so accurate angle and appearance recognition can serve as an effective index for quickly searching for vehicles.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the related art:
at present, research on vehicle angle recognition mainly relies on manually searching massive videos or pictures for the vehicle to be identified and then analyzing it for evidence. Especially in case investigation, this consumes a large amount of manpower and material resources, so that solving a case takes a long time and is inefficient. In addition, modern road sections are complex and vehicles are varied, and videos captured by intersection monitors sometimes cannot clearly reflect the driving direction of a vehicle. Moreover, since the license plate of a vehicle involved in a case may be replaced or hidden, angle recognition of vehicles is increasingly applied in vehicle traffic management.
Currently, many methods have been developed for vehicle appearance recognition, such as vehicle brand recognition, vehicle style recognition and vehicle color recognition. Among them, a considerable number of vehicle color recognition methods use conventional color feature extraction, for example HSV (Hue Saturation Value) color features, YUV color features and RGB color features, and many use an SVM (Support Vector Machine) for discriminant analysis. However, these prior-art methods are sensitive to illumination conditions, such as blurred scenes, night scenes and strong-highlight scenes, and their limited generalization capability leads to a low recognition rate.
In addition, traditional vehicle color recognition methods mainly work on the head or the tail of the vehicle. Because few parts of the head or tail reflect the vehicle color, and the colors of the windows, lamps and the like may differ greatly from the body color, traditional methods have to remove the regions that may interfere with color recognition by image segmentation, which involves a large amount of calculation, high cost and resource consumption, and long processing time.
Disclosure of Invention
An embodiment of the present invention provides a method and a system for detecting information of a traveling vehicle, so as to solve at least one of the above technical problems.
In a first aspect, an embodiment of the present invention provides a method for detecting traveling vehicle information, including:
classifying the vehicle pictures according to at least a vehicle angle and a vehicle appearance to generate a plurality of vehicle pictures of various categories;
training a deep convolutional neural network based on a plurality of vehicle pictures;
utilizing the trained deep convolutional neural network to at least obtain the vehicle angle characteristics and the vehicle appearance characteristics of the image blocks in the preset area in the plurality of vehicle pictures, and training a linear classifier;
and at least acquiring the vehicle angle characteristic and the vehicle appearance characteristic of the image block of the preset area in the vehicle picture to be recognized by using the trained deep convolutional neural network, importing the trained linear classifier for judgment, and at least determining the angle of the vehicle and the appearance of the vehicle.
In a second aspect, an embodiment of the present invention provides a system for detecting information of a traveling vehicle, including:
the classification module is used for classifying the vehicle pictures at least according to the vehicle angles and the vehicle appearances so as to generate a plurality of vehicle pictures of various categories;
the first training module is used for training the deep convolutional neural network based on a plurality of vehicle pictures;
the second training module is used for at least acquiring vehicle angle characteristics and vehicle appearance characteristics of image blocks in a preset area in a plurality of vehicle pictures by using the trained deep convolutional neural network, and training a linear classifier;
and the determining module is used for at least acquiring the vehicle angle characteristic and the vehicle appearance characteristic of the image block of the preset area in the vehicle picture to be recognized by utilizing the trained deep convolutional neural network, importing the trained linear classifier for judgment, and at least determining the angle of the vehicle and the appearance of the vehicle.
In a third aspect, the embodiment of the present invention further provides a non-volatile computer storage medium storing computer-executable instructions for performing any one of the above-described methods for detecting traveling vehicle information according to the present invention.
In a fourth aspect, an embodiment of the present invention further provides a detection apparatus, including: at least one processor, and a memory; wherein the memory stores operating instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform any one of the above-described methods of detecting moving vehicle information of the present invention.
The method and the system provided by the embodiments of the invention can greatly improve the efficiency of an intelligent traffic management system. For any vehicle image in multimedia (such as video and images) in the traffic management system, the trained deep convolutional neural network and classifier can be used directly to determine at least the angle feature and the appearance feature of the vehicle without separately detecting the vehicle, thereby locking a target vehicle, determining its driving direction, and greatly saving system processing time. The method and the system have strong anti-interference capability, can effectively solve the problem of erroneous driving-vehicle information in the prior art, improve recognition accuracy, greatly save manpower and material resources, and have wide application prospects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a method of detecting information on a traveling vehicle according to an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative method of detecting moving vehicle information in accordance with an embodiment of the present invention;
FIG. 3 is a flow chart of an alternative method of detecting moving vehicle information in accordance with an embodiment of the present invention;
FIG. 4 is a flow chart of determining a vehicle angle characteristic according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a system for detecting information on a traveling vehicle according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a second training module according to an embodiment of the present invention;
fig. 7 is a schematic configuration diagram of a detection apparatus that detects information of a running vehicle according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, there is shown a method of detecting traveling vehicle information according to an embodiment of the present invention, including:
s11: classifying the vehicle pictures according to at least a vehicle angle and a vehicle appearance to generate a plurality of vehicle pictures of various categories;
s12: training a deep convolutional neural network based on a plurality of vehicle pictures;
s13: utilizing the trained deep convolutional neural network to at least obtain the vehicle angle characteristics and the vehicle appearance characteristics of the image blocks in the preset area in the plurality of vehicle pictures, and training a linear classifier;
s14: and at least acquiring the vehicle angle characteristic and the vehicle appearance characteristic of the image block of the preset area in the vehicle picture to be recognized by using the trained deep convolutional neural network, importing the trained linear classifier for judgment, and at least determining the angle of the vehicle and the appearance of the vehicle.
The method provided by the illustrated embodiment is largely divided into two parts: a training phase and a testing phase. The training phase covers the first three steps of the method. First, the vehicle pictures are classified at least by angle and appearance to generate a plurality of vehicle pictures of various categories, and the vehicle pictures are fed into the designed deep convolutional neural network to train the learnable parameters of each network layer, yielding the trained deep convolutional neural network. On this basis, vehicle features are extracted with the trained deep convolutional neural network and the parameters of the linear classifier are trained on these features, yielding the trained linear classifier. The testing phase verifies the trained deep convolutional neural network and the trained linear classifier on vehicle pictures to be recognized.
The method provided by this embodiment can at least effectively solve the problem of the low recognition rate of vehicle angle and appearance features in the prior art. The method can be applied in various scenarios. For traffic incidents (such as wrong-way driving) recorded by a driving recorder or a road vehicle monitor, it can help traffic police quickly and accurately determine the driving characteristics of each vehicle as well as the appearance and driving-angle characteristics of the offending vehicle, so that accidents are handled more quickly and effectively and traffic order is restored. The method can also be applied, via a vehicle's driving recorder, to automatic driving: if the driving angle of a driven vehicle deviates, automatically judging the appearance and driving angles of the other vehicles allows a collision to be avoided in time, reducing traffic accidents. For a vehicle monitor at an intersection, if vehicles traveling in multiple directions (such as reverse, leftward, rightward and forward) are detected in the road section, the road section can be judged to be a complex one. In case investigation, compared with the prior art, the method can avoid vehicle identification errors caused by human factors, effectively lock the vehicle to be identified according to its appearance features, and then determine the driving direction of the vehicle involved according to its angle features, so that vehicle route control can be carried out in a targeted manner and the investigation is accelerated.
Referring to fig. 2, which is an alternative embodiment of the method shown in fig. 1, the appearance of the vehicle in the method shown in fig. 1 includes the color of the vehicle, and therefore, the specific implementation process for step S11 is as follows:
s21: the vehicle pictures are classified according to the vehicle angles of the first quantity category and the vehicle colors of the second quantity category to generate a plurality of vehicle pictures of the first quantity multiplied by the second quantity and the plurality of categories.
In the illustrated embodiment, the appearance feature of the vehicle is the vehicle color feature, so the multiple vehicle pictures are discriminated and classified according to driving angle and color. Here the preset angle types number 7 (the first number): forward vehicle head, backward vehicle tail, leftward vehicle head, leftward vehicle tail, rightward vehicle head, rightward vehicle tail, and lateral vehicle body. The vehicle pictures may first be classified by angle into 7 groups of different angles, and the pictures in each group may then be classified by color, where the color types also number 7 (the second number): red, yellow, blue, green, white, black and brown. Alternatively, color classification may be performed first and angle classification second. Either way, 7 × 7 = 49 vehicle picture groups are obtained, i.e., 49 combinations of color and angle features.
Further, the deep convolutional neural network and the linear classifier are trained on the classified vehicle image feature information (angle and color). In the testing stage, the trained deep convolutional neural network extracts the angle feature and the color feature of the vehicle, the linear classifier discriminates the extracted features, the probabilities that the vehicle belongs to the different categories are calculated, the maximum probability value is selected, and the vehicle is assigned the features corresponding to that maximum, which include at least the color feature and the angle feature of the vehicle.
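For illustration, the following is a minimal sketch (not the patented implementation) of how the 7 angle categories and 7 color categories can be combined into 49 joint classes and how the class with the maximum probability is selected; the class names, their ordering and the use of NumPy are assumptions.

```python
import numpy as np

ANGLES = ["head-forward", "tail-backward", "head-left", "tail-left",
          "head-right", "tail-right", "side-body"]
COLORS = ["red", "yellow", "blue", "green", "white", "black", "brown"]

# 7 x 7 = 49 joint (angle, color) categories.
JOINT_CLASSES = [(angle, color) for angle in ANGLES for color in COLORS]

def decide(class_probabilities):
    """Return the (angle, color) pair with the highest classifier probability."""
    probabilities = np.asarray(class_probabilities)   # shape (49,)
    return JOINT_CLASSES[int(np.argmax(probabilities))]

# Example: a near-uniform distribution that slightly favors class index 10.
p = np.full(49, 1.0 / 49)
p[10] += 0.1
print(decide(p))   # -> ('tail-backward', 'green') under this ordering
```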
Referring to fig. 3, which shows an alternative embodiment of the method shown in fig. 1, the specific implementation process of step S13 in the method shown in fig. 1 includes:
s31: extracting the characteristics of image blocks in a preset area in a plurality of vehicle pictures by using the trained deep convolutional neural network, wherein the characteristics of the image blocks at least comprise gradient characteristics and edge texture characteristics;
s32: and determining at least an angle characteristic of the vehicle and an appearance characteristic of the vehicle based on the extracted features of the image blocks, and training a linear classifier by using at least the angle characteristic of the vehicle and the appearance characteristic of the vehicle.
In the illustrated embodiment, the vehicle pictures mainly comprise static pictures and dynamic pictures. A static picture mainly refers to a single picture (such as an electronic photograph of a vehicle), in which the vehicle image can be detected by combining sliding-window feature detection with vehicle-window positioning. A dynamic picture mainly refers to a video picture, in which a moving vehicle can be detected with a foreground detection algorithm based on multi-model Gaussian discrimination, and the vehicle image is then obtained in combination with sliding-window feature detection.
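A hedged sketch of the two acquisition paths described above follows. OpenCV's mixture-of-Gaussians background subtractor is used as a stand-in for the multi-model Gaussian foreground detector, and the window size, step and area threshold are illustrative assumptions; the patent does not name a specific library.

```python
import cv2

def moving_vehicle_regions(video_path, min_area=2000):
    """Yield (frame, bounding box) pairs for moving objects in a video (dynamic pictures)."""
    capture = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()    # mixture-of-Gaussians background model
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Keep only confident foreground pixels and extract connected regions.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            if cv2.contourArea(contour) >= min_area:
                yield frame, cv2.boundingRect(contour)   # (x, y, w, h)
    capture.release()

def sliding_windows(image, window=(128, 128), step=32):
    """Candidate windows over a static picture; each window would then be scored by a detector."""
    height, width = image.shape[:2]
    for y in range(0, height - window[1] + 1, step):
        for x in range(0, width - window[0] + 1, step):
            yield image[y:y + window[1], x:x + window[0]], (x, y)
```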
In the illustrated embodiment, features of the vehicle image are extracted with the feature extraction method, the angle type and appearance type of the vehicle are determined from the extracted features, and the plurality of vehicle pictures are classified into groups of different angles and different appearance types. The preset angle types of the invention are 7: forward vehicle head, backward vehicle tail, leftward vehicle head, leftward vehicle tail, rightward vehicle head, rightward vehicle tail, and lateral vehicle body. The appearance type of the vehicle can be a color feature, or a style, a brand and the like representing the appearance of the vehicle. The color features are not described in detail here; the style features describe the vehicle's form, such as truck, car or bus, and the brand features are vehicle brands, such as Volkswagen, BMW, Audi, and so on.
Further, the extracted features of the vehicle image include at least a HOG feature (Histogram of Oriented Gradients) and an LBP feature (Local Binary Patterns). The HOG feature is extracted by calculating and counting histograms of gradient directions over local regions of the vehicle image to form the vehicle feature; the LBP feature is extracted by describing the local spatial structure of the image to obtain image texture features. To determine the vehicle driving angle, the driving route of the vehicle is first determined from the gradient direction obtained by HOG feature extraction; the vehicle head and vehicle tail are then classified with a KNN clustering algorithm, while the LBP features are extracted to support the head/tail judgment; finally, the driving angle is determined from the vehicle shape and the driving gradient direction. For example (see fig. 4), if the driving route determined from the HOG gradient direction is a straight line pointing to the left, the KNN algorithm can classify it as either a vehicle head driving toward the front left or a vehicle tail driving toward the rear left; the head/tail texture judgment from the LBP features then decides between them: if the gradient direction points toward the vehicle head, the vehicle is driving toward the front left.
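As an illustration of the descriptors named above, the following sketch extracts a HOG descriptor and an LBP texture histogram with scikit-image and concatenates them into one feature vector; the parameter values (cell sizes, LBP radius, bin count) are assumptions, not the patent's settings.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern

def hog_lbp_features(vehicle_image):
    """Concatenate a gradient-direction (HOG) descriptor and a texture (LBP) histogram."""
    grey = rgb2gray(vehicle_image)

    # Histogram of oriented gradients over local cells of the vehicle image.
    hog_vector = hog(grey, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), feature_vector=True)

    # Uniform local binary patterns describe edge texture (e.g. head vs. tail appearance).
    lbp = local_binary_pattern(grey, P=8, R=1, method="uniform")
    lbp_histogram, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([hog_vector, lbp_histogram])
```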
Furthermore, the classification of the various vehicle pictures is mainly performed with a classification algorithm such as the KNN algorithm (k-Nearest Neighbor), which needs no training, has low complexity, and is suitable for multi-class problems (multi-modal cases in which an object has multiple class labels). The gradient-direction lines obtained from the HOG features can be divided into four types: vertical, leftward, rightward and horizontal. Lines in the vertical, leftward and rightward directions can be classified by the KNN algorithm into forward vehicle head, backward vehicle tail, leftward vehicle head, leftward vehicle tail, rightward vehicle head and rightward vehicle tail; vehicles traveling in the horizontal direction are classified as lateral vehicle bodies, giving 7 types of angle features in total. Because the KNN algorithm mainly depends on a limited number of neighboring vehicle pictures, it is better suited than other methods (such as SVM) to picture sets whose classes intersect or overlap heavily.
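A minimal sketch of this KNN step follows, assuming scikit-learn and the seven angle labels listed above; the feature vectors would come from the HOG/LBP extraction sketched earlier, and k is an illustrative choice.

```python
from sklearn.neighbors import KNeighborsClassifier

ANGLE_LABELS = ["head-forward", "tail-backward", "head-left", "tail-left",
                "head-right", "tail-right", "side-body"]

def train_angle_knn(feature_vectors, angle_labels, k=5):
    """feature_vectors: (n_samples, n_features); angle_labels: one of ANGLE_LABELS per sample."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(feature_vectors, angle_labels)
    return knn

# Usage: knn = train_angle_knn(X_train, y_train); predicted_angles = knn.predict(X_new)
```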
The illustrated embodiment provides a training process for vehicle pictures that includes training the deep convolutional neural network and training the linear classifier. For training the deep convolutional neural network, in order to improve computational efficiency, the network adopted by the method uses a small number of convolutional and sampling layers, specifically four convolutional layers and three down-sampling layers. The input layer reads in simply normalized pictures; each convolutional layer processes the multiple vehicle pictures of different categories, so image features of multiple vehicle pictures can be obtained at the same position. Each convolutional layer is followed by a down-sampling layer that performs local averaging and reduces the resolution of the vehicle picture (the down-sampling method is not limited), so the obtained features are invariant to deformation and translation. Training the deep neural network means training and fixing the network weight parameters of each structure in the network, yielding the trained deep convolutional neural network. When the vehicle image features are comprehensive and sufficient in number, the deep neural network performs feature extraction and classification well, tolerates bad data, and adapts to a variety of data environments. The vehicle features extracted by the deep neural network are described by the fully connected layer of the last layer of the network.
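One possible realization of such a network is sketched below in PyTorch, which the patent does not mandate: four convolutional layers, three down-sampling (max-pooling) layers and a fully connected layer whose activations serve as the extracted vehicle feature. Channel counts, input size and the 49-way output are assumptions.

```python
import torch
import torch.nn as nn

class VehicleNet(nn.Module):
    """Four convolutional layers, three down-sampling layers, one FC feature layer."""
    def __init__(self, num_classes=49, feature_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # conv 1 + down-sample 1
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # conv 2 + down-sample 2
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv 3 + down-sample 3
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),                  # conv 4
        )
        # Fully connected layer: its activations are the extracted vehicle feature.
        self.fc = nn.Linear(128 * 8 * 8, feature_dim)      # assumes 64x64 RGB inputs
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, x):
        x = self.features(x)
        feature = torch.relu(self.fc(torch.flatten(x, 1)))
        return self.classifier(feature), feature           # class logits and feature vector
```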
Furthermore, the preset area of the vehicle image is the lower half of the vehicle image. Since the upper half of the vehicle contains irrelevant areas such as windows, selecting the lower half effectively tolerates deviations of the vehicle and improves description accuracy.
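A trivial sketch of taking the lower half of a detected vehicle image as the predetermined region (assuming a height x width x channels array layout):

```python
def lower_half(vehicle_image):
    """Return the lower half of a detected vehicle image."""
    height = vehicle_image.shape[0]
    return vehicle_image[height // 2:, :]
```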
Furthermore, the classifier is selected according to whether the classes of the vehicle pictures are mutually exclusive. Each training sample in the disclosed method carries only one class label, so a linear classifier such as a SoftMax classifier can be chosen, which classifies well given robust features. After the features and corresponding categories of the image blocks in the preset area of the multiple vehicle pictures have been extracted with the trained deep convolutional neural network, the parameters of the SoftMax classifier are trained to obtain a classifier able to discriminate the vehicle categories. In the testing stage, the features of the image block of the preset area in the vehicle picture to be recognized are obtained with the trained deep convolutional neural network and imported into the trained linear classifier to determine at least the angle and the appearance of the vehicle.
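For illustration, a SoftMax classifier over the CNN features can be sketched as a multinomial logistic regression; scikit-learn is an assumption, not the patent's tooling.

```python
from sklearn.linear_model import LogisticRegression

def train_softmax_classifier(cnn_features, joint_labels):
    """cnn_features: (n_samples, feature_dim) array taken from the trained network's FC layer."""
    clf = LogisticRegression(max_iter=1000)   # multinomial (SoftMax) regression over the classes
    clf.fit(cnn_features, joint_labels)
    return clf

# Test stage: class probabilities for the picture to be recognized, then the most probable class.
# probabilities = clf.predict_proba(features_of_picture_to_identify)
# prediction = clf.classes_[probabilities.argmax(axis=1)]
```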
In the illustrated embodiment, the obtained multi-class vehicle pictures may contain some classification errors. For example, a vehicle head moving to the left may be misjudged as a vehicle head moving to the right, and such errors cause imbalance among the picture classes. When the classes are unbalanced, for example with many pictures in the forward-angle category and few in the other categories, a newly input vehicle picture is likely to have a majority of its K nearest pictures in the forward-angle category, causing the classifier to classify the new picture incorrectly. Classification errors may also occur for the vehicle appearance features, so the classified data need to be checked and erroneous classifications deleted or corrected before training.
Further, because the vehicle pictures classified by the KNN algorithm may contain duplicates, inconsistencies and the like, and to avoid the overfitting problem caused by data imbalance, the data must be preprocessed before training on the classified vehicle pictures. Preprocessing includes data cleaning, data integration, data transformation, data reduction and the like, for example randomly scaling, cropping or duplicating the pictures of under-represented categories so that their numbers are roughly equal to those of the other categories.
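A minimal sketch of the balancing step follows, assuming the simplest variant (random duplication of minority-class pictures until all categories are roughly the same size); random scaling or cropping could be applied to the duplicates in the same loop.

```python
import random
from collections import defaultdict

def balance_by_duplication(pictures, labels):
    """Duplicate pictures of under-represented categories until all categories match the largest."""
    groups = defaultdict(list)
    for picture, label in zip(pictures, labels):
        groups[label].append(picture)
    target = max(len(group) for group in groups.values())
    balanced_pictures, balanced_labels = [], []
    for label, group in groups.items():
        padded = group + [random.choice(group) for _ in range(target - len(group))]
        balanced_pictures.extend(padded)
        balanced_labels.extend([label] * target)
    return balanced_pictures, balanced_labels
```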
Furthermore, before training on the classified vehicle pictures, for the high-resolution pictures classified by the KNN algorithm, a principal component analysis algorithm is used to reduce the dimensionality of the classified multi-class vehicle pictures, together with geometric corrections such as rotation and scaling, so that all vehicle pictures are aligned, which facilitates training.
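A hedged sketch of the principal component analysis step, assuming scikit-learn's PCA applied to vectorized pictures; the component count is an illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_dimensionality(flattened_pictures, n_components=128):
    """flattened_pictures: (n_pictures, n_pixels) array of vectorized, geometrically aligned images."""
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(np.asarray(flattened_pictures))
    return reduced, pca   # keep the fitted PCA to transform new pictures the same way
```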
The method provided by this embodiment extracts the image features of the vehicle from the vehicle picture, classifies them according to preset types such as the angle and the appearance of the vehicle, trains the deep convolutional neural network and the classifier on the classified data, and then detects and identifies the driving-category features of vehicles in pictures to be recognized. The method can greatly improve the management efficiency of an intelligent traffic management system: for any vehicle image in multimedia (such as video and images) in the traffic management system, the trained deep convolutional neural network and classifier can directly provide the driving feature information of the vehicle, greatly saving system processing time. The method has strong anti-interference capability, effectively solves the problem of misidentified driving information caused by human judgment errors or external interference in existing traffic vehicle recognition scenarios, and has wide application prospects.
Referring to fig. 5, a schematic structural diagram of a system for detecting information on a traveling vehicle according to an embodiment of the present invention is shown, including:
the classification module is used for classifying the vehicle pictures at least according to the vehicle angles and the vehicle appearances so as to generate a plurality of vehicle pictures of various categories;
the first training module is used for training the deep convolutional neural network based on a plurality of vehicle pictures;
the second training module is used for at least acquiring vehicle angle characteristics and vehicle appearance characteristics of image blocks in a preset area in a plurality of vehicle pictures by using the trained deep convolutional neural network, and training a linear classifier;
and the determining module is used for at least acquiring the vehicle angle characteristic and the vehicle appearance characteristic of the image block of the preset area in the vehicle picture to be recognized by utilizing the trained deep convolutional neural network, importing the trained linear classifier for judgment, and at least determining the angle of the vehicle and the appearance of the vehicle.
In an alternative embodiment, the vehicle appearance comprises a vehicle color,
the classification module is used for:
the vehicle pictures are classified according to the vehicle angles of the first quantity category and the vehicle colors of the second quantity category to generate a plurality of vehicle pictures of the first quantity multiplied by the second quantity and the plurality of categories.
In an alternative embodiment, the first number and the second number are both seven, and the categories of vehicle angles include: the device comprises a forward vehicle head, a backward vehicle tail, a left vehicle head, a left vehicle tail, a right vehicle head, a right vehicle tail and a lateral vehicle body; the categories of vehicle colors include: red, yellow, blue, green, white, black, brown.
In an alternative embodiment, the second training module comprises (see fig. 6):
the feature extraction component is used for extracting features of image blocks in a preset area in the multiple vehicle pictures by using the trained deep convolutional neural network, wherein the features of the image blocks at least comprise gradient features and edge texture features;
and the characteristic training component is used for determining at least an angle characteristic of the vehicle and an appearance characteristic of the vehicle based on the extracted characteristics of the image blocks, and training the linear classifier by using at least the angle characteristic of the vehicle and the appearance characteristic of the vehicle.
In an alternative embodiment, the predetermined area in the picture of the vehicle is the lower half of the picture of the vehicle.
The system provided by the embodiment of the invention extracts the image features of the relevant vehicle from the vehicle picture, classifies them according to preset types such as the angle and the appearance of the vehicle, trains a classifier for recognizing the various vehicle angle and appearance features on the classified data, and uses the deep convolutional neural network and the classifier to recognize the category features in the vehicle picture to be recognized. The system can greatly improve the management efficiency of an intelligent traffic management system: for any vehicle image in multimedia (such as video and images) in the traffic management system, the driving feature information of the vehicle can be obtained directly through the trained deep convolutional neural network and classifier without separately detecting the vehicle, greatly saving system processing time. The system has strong anti-interference capability, effectively solves the problem of misidentified driving feature information caused by human judgment errors or external interference in existing traffic vehicle recognition scenarios, and has wide application prospects.
An embodiment of the present invention provides a computer storage medium for detecting traveling vehicle information, the computer storage medium storing computer-executable instructions that can execute the method for detecting traveling vehicle information in any of the above method embodiments, configured to:
classifying the vehicle pictures according to at least a vehicle angle and a vehicle appearance to generate a plurality of vehicle pictures of various categories;
training a deep convolutional neural network based on a plurality of vehicle pictures;
utilizing the trained deep convolutional neural network to at least obtain the vehicle angle characteristics and the vehicle appearance characteristics of the image blocks in the preset area in the plurality of vehicle pictures, and training a linear classifier;
and at least acquiring the vehicle angle characteristic and the vehicle appearance characteristic of the image block of the preset area in the vehicle picture to be recognized by using the trained deep convolutional neural network, importing the trained linear classifier for judgment, and at least determining the angle of the vehicle and the appearance of the vehicle.
Fig. 7 is a schematic structural diagram of a detection apparatus for detecting information of a traveling vehicle according to an embodiment of the present invention, and as shown in fig. 7, the detection apparatus includes:
at least one processor; and the number of the first and second groups,
a memory for storing operating instructions executable by at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
executing a classification instruction stored in a memory, and classifying the vehicle pictures at least according to the vehicle angle and the vehicle appearance to generate a plurality of vehicle pictures of various categories;
executing a deep convolutional neural network training instruction, and training the deep convolutional neural network based on a plurality of vehicle pictures;
utilizing the trained deep convolutional neural network to at least obtain the vehicle angle characteristics and the vehicle appearance characteristics of image blocks in a preset area in a plurality of vehicle pictures, executing a training classifier instruction, and training a linear classifier;
and at least acquiring the vehicle angle characteristic and the vehicle appearance characteristic of the image block of the preset area in the vehicle picture to be recognized by using the trained deep convolutional neural network, importing the trained linear classifier for judgment, and at least determining the angle of the vehicle and the appearance of the vehicle.
The detection apparatus shown in fig. 7 further includes an input device and an output device. The processor 410, the memory 420, the input device 430 and the output device 440 may be connected by a bus or by other means; connection by a bus is taken as an example in fig. 7.
the memory 420, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and components, such as program instructions/components (e.g., the classification module, the first training module, the second training module, and the determination module shown in fig. 5) corresponding to the method for detecting information of a moving vehicle according to the embodiment of the present invention. The processor 410 executes various functional applications of the server and data processing by executing the nonvolatile software programs, instructions and components stored in the memory 420, that is, implements the method for detecting the traveling vehicle information according to the above-described method embodiment.
The memory 420 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store angular characteristics of the vehicle image, appearance characteristic information, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The input device 430 is used to input the vehicle pictures for training the deep convolutional neural network and the classifier, as well as the vehicle picture to be recognized. The output device 440 may include a display device such as a display screen.
The one or more modules are stored in the memory 420 and, when executed by the one or more processors 410, perform the method for detecting traveling vehicle information in any of the method embodiments described above.
In an embodiment of the detection device, the vehicle appearance feature is a vehicle color feature, and the processor is configured to:
the vehicle pictures are classified according to the vehicle angles of the first quantity category and the vehicle colors of the second quantity category to generate a plurality of vehicle pictures of the first quantity multiplied by the second quantity and the plurality of categories.
In an embodiment of the detection apparatus, the processor is configured to:
extracting the characteristics of image blocks in a preset area in a plurality of vehicle pictures by using the trained deep convolutional neural network, wherein the characteristics of the image blocks at least comprise gradient characteristics and edge texture characteristics;
and determining at least an angle characteristic of the vehicle and an appearance characteristic of the vehicle based on the extracted features of the image blocks, and training a linear classifier by using at least the angle characteristic of the vehicle and the appearance characteristic of the vehicle.
The product can execute the method provided by the embodiment of the invention, and has corresponding functional components and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the method provided by the embodiment of the present invention.
The detection device for detecting the information of the running vehicle of the embodiment of the invention exists in various forms including, but not limited to:
(1) a mobile communication device: the device is characterized by having a mobile communication function and mainly aims to provide detection of the driving angle and the appearance of the vehicle. Such terminals include: smart phones (e.g., iPhone), etc.
(2) Ultra mobile personal computer device: the equipment belongs to the category of personal computers, has calculation and processing functions and generally has the characteristic of mobile internet access. Such terminals include: PDA, MID, and UMPC devices, etc., such as ipads.
(3) A portable device: such devices may be used to detect vehicle angle and appearance characteristic information. This type of device comprises: vehicle-mounted devices, and the like.
(4) A server: the device for providing the computing service comprises a processor, a hard disk, a memory, a system bus and the like, and the server is similar to a general computer architecture, but has higher requirements on processing capacity, stability, reliability, safety, expandability, manageability and the like because of the need of providing high-reliability service.
(5) Other electronic devices with functions of detecting vehicle angles and appearance characteristics.
The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the components can be selected according to actual needs to achieve the purpose of the solution of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method of detecting traveling vehicle information, comprising:
classifying the vehicle pictures according to at least a vehicle angle and a vehicle appearance to generate a plurality of vehicle pictures of various categories;
training a deep convolutional neural network based on the plurality of vehicle pictures;
utilizing the trained deep convolutional neural network to at least obtain the vehicle angle characteristics and the vehicle appearance characteristics of the image blocks in the preset area in the plurality of vehicle pictures, and training a linear classifier;
and at least acquiring the vehicle angle characteristic and the vehicle appearance characteristic of the image block of the preset area in the vehicle picture to be recognized by utilizing the trained deep convolutional neural network, importing the trained linear classifier for discrimination, and at least determining the angle of the vehicle and the appearance of the vehicle.
2. The method of claim 1, wherein the vehicle appearance comprises a vehicle color,
the classifying the vehicle pictures according to at least a vehicle angle and a vehicle appearance to generate a plurality of vehicle pictures of a plurality of categories comprises:
the vehicle pictures are classified according to the vehicle angles of the first quantity category and the vehicle colors of the second quantity category to generate a plurality of vehicle pictures of the first quantity multiplied by the second quantity and the plurality of categories.
3. The method of claim 2, wherein the first number and the second number are both seven, the category of vehicle angles comprising: the device comprises a forward vehicle head, a backward vehicle tail, a left vehicle head, a left vehicle tail, a right vehicle head, a right vehicle tail and a lateral vehicle body; the categories of the vehicle color include: red, yellow, blue, green, white, black, brown.
4. The method of claim 1, wherein the obtaining at least vehicle angle features and vehicle appearance features of image blocks of a predetermined area in the plurality of vehicle pictures by using the trained deep convolutional neural network, and the training of the linear classifier comprises:
extracting the features of image blocks in a preset area in the plurality of vehicle pictures by using the trained deep convolutional neural network, wherein the features of the image blocks at least comprise gradient features and edge texture features;
and determining at least an angle characteristic of the vehicle and an appearance characteristic of the vehicle based on the extracted features of the image blocks, and training a linear classifier by using at least the vehicle angle characteristic and the vehicle appearance characteristic.
5. The method according to any one of claims 1-4, wherein the predetermined area in the vehicle picture is a lower half of the vehicle picture.
6. A system for detecting traveling vehicle information, comprising:
the classification module is used for classifying the vehicle pictures at least according to the vehicle angles and the vehicle appearances so as to generate a plurality of vehicle pictures of various categories;
the first training module is used for training the deep convolutional neural network based on the plurality of vehicle pictures;
the second training module is used for at least acquiring the vehicle angle characteristics and the vehicle appearance characteristics of the image blocks in the preset area in the plurality of vehicle pictures by using the trained deep convolutional neural network, and training a linear classifier;
and the determining module is used for at least acquiring the vehicle angle characteristic and the vehicle appearance characteristic of the image block of the preset area in the vehicle picture to be recognized by utilizing the trained deep convolutional neural network, importing the trained linear classifier for judgment, and at least determining the angle of the vehicle and the appearance of the vehicle.
7. The system of claim 6, wherein the vehicle appearance comprises a vehicle color,
the classification module is to: the vehicle pictures are classified according to the vehicle angles of the first quantity category and the vehicle colors of the second quantity category to generate a plurality of vehicle pictures of the first quantity multiplied by the second quantity and the plurality of categories.
8. The system of claim 7, wherein the first number and the second number are each seven, the categories of vehicle angles comprising: the device comprises a forward vehicle head, a backward vehicle tail, a left vehicle head, a left vehicle tail, a right vehicle head, a right vehicle tail and a lateral vehicle body; the categories of the vehicle color include: red, yellow, blue, green, white, black, brown.
9. The system of claim 6, wherein the second training module comprises:
the feature extraction component is used for extracting features of image blocks of a preset area in the plurality of vehicle pictures by using the trained deep convolutional neural network, wherein the features of the image blocks at least comprise gradient features and edge texture features;
and the characteristic training component is used for determining at least an angle characteristic of the vehicle and an appearance characteristic of the vehicle based on the extracted characteristics of the image blocks, and training a linear classifier by using at least the vehicle angle characteristic and the vehicle appearance characteristic.
10. The system according to any one of claims 6-9, wherein the predetermined area in the vehicle picture is a lower half of the vehicle picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610575048.4A CN106257490A (en) | 2016-07-20 | 2016-07-20 | The method and system of detection driving vehicle information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106257490A true CN106257490A (en) | 2016-12-28 |
Family
ID=57713769
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610575048.4A (publication CN106257490A, status Pending) | 2016-07-20 | 2016-07-20 | The method and system of detection driving vehicle information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106257490A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060153459A1 (en) * | 2005-01-10 | 2006-07-13 | Yan Zhang | Object classification method for a collision warning system |
CN104102900A (en) * | 2014-06-30 | 2014-10-15 | 南京信息工程大学 | Vehicle identification system |
CN104463241A (en) * | 2014-10-31 | 2015-03-25 | 北京理工大学 | Vehicle type recognition method in intelligent transportation monitoring system |
CN105243398A (en) * | 2015-09-08 | 2016-01-13 | 西安交通大学 | Method of improving performance of convolutional neural network based on linear discriminant analysis criterion |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110503126A (en) * | 2018-05-18 | 2019-11-26 | 罗伯特·博世有限公司 | Method and apparatus for improving the training of classifier |
CN109063768A (en) * | 2018-08-01 | 2018-12-21 | 北京旷视科技有限公司 | Vehicle recognition methods, apparatus and system again |
CN109034086B (en) * | 2018-08-03 | 2021-03-23 | 北京旷视科技有限公司 | Vehicle weight identification method, device and system |
CN109034086A (en) * | 2018-08-03 | 2018-12-18 | 北京旷视科技有限公司 | Vehicle recognition methods, apparatus and system again |
CN109190639A (en) * | 2018-08-16 | 2019-01-11 | 新智数字科技有限公司 | A kind of vehicle color identification method, apparatus and system |
CN110569692A (en) * | 2018-08-16 | 2019-12-13 | 阿里巴巴集团控股有限公司 | multi-vehicle identification method, device and equipment |
CN109272504A (en) * | 2018-10-17 | 2019-01-25 | 广汽丰田汽车有限公司 | The detection of vehicle bumps defect, retroactive method, apparatus and system |
CN110163910B (en) * | 2019-03-22 | 2021-09-28 | 腾讯科技(深圳)有限公司 | Object positioning method, device, computer equipment and storage medium |
CN110163910A (en) * | 2019-03-22 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Subject localization method, device, computer equipment and storage medium |
CN111339834A (en) * | 2020-02-04 | 2020-06-26 | 浙江大华技术股份有限公司 | Method for recognizing vehicle traveling direction, computer device, and storage medium |
CN111339834B (en) * | 2020-02-04 | 2023-06-02 | 浙江大华技术股份有限公司 | Method for identifying vehicle driving direction, computer device and storage medium |
CN111917766A (en) * | 2020-07-29 | 2020-11-10 | 江西科技学院 | Method for detecting communication abnormity of vehicle-mounted network |
CN111917766B (en) * | 2020-07-29 | 2022-10-18 | 江西科技学院 | Method for detecting communication abnormity of vehicle-mounted network |
CN113095266A (en) * | 2021-04-19 | 2021-07-09 | 北京经纬恒润科技股份有限公司 | Angle identification method, device and equipment |
CN113065533A (en) * | 2021-06-01 | 2021-07-02 | 北京达佳互联信息技术有限公司 | Feature extraction model generation method and device, electronic equipment and storage medium |
CN117542003A (en) * | 2024-01-08 | 2024-02-09 | 大连天成电子有限公司 | Freight train model judging method based on image feature analysis |
CN117542003B (en) * | 2024-01-08 | 2024-04-02 | 大连天成电子有限公司 | Freight train model judging method based on image feature analysis |
CN118413744A (en) * | 2024-07-01 | 2024-07-30 | 成都建工路桥建设有限公司 | Automatic highway inspection system |
CN118413744B (en) * | 2024-07-01 | 2024-09-03 | 成都建工路桥建设有限公司 | Automatic highway inspection system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106257490A (en) | The method and system of detection driving vehicle information | |
Wang et al. | An effective method for plate number recognition | |
US8447139B2 (en) | Object recognition using Haar features and histograms of oriented gradients | |
CN102609686B (en) | Pedestrian detection method | |
KR101596299B1 (en) | Apparatus and Method for recognizing traffic sign board | |
CN108875600A (en) | A kind of information of vehicles detection and tracking method, apparatus and computer storage medium based on YOLO | |
CN109190444B (en) | Method for realizing video-based toll lane vehicle feature recognition system | |
CN106295541A (en) | Vehicle type recognition method and system | |
Rios-Cabrera et al. | Efficient multi-camera vehicle detection, tracking, and identification in a tunnel surveillance application | |
CN112381775A (en) | Image tampering detection method, terminal device and storage medium | |
Gonçalves et al. | License plate recognition based on temporal redundancy | |
Yousef et al. | SIFT based automatic number plate recognition | |
Ye et al. | A two-stage real-time YOLOv2-based road marking detector with lightweight spatial transformation-invariant classification | |
Hechri et al. | Automatic detection and recognition of road sign for driver assistance system | |
Saleh et al. | Traffic signs recognition and distance estimation using a monocular camera | |
Yang et al. | A vehicle license plate recognition system based on fixed color collocation | |
Amato et al. | Moving cast shadows detection methods for video surveillance applications | |
CN113159024A (en) | License plate recognition technology based on improved YOLOv4 | |
Grbić et al. | Automatic vision-based parking slot detection and occupancy classification | |
Yao et al. | Coupled multivehicle detection and classification with prior objectness measure | |
Chen et al. | Robust and real-time traffic light recognition based on hierarchical vision architecture | |
Mammeri et al. | North-American speed limit sign detection and recognition for smart cars | |
Pirgazi et al. | An End‐to‐End Deep Learning Approach for Plate Recognition in Intelligent Transportation Systems | |
Shang et al. | A novel method for vehicle headlights detection using salient region segmentation and PHOG feature | |
Emami et al. | Real time vehicle make and model recognition based on hierarchical classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20161228 |