CN115661720B - Target tracking and identifying method and system for shielded vehicle - Google Patents

Info

Publication number: CN115661720B
Authority: CN (China)
Prior art keywords: target vehicle, vehicle, target, image data, frame
Prior art date: 2022-11-10
Legal status: Active (granted)
Application number: CN202211407202.9A
Other languages: Chinese (zh)
Other versions: CN115661720A (en)
Inventors: 余劲, 蔡越
Current Assignee: Nanjing Zhilan Xinlian Information Technology Co., Ltd.
Original Assignee: Nanjing Zhilan Xinlian Information Technology Co., Ltd.
Filing date: 2022-11-10
Application filed by Nanjing Zhilan Xinlian Information Technology Co., Ltd.
Priority to CN202211407202.9A
Publication of CN115661720A: 2023-01-31
Application granted; publication of CN115661720B: 2024-07-02

Abstract

The invention provides a target tracking and identification method and system for an occluded vehicle, belonging to the technical field of image data processing. The method comprises the following steps: constructing models for data analysis; capturing video data of a target vehicle in motion; dividing the video data into units of video frames; traversing the video data, reading each frame of image data, and analyzing the position of the vehicle in the image data to obtain the motion features and visual appearance features of the target vehicle; judging whether the target vehicle is detected in the current frame; if so, continuing to read the next frame of video data; if not, predicting the position of the target vehicle in the current frame based on the obtained motion features; and summarizing the vehicle positions in each frame of image data to obtain the complete driving trajectory of the vehicle. By predicting the occluded position of the target vehicle, the likely driving trajectory of the occluded vehicle can be judged effectively, reducing tracking loss of the target vehicle.

Description

Target tracking and identifying method and system for shielded vehicle
Technical Field
The invention belongs to the technical field of image data processing, and particularly relates to a target tracking and identification method and system for an occluded vehicle.
Background
In the development of intelligent transportation, real-time tracking of target vehicles can provide effective driving information for traffic control, so vehicle tracking technology occupies a non-negligible position in traffic management. For practical vehicle tracking requirements, the prior art usually adopts target image data analysis to classify and identify the target objects in a picture.
However, in practical applications, complex driving environments often cause the tracked target vehicle to be partially or fully occluded, so that the tracked target is lost and the robustness of real-time tracking is reduced.
Disclosure of Invention
The invention aims to provide a target tracking and identification method and system for an occluded vehicle, so as to solve the above problems in the prior art. By predicting the occluded position of the target vehicle, the likely driving trajectory of the vehicle after occlusion is judged effectively, reducing tracking loss after occlusion.
The technical scheme is as follows: in a first aspect, a target tracking and identification method for an occluded vehicle is provided, which specifically comprises the following steps:
Step 1, constructing a target vehicle detection model and a trajectory prediction model for data analysis;
to improve the performance of the target vehicle detection model and the trajectory prediction model, the built models are first trained before data analysis is performed.
For the target vehicle detection model, a classification loss function is adopted to optimize its learning capacity before target vehicle detection is executed.
For the trajectory prediction model, before trajectory prediction is executed, the error between the prediction frame and the bounding box of the actual target is judged during training through the Mahalanobis distance between them and the cosine distance of the appearance features; the parameters of the Kalman filter are then updated based on the error values.
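As a concrete illustration of this error measure, the sketch below blends a squared Mahalanobis motion term with an appearance cosine term, in the spirit of DeepSORT-style association; the function shape and the weighting factor `lam` are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def association_cost(mean, cov_inv, detection, track_feature, det_feature, lam=0.5):
    # Squared Mahalanobis distance between the predicted state mean and the
    # detection (motion error term).
    d = np.asarray(detection) - np.asarray(mean)
    motion_dist = float(d @ cov_inv @ d)

    # Cosine distance between L2-normalised appearance embeddings
    # (appearance error term).
    a = track_feature / np.linalg.norm(track_feature)
    b = det_feature / np.linalg.norm(det_feature)
    appearance_dist = 1.0 - float(a @ b)

    # Weighted blend of the two terms; lam = 0.5 is an assumption.
    return lam * motion_dist + (1.0 - lam) * appearance_dist
```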
Step 2, capturing video data of the target vehicle in motion through an information acquisition device;
Step 3, dividing the video data into units of video frames;
Step 4, the target vehicle detection model traverses the video data, reads each frame of image data, analyzes the position of the vehicle in the image data, and acquires the motion features and visual appearance features of the target vehicle;
Step 5, judging whether the target vehicle detection model detects the target vehicle in the current frame; if so, continuing to read the next frame of video data; if not, predicting the position of the target vehicle in the current frame with the trajectory prediction model;
Step 6, summarizing the vehicle positions in each frame of image data to obtain the complete driving trajectory of the vehicle.
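Taken together, steps 2 to 6 amount to a per-frame loop with a prediction fallback, as in the sketch below. The `detector` and `predictor` objects and their `detect`/`update`/`predict` methods are hypothetical stand-ins for the two models; only OpenCV's frame reading is real API.

```python
import cv2  # OpenCV, assumed available for frame-by-frame video reading

def track_vehicle(video_path, detector, predictor):
    trajectory = []
    cap = cv2.VideoCapture(video_path)   # step 2: captured video data
    while True:
        ok, frame = cap.read()           # steps 3-4: traverse frame by frame
        if not ok:
            break
        box = detector.detect(frame)     # step 4: vehicle position + features
        if box is not None:              # step 5: was the target detected?
            predictor.update(box)        # keep the motion features current
        else:
            box = predictor.predict()    # occluded: use the predicted position
        trajectory.append(box)           # step 6: accumulate the full track
    cap.release()
    return trajectory
```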
In some implementations of the first aspect, the process of performing target vehicle detection and recognition with the target vehicle detection model includes the following steps:
Step 4.1, the target vehicle detection model receives image data corresponding to a current frame;
In order to improve the accuracy of data analysis, after the image data corresponding to the current frame is obtained, the method further comprises preprocessing the image data captured in special environments. When the actual driving environment of the target vehicle is a low-light environment, the feature information of the target vehicle is weakened; a contrast enhancement operation is therefore performed to improve the contrast of the image data and reduce the difficulty of feature extraction, specifically comprising the following steps:
Step 3.1, receiving the image data divided by frame;
Step 3.2, judging the driving environment of the target vehicle; when it is a low-light environment, jumping to step 3.3; otherwise, jumping to step 4;
Step 3.3, converting the image data from the RGB mode to the HIS mode;
Step 3.4, constructing a brightness adjustment function based on the HIS mode;
Step 3.5, adjusting the brightness of the converted image data with the brightness adjustment function;
Step 3.6, outputting the adjusted image data.
Step 4.2, dividing the received image data into a preset number of grid areas;
Step 4.3, predicting N prediction bounding boxes in the grid areas according to the feature data corresponding to the image data, where N is a natural number;
Step 4.4, judging whether the target vehicle exists in the prediction bounding boxes through the calculated confidence values;
Step 4.5, outputting the analysis result;
Wherein the expression for judging whether the target vehicle exists in the bounding box according to the confidence is:

C = Pr · IoU(pred, truth)

where Pr indicates whether a label of the target vehicle exists in the preset bounding box, taking the value 1 when the label exists and 0 otherwise, and IoU(pred, truth) represents the intersection-over-union of the prediction bounding box and the real bounding box.
The prediction bounding box with the maximum confidence is obtained by traversing the prediction bounding boxes and is taken as the position of the target vehicle in the current frame.
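A minimal sketch of step 4.4 and the traversal above: compute the confidence Pr · IoU for every predicted box and keep the maximum. At inference time the network's predicted confidence plays the role of Pr · IoU, since the real bounding box is unknown; the `pr` and `iou` fields here are illustrative assumptions.

```python
def select_target_box(boxes):
    best_box, best_conf = None, 0.0
    for box in boxes:                    # traverse all N predicted boxes
        conf = box["pr"] * box["iou"]    # confidence C = Pr * IoU
        if conf > best_conf:
            best_box, best_conf = box, conf
    return best_box                      # position of the target vehicle
```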
When the target vehicle is occluded, the process of predicting the target vehicle position with the trajectory prediction model specifically comprises the following steps:
Step 5.1, the trajectory prediction model receives the motion features and visual appearance features of the target vehicle extracted from the previous frame;
Step 5.2, constructing a Kalman filter, associating the extracted motion features and visual appearance features, and predicting the current position of the target vehicle; the Kalman filter predicts the target vehicle position in the current frame through the following steps:
Step 5.2.1, taking the received feature information as the initial condition;
Step 5.2.2, constructing a state transition matrix;
Step 5.2.3, estimating the mean and covariance of the motion state of the target vehicle using the state transition function;
X_t = F X_{t-1}
P_t = F P_{t-1} F^T + Q
where X_t represents the state (features and position) of the target vehicle at time t; X_{t-1} represents the mean at time t-1; F represents the state transition matrix; Q represents the covariance matrix of the Gaussian noise; P_t denotes the covariance matrix corresponding to X_t;
Step 5.2.4, obtaining the position of the detection frame of the predicted target vehicle according to the estimated values.
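Steps 5.2.2 to 5.2.4 in code form: one Kalman prediction step implementing the two propagation equations above. The constant-velocity state layout used for F is an assumption; the patent fixes only the equations themselves.

```python
import numpy as np

def kalman_predict(x_prev, P_prev, F, Q):
    x_pred = F @ x_prev              # X_t = F X_{t-1}: propagate the mean
    P_pred = F @ P_prev @ F.T + Q    # P_t = F P_{t-1} F^T + Q: propagate covariance
    return x_pred, P_pred

# One plausible constant-velocity layout over [cx, cy, w, h, vx, vy];
# the patent does not fix the state vector, so this F is an assumption.
dt = 1.0
F = np.eye(6)
F[0, 4] = F[1, 5] = dt
```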
In some implementations of the first aspect, for the low contrast between the target vehicle and its surroundings in low-light environments, performing a contrast enhancement operation on the acquired image data effectively improves the vehicle recognition accuracy in low light. Meanwhile, since luminance and chrominance are separated in the HIS color space, it offers a greater advantage than the RGB mode adopted in the related art.
The conversion expressions from the RGB mode to the HIS mode are:

I = (R + G + B) / 3
S = 1 − 3·min(R, G, B) / (R + G + B)
H = θ if B ≤ G, otherwise 360° − θ, with θ = arccos{ [(R − G) + (R − B)] / [2·sqrt((R − G)² + (R − B)(G − B))] }

where R represents red in the RGB mode; G represents green in the RGB mode; B represents blue in the RGB mode; H represents the hue in the HIS mode; I represents the brightness in the HIS mode; S denotes the degree to which the pure color in the HIS mode is diluted by white light.
The brightness adjustment function expression is:

Y = α·I^γ

where Y represents the brightness of the output image; I represents the brightness of the input image; α represents a preset correction coefficient; γ denotes the control coefficient.
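The low-light branch can be sketched end to end under the standard RGB-to-HSI formulas reconstructed above; the default α and γ values (γ < 1 brightens dark images) are illustrative assumptions.

```python
import numpy as np

def enhance_low_light(rgb, alpha=1.0, gamma=0.6):
    rgb = rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    i = (r + g + b) / 3.0                                  # intensity I
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))       # hue angle in radians
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)       # hue H

    y = alpha * i ** gamma                                 # Y = alpha * I ** gamma
    return h, s, np.clip(y, 0.0, 1.0)
```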
In a second aspect, a target tracking and recognition system for an occluded vehicle is provided for implementing the target tracking and identification method of the first aspect, and specifically comprises the following modules:
a model construction module for constructing the models for data analysis;
a data capturing module for capturing video data of the target vehicle in motion;
a dividing module for dividing the video data;
a target detection module for detecting, identifying and extracting the features of the target vehicle from the video data;
a trajectory prediction module for predicting the driving trajectory of the target vehicle;
a trajectory integration module for integrating the target vehicle positions to form the driving trajectory;
and a trajectory output module for outputting the driving trajectory.
In some implementations of the second aspect, to meet the tracking requirements of the target vehicle, the model construction module is first used to build the target vehicle detection model and the trajectory prediction model for subsequent data analysis. In practical applications, the information acquisition device in the data capturing module captures video data of the vehicle in motion, and the dividing module divides the video data according to the analysis requirements.
Based on the divided video data, the target vehicle detection model in the target detection module detects and identifies the target vehicle and extracts features, and the extracted data serve as the basis for subsequent analysis. Because the target vehicle may be occluded during actual detection, the trajectory prediction model in the trajectory prediction module predicts the position of the target vehicle while occluded, based on the feature data extracted by the target detection module.
The trajectory integration module integrates the detected and predicted target vehicle positions to obtain the complete driving trajectory of the vehicle; finally, the trajectory output module outputs the integration result.
In a third aspect, an apparatus for target tracking identification of an occluded vehicle is provided, the apparatus comprising: a processor and a memory storing computer program instructions.
The processor reads and executes the computer program instructions to implement a target tracking identification method of the occluded vehicle.
In a fourth aspect, a computer-readable storage medium having computer program instructions stored thereon is presented. The computer program instructions, when executed by the processor, implement a target tracking recognition method for an occluded vehicle.
Beneficial effects: the invention provides a target tracking and identification method and system for an occluded vehicle. The target vehicle is detected and identified by the constructed target vehicle detection model, and the vehicle positions at different time points are summarized to obtain the driving trajectory, thereby realizing vehicle tracking. In addition, for the possibility that the target vehicle is occluded during tracking, the proposed trajectory prediction model predicts the position of the target vehicle in video frames where it is occluded.
Drawings
FIG. 1 is a flow chart of data processing according to the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without one or more of these details. In other instances, well-known features have not been described in detail in order to avoid obscuring the invention.
The applicant believes that in vehicle tracking applications, due to practical environmental factors such as illumination, buildings, pedestrians, and trees, the target is often occluded, which causes the target to be lost. Aiming at the tracking loss caused by occlusion of a target vehicle, a target tracking and identification method and system for an occluded vehicle are provided; by predicting the driving path of the vehicle, the likely trajectory of the occluded vehicle is judged effectively, reducing tracking loss after occlusion.
Example 1
In one embodiment, for the phenomenon that a vehicle is occluded in actual tracking applications, a target tracking and identification method for an occluded vehicle is provided to predict the driving trajectory of the vehicle and thus keep tracking it through the occlusion. As shown in fig. 1, the method specifically includes the following steps:
Step 1, constructing a target vehicle detection model and a trajectory prediction model for data analysis;
Step 2, capturing video data of the target vehicle in motion through an information acquisition device;
Step 3, dividing the video data into units of video frames;
Step 4, the target vehicle detection model traverses the video data, reads each frame of image data, analyzes the position of the vehicle in the image data, and obtains the motion features and visual appearance features of the target vehicle.
Specifically, the process of the target vehicle detection model executing detection and recognition includes the following steps: first, receiving the image data corresponding to the current frame; second, dividing the received image data into a preset number of grid areas; third, predicting N prediction bounding boxes in the grid areas, where N is a natural number; fourth, calculating the confidence values of all obtained prediction bounding boxes, obtaining the prediction bounding box with the maximum confidence by traversal, and taking it as the position of the target vehicle in the current frame; finally, outputting the analysis result.
Wherein the expression for judging whether the target vehicle exists in the bounding box according to the confidence is:

C = Pr · IoU(pred, truth)

where Pr indicates whether a label of the target vehicle exists in the preset bounding box, taking the value 1 when the label exists and 0 otherwise, and IoU(pred, truth) represents the intersection-over-union of the prediction bounding box and the real bounding box.
In a further embodiment, a classification loss function is used to optimize the learning ability of the target vehicle detection model in order to improve its performance. The classification loss function adopted in the preferred embodiment is:

L = Σ_{n=1}^{N} || y_n − ŷ_n ||₂²

where N represents the number of targets, n the index of the current target, the superscript 2 the square of the norm, the subscript 2 the L2 norm over the vector elements, y_n the position parameter corresponding to the class division of the current image frame taken as a calculation sample in the deep convolutional network, and ŷ_n the position parameter corresponding to the division class of the target image frame in the deep convolutional network. In a further embodiment, based on the adopted classification loss function, a binary cross-entropy loss function is further provided, and a parameter factor is added so that the target vehicle detection model places its attention on difficult, misclassified samples. The binary cross-entropy loss is:

L = −[ y · log(y') + (1 − y) · log(1 − y') ]

where y' represents the output after the activation function, with values between 0 and 1. With ordinary cross-entropy, the larger the output probability for a positive sample, the smaller the loss; for a negative sample, the smaller the output probability, the smaller the loss. The loss therefore iterates slowly over a large number of easy samples and may not be optimized to the optimum. To reduce the loss contribution of easily classified samples, so that the whole network focuses more on difficult, misclassified samples, two factors α and γ are introduced, namely:

FL = −α · (1 − y')^γ · log(y') for positive samples, and −(1 − α) · (y')^γ · log(1 − y') for negative samples

where α represents a balance factor for balancing the importance of positive and negative samples, preferably 0.25, and γ represents a focusing factor that modulates the weight of hard samples.
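A sketch of this loss follows, with γ = 2 taken from the focal-loss literature as an assumption; the patent itself fixes only α = 0.25.

```python
import numpy as np

def focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    # Easy samples (confident, correct) are down-weighted by the modulating
    # factor, so hard, misclassified samples dominate the gradient.
    pos = -alpha * (1.0 - y_pred) ** gamma * np.log(y_pred)          # y = 1 term
    neg = -(1.0 - alpha) * y_pred ** gamma * np.log(1.0 - y_pred)    # y = 0 term
    return np.mean(y_true * pos + (1.0 - y_true) * neg)
```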
Step 5, judging whether the target vehicle detection model detects the target vehicle in the current frame; if so, continuing to read the next frame; if not, predicting the position of the target vehicle in the current frame with the trajectory prediction model based on the obtained motion features;
Specifically, when the target vehicle detection model does not detect the target vehicle, no target vehicle exists in the current frame, that is, the target vehicle is occluded. In order to effectively obtain the driving path of the vehicle, the trajectory prediction model predicts the position of the target vehicle in the current frame based on the obtained motion features, and the predicted position is taken as the target vehicle position in the current frame.
When the target vehicle is occluded, the process of predicting its position with the trajectory prediction model specifically comprises the following steps:
First, the trajectory prediction model receives the motion features and visual appearance features of the target vehicle extracted from the previous frame; second, a Kalman filter is constructed, the extracted motion and visual appearance features are associated, and the current position of the target vehicle is predicted.
The process of predicting the target vehicle position in the current frame by using the Kalman filter specifically comprises the following steps:
Step ①, taking the received characteristic information as an initial condition;
step ②, constructing a state transition matrix;
Step ③, estimating the mean and covariance of the motion state of the target vehicle using the state transition function;
X_t = F X_{t-1}
P_t = F P_{t-1} F^T + Q
where X_t represents the state (features and position) of the target vehicle; X_{t-1} represents the mean at time t-1; F represents the state transition matrix; Q represents the covariance matrix of the Gaussian noise; P_t represents the covariance matrix corresponding to X_t. The state X_{t-1} at time t-1 thus effectively predicts the state X_t at time t, and the covariance matrix P_t at time t is effectively obtained from the covariance matrix P_{t-1} at time t-1 and the system noise matrix Q.
Step ④, obtaining the position of the detection frame of the predicted target vehicle according to the estimated values.
In a further embodiment, in order to improve the performance of the trajectory prediction model, model performance optimization training is further performed. During training, the error between the prediction frame and the bounding box of the actual target is judged through the Mahalanobis distance between them and the cosine distance of the appearance features; the parameters of the Kalman filter are then updated based on the error values.
Step 6, summarizing the vehicle positions in each frame of image data to obtain the complete driving trajectory of the vehicle.
In a further embodiment, the target vehicle detection model comprises a Darknet-53 network, a feature map pyramid (FPN) structure, and residual structures. When the target vehicle detection model is used to locate the target vehicle in the current frame, a spatial pooling module is further provided for the image data input into the model, and the problem of inconsistent image sizes is overcome by adopting a fixed pooling method.
Specifically, the spatial pooling module comprises an input layer, a pooling layer, and a connection layer, wherein the pooling layer is formed by parallel convolution kernels of different scales. Received data passes through the input layer, enters the pooling layers formed by different convolution kernels in parallel, and finally the output data of the pooling layers are integrated by the connection layer.
The pooling operation effectively enlarges the receptive field, and the introduction of the spatial pooling module enables the target vehicle detection model to extract multi-scale depth features with different receptive fields.
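A minimal PyTorch sketch of such a module in the spirit of SPP: parallel pooling branches of different scales whose outputs are concatenated. The kernel sizes and the choice of max pooling are assumptions, since the patent does not specify them.

```python
import torch
import torch.nn as nn

class SpatialPooling(nn.Module):
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        # Stride 1 with half-kernel padding keeps the spatial size, so the
        # branch outputs can be concatenated on the channel dimension.
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # Input branch plus one branch per pooling scale, fused channel-wise
        # (the "connection layer" of the module described above).
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)
```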
In this embodiment, the target vehicle is detected and identified by the constructed target vehicle detection model, and the vehicle positions at different time points are summarized to obtain the driving trajectory, thereby realizing vehicle tracking. In addition, for the possibility that the target vehicle is occluded during tracking, the proposed trajectory prediction model predicts the position of the target vehicle in video frames where it is occluded.
Example two
In a further embodiment based on the first embodiment, low-light environments such as nighttime often affect actual vehicle tracking, making the vehicle information indistinct. In low light, the color features, texture features, and the like of the vehicle are weakened, so the contrast is not obvious and feature extraction becomes more difficult. For low-light application environments, this embodiment performs contrast enhancement on the acquired pictures, improving vehicle recognition accuracy in low light.
Specifically, the collected image data is usually presented in the RGB mode, but since the RGB model still has shortcomings in color rendering, this embodiment preferably converts the RGB mode into the HIS mode, which separates color information more cleanly; then, based on the converted data, the background brightness of the image is adjusted to realize background enhancement and improve the contrast between the target and its surroundings.
The HIS mode separates color information from gray information: the hue component H expresses the attribute of the pure color, the saturation component S measures the degree to which the pure color is diluted by white light, and the intensity component I expresses the brightness of the color.
The expressions for the RGB-to-HIS conversion are:

I = (R + G + B) / 3
S = 1 − 3·min(R, G, B) / (R + G + B)
H = θ if B ≤ G, otherwise 360° − θ, with θ = arccos{ [(R − G) + (R − B)] / [2·sqrt((R − G)² + (R − B)(G − B))] }

where R represents red in the RGB mode; G represents green in the RGB mode; B represents blue in the RGB mode; H represents the hue in the HIS mode; I represents the brightness in the HIS mode; S denotes the degree to which the pure color in the HIS mode is diluted by white light.
After the mode conversion is completed, the brightness of the background in the image is adjusted based on the converted data; the corresponding adjustment expression is:

Y = α·I^γ

where Y represents the brightness of the output image; I represents the brightness of the input image; α represents a preset correction coefficient; γ denotes the control coefficient.
For the low contrast between the target vehicle and its surroundings in low-light environments, this embodiment effectively improves vehicle recognition accuracy in low light by performing contrast enhancement on the acquired image data. Meanwhile, since luminance and chrominance are separated in the HIS color space, it offers a greater advantage than the RGB mode adopted in the related art.
Example III
In one embodiment, a target tracking and recognition system for an occluded vehicle is provided for implementing the target tracking and identification method, specifically comprising the following modules: a model construction module, a data capturing module, a dividing module, a target detection module, a trajectory prediction module, a trajectory integration module, and a trajectory output module.
Specifically, the model construction module is used for constructing the target vehicle detection model and the trajectory prediction model according to the image data analysis requirements; the data capturing module is used for capturing video data of the target vehicle in motion; the dividing module is used for dividing the video data; the target detection module is used for reading the divided video data, detecting and identifying the target vehicle in it, and extracting the corresponding vehicle features; the trajectory prediction module is used for predicting the driving trajectory of the target vehicle with the trajectory prediction model; the trajectory integration module is used for integrating the identified target vehicle positions to obtain the complete driving trajectory of the vehicle; and the trajectory output module is used for outputting the integration result of the trajectory integration module.
In a further embodiment, to meet the tracking requirements of the target vehicle, the model construction module is first employed to build the target vehicle detection model and the trajectory prediction model according to the purpose of the data analysis. The information acquisition device in the data capturing module captures video data of the vehicle in motion, and the dividing module divides the video data according to the analysis requirements. Then, the target detection module detects and identifies the target vehicle in the video data with the target vehicle detection model and extracts features, and the extracted data serve as the basis for subsequent analysis. Because the target vehicle may be occluded during actual detection, the trajectory prediction model in the trajectory prediction module predicts the position of the target vehicle while occluded, based on the feature data extracted by the target detection module. The trajectory integration module integrates the detected and predicted target vehicle positions to obtain the complete driving trajectory of the vehicle; finally, the trajectory output module outputs the integration result.
Example IV
In one embodiment, an object tracking identification device for an occluded vehicle is provided, the device comprising: a processor and a memory storing computer program instructions.
The processor reads and executes the computer program instructions to implement a target tracking identification method of the occluded vehicle.
Example five
In one embodiment, a computer-readable storage medium having computer program instructions stored thereon is presented.
Wherein the computer program instructions, when executed by the processor, implement a target tracking recognition method for an occluded vehicle.
As described above, although the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limiting the invention itself. Various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. A target tracking and identification method for an occluded vehicle, characterized by comprising the following steps:
Step 1, constructing a target vehicle detection model and a trajectory prediction model for data analysis; the target vehicle detection model comprises: a Darknet-53 network, a feature map pyramid (FPN) structure, and residual structures; the structure further includes a spatial pooling module comprising an input layer, a pooling layer, and a connection layer, wherein the pooling layer is formed by parallel convolution kernels of different scales and is used for receiving the data transmitted by the input layer and transmitting the processed data to the connection layer;
step 2, capturing video data of a target vehicle in running through an information acquisition device;
Step 3, dividing the video data into units of video frames; when the actual driving environment of the target vehicle is a low-light environment, the feature information of the target vehicle is weakened; in order to improve the contrast of the image data and reduce the difficulty of feature extraction, a contrast enhancement operation is executed, specifically comprising the following steps:
Step 3.1, receiving the image data divided by frame;
Step 3.2, judging the driving environment of the target vehicle; when it is a low-light environment, jumping to step 3.3; otherwise, jumping to step 4;
Step 3.3, converting the image data from the RGB mode to the HIS mode;
Step 3.4, constructing a brightness adjustment function based on the HIS mode;
Step 3.5, adjusting the brightness of the converted image data with the brightness adjustment function;
Step 3.6, outputting the adjusted image data;
Step 4, the target vehicle detection model traverses the video data, reads each frame of image data, analyzes the position of the vehicle in the image data, and acquires the motion features and visual appearance features of the target vehicle; the process of the target vehicle detection model executing detection and recognition comprises the following steps:
Step 4.1, the target vehicle detection model receives the image data corresponding to the current frame;
Step 4.2, dividing the received image data into a preset number of grid areas;
Step 4.3, predicting N prediction bounding boxes in the grid areas according to the feature data corresponding to the image data, where N is a natural number;
Step 4.4, judging whether the target vehicle exists in the prediction bounding boxes through the calculated confidence values;
Step 4.5, outputting the analysis result;
Wherein the expression for judging whether the target vehicle exists in the bounding box according to the confidence is:

C = Pr · IoU(pred, truth)

where Pr indicates whether a label of the target vehicle exists in the preset bounding box, taking the value 1 when the label exists and 0 otherwise, and IoU(pred, truth) represents the intersection-over-union of the prediction bounding box and the real bounding box;
obtaining the prediction bounding box with the maximum confidence by traversal and taking it as the position of the target vehicle in the current frame;
Step 5, judging whether the target vehicle detection model detects the target vehicle in the current frame; if so, continuing to read the next frame of video data; if not, predicting the position of the target vehicle in the current frame with the trajectory prediction model; when the target vehicle is occluded, the process of predicting the target vehicle position with the trajectory prediction model specifically comprises the following steps:
Step 5.1, the trajectory prediction model receives the motion features and visual appearance features of the target vehicle extracted from the previous frame;
Step 5.2, constructing a Kalman filter, associating the extracted motion features and visual appearance features, and predicting the current position of the target vehicle; the Kalman filter predicts the target vehicle position in the current frame through the following steps:
Step 5.2.1, taking the received feature information as the initial condition;
Step 5.2.2, constructing a state transition matrix;
Step 5.2.3, estimating the mean and covariance of the motion state of the target vehicle using the state transition function;
X_t = F X_{t-1}
P_t = F P_{t-1} F^T + Q
where X_t represents the state (features and position) of the target vehicle at time t; X_{t-1} represents the mean at time t-1; F represents the state transition matrix; Q represents the covariance matrix of the Gaussian noise; P_t denotes the covariance matrix corresponding to X_t;
Step 5.2.4, obtaining the position of the detection frame of the predicted target vehicle according to the estimated values;
Step 6, summarizing the vehicle positions in each frame of image data to obtain the complete driving trajectory of the vehicle;
the conversion expressions from the RGB mode to the HIS mode are:

I = (R + G + B) / 3
S = 1 − 3·min(R, G, B) / (R + G + B)
H = θ if B ≤ G, otherwise 360° − θ, with θ = arccos{ [(R − G) + (R − B)] / [2·sqrt((R − G)² + (R − B)(G − B))] }

where R represents red in the RGB mode; G represents green in the RGB mode; B represents blue in the RGB mode; H represents the hue in the HIS mode; I represents the brightness in the HIS mode; S denotes the degree to which the pure color in the HIS mode is diluted by white light;
the brightness adjustment function expression is:

Y = α·I^γ

where Y represents the brightness of the output image; I represents the brightness of the input image; α represents a preset correction coefficient; γ denotes the control coefficient;
before target vehicle detection is executed, the learning capacity of the target vehicle detection model is optimized by adopting a classification loss function; the classification loss function expression is:

L = −α · (1 − y')^γ · log(y') for positive samples, and −(1 − α) · (y')^γ · log(1 − y') for negative samples

where y' represents the output after the activation function, with values between 0 and 1; α represents a balance factor for balancing the importance of positive and negative samples, and γ represents a focusing factor that modulates the weight of hard samples;
in order to improve the performance of the trajectory prediction model, model performance optimization training is further executed;
during training, the error between the prediction frame and the bounding box of the actual target is judged through the Mahalanobis distance between them and the cosine distance of the appearance features; the parameters of the Kalman filter are then updated based on the error values.
2. A target tracking and recognition system for an occluded vehicle, used for realizing the target tracking and identification method according to claim 1, characterized by comprising the following modules:
a model construction module configured to construct the target vehicle detection model and the trajectory prediction model for image data analysis according to the requirements;
a data capturing module configured to capture video data of the target vehicle in motion using the information acquisition device;
a dividing module configured to divide the video data into units of video frames;
a target detection module configured to perform detection and identification of the target vehicle and feature extraction on the video data using the target vehicle detection model;
a trajectory prediction module configured to predict the driving trajectory of the target vehicle with the trajectory prediction model, based on the features extracted by the target detection module;
a trajectory integration module configured to integrate the identified target vehicle positions and obtain the complete driving trajectory of the vehicle;
and a trajectory output module configured to output the integration result of the trajectory integration module.
3. A target tracking and identification device for an occluded vehicle, the device comprising:
a processor and a memory storing computer program instructions;
the processor reads and executes the computer program instructions to implement the target tracking and identification method for an occluded vehicle as claimed in claim 1.
4. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the target tracking and identification method for an occluded vehicle as claimed in claim 1.
CN202211407202.9A 2022-11-10 Target tracking and identifying method and system for shielded vehicle Active CN115661720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211407202.9A CN115661720B (en) 2022-11-10 Target tracking and identifying method and system for shielded vehicle


Publications (2)

Publication Number Publication Date
CN115661720A CN115661720A (en) 2023-01-31
CN115661720B true CN115661720B (en) 2024-07-02



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287906A (en) * 2020-12-18 2021-01-29 中汽创智科技有限公司 Template matching tracking method and system based on depth feature fusion
CN113674328A (en) * 2021-07-14 2021-11-19 南京邮电大学 Multi-target vehicle tracking method

Similar Documents

Publication Title
CN110956094B (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network
CN111723748B (en) Infrared remote sensing image ship detection method
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN111460968B (en) Unmanned aerial vehicle identification and tracking method and device based on video
CN108615226B (en) Image defogging method based on generation type countermeasure network
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
CN103020992B (en) A kind of video image conspicuousness detection method based on motion color-associations
CN112200045B (en) Remote sensing image target detection model establishment method based on context enhancement and application
CN101826228B (en) Detection method of bus passenger moving objects based on background estimation
CN112949673A (en) Feature fusion target detection and identification method based on global attention
CN113361326B (en) Wisdom power plant management and control system based on computer vision target detection
CN104598924A (en) Target matching detection method
CN105930794A (en) Indoor scene identification method based on cloud computing
CN113052006B (en) Image target detection method, system and readable storage medium based on convolutional neural network
CN110490155B (en) Method for detecting unmanned aerial vehicle in no-fly airspace
CN101739667B (en) Non-downsampling contourlet transformation-based method for enhancing remote sensing image road
CN114580541A (en) Fire disaster video smoke identification method based on time-space domain double channels
CN112418087A (en) Underwater video fish identification method based on neural network
CN111027564A (en) Low-illumination imaging license plate recognition method and device based on deep learning integration
CN114708615A (en) Human body detection method based on image enhancement in low-illumination environment, electronic equipment and storage medium
CN113128308A (en) Pedestrian detection method, device, equipment and medium in port scene
CN115661720B (en) Target tracking and identifying method and system for shielded vehicle
CN106355566A (en) Smoke and flame detection method applied to fixed camera dynamic video sequence

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant