CN120472404A - Adaptive context-aware vehicle target tracking and recognition system - Google Patents
Adaptive context-aware vehicle target tracking and recognition system
- Publication number
- CN120472404A (application number CN202510969127.2A)
- Authority
- CN
- China
- Prior art keywords
- tracking
- value
- module
- context
- parameters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/776—Validation; Performance evaluation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/96—Management of image or video recognition tasks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention relates to the technical field of vehicle target tracking and recognition, and discloses an adaptive context-aware vehicle target tracking and recognition system. The system comprises a multi-mode information acquisition module, a target feature extraction module, a scene dynamic perception module, a context association analysis module and a tracking strategy generation module. The multi-mode information acquisition module acquires multi-dimensional data and triggers the associated modules; the target feature extraction module extracts multiple vehicle features and evaluates trackability; the scene dynamic perception module monitors system parameters and predicts dynamic perception efficiency; the context association analysis module jointly analyzes the evaluation values and generates regulation signals; and the tracking strategy generation module generates tracking maintenance and calculation force distribution parameters based on the regulation signals. The system can adapt to complex dynamic scenes and improves the stability and accuracy of vehicle target tracking and recognition.
Description
Technical Field
The invention relates to the technical field of vehicle target tracking and recognition, and in particular to an adaptive context-aware vehicle target tracking and recognition system.
Background
With the rapid development of intelligent transportation systems, vehicle target tracking and recognition technology is widely applied in scenarios such as autonomous driving, traffic monitoring and vehicle-road coordination. However, current vehicle target tracking and recognition systems still face many challenges in complex dynamic environments. In practice, traffic scenes often exhibit a high degree of dynamics and uncertainty. For example, on urban roads during peak hours, vehicles are dense and their driving states change frequently, with acceleration, deceleration, lane changing and overtaking occurring constantly; at the same time, weather conditions (such as rain, heavy fog and strong light) can significantly alter the visual characteristics of vehicles. Most traditional tracking systems rely on a single vision sensor for data acquisition, making it difficult to comprehensively capture the dynamic information of a vehicle; under occlusion, abrupt illumination changes and similar conditions, tracking drift or even target loss easily occurs.
Some prior-art systems have attempted to introduce multi-sensor fusion techniques, but remain limited in data processing and feature extraction. Most adopt a fixed feature extraction mode, extracting only part of the vehicle's features, and cannot dynamically adjust the extraction strategy as the scene changes. For example, some systems focus only on the contour and color features of a vehicle and ignore the recognition value of texture detail features in complex environments, which makes it difficult to accurately distinguish the target vehicle in scenes containing many similar vehicles and reduces tracking accuracy.
Existing systems also lack flexibility in allocating computing resources when processing real-time data. As the number of vehicles grows and data acquisition precision improves, the system's real-time data flow increases rapidly, and local computing load and communication delay become increasingly prominent. Conventional systems generally adopt a fixed calculation power distribution mode and cannot dynamically rebalance edge and cloud resources according to real-time data flow and computing load, which reduces response speed, increases tracking delay, and can even cause data processing congestion under high load.
Existing systems also make insufficient use of context information and lack deep perception and association analysis of dynamic scene changes. For example, when a system operates under different communication environments or computing conditions, its tracking performance can be significantly affected, yet existing systems often cannot perceive these changes in time and adjust accordingly. Context information includes ambient illumination, road conditions and communication signal strength, all closely related to the vehicle's tracking state; ignoring it reduces the system's adaptability in complex scenes and makes a stable tracking effect difficult to maintain.
The strategy generation mechanism of existing tracking systems is likewise monolithic and lacks multi-level optimization capability. When abnormal conditions such as blurred target features or occlusion occur during tracking, the system cannot quickly generate an effective coping strategy and can only fall back on a preset fixed algorithm, so tracking stability and robustness are insufficient. For example, in a highway scene, when a fast-moving vehicle is briefly occluded, an existing system may lose the target because tracking parameters cannot be adjusted in time, degrading subsequent tracking.
In summary, current vehicle target tracking and recognition systems have obvious shortcomings in dynamic scene adaptability, multi-mode information fusion, dynamic calculation allocation and context association analysis, and struggle to meet the requirements of high-precision, high-stability vehicle target tracking and recognition in complex traffic environments.
Disclosure of Invention
The present invention is directed to an adaptive context-aware vehicle target tracking and recognition system that solves the above-mentioned problems.
To achieve the above object, the present invention provides an adaptive context-aware vehicle target tracking recognition system, the system comprising:
the multi-mode information acquisition module is used for performing multi-dimensional acquisition of the visual feature data, motion parameter data and environmental context information of the vehicle to obtain a scene multi-source data set, quantitatively identifying an abnormal tracking state based on the scene multi-source data set, generating a tracking early-warning signal, and triggering the scene dynamic perception module and the context association analysis module according to the generated tracking early-warning signal;
the target feature extraction module is used for extracting the contour feature parameters, color distribution features and texture detail features of the vehicle, and dynamically evaluating the trackability of the target to obtain a feature evaluation reference value;
the scene dynamic perception module is used for monitoring the real-time data flow, local computing load and communication delay parameters of the system, and performing predictive analysis of the scene's dynamic perception efficiency to obtain a dynamic perception evaluation value;
the context association analysis module is used for receiving the feature evaluation reference value and the dynamic perception evaluation value, performing joint analysis of the system's context association efficiency, and generating an edge regulation signal or a cloud reinforcement signal;
the tracking strategy generation module is used for receiving the edge regulation signal and the cloud reinforcement signal, performing multi-level optimization strategy analysis, and generating tracking maintenance parameters and calculation force distribution parameters for the vehicle.
Preferably, quantitatively identifying the abnormal tracking state based on the scene multi-source data set includes:
extracting the target ambiguity, color offset and contour integrity from the visual feature data of the vehicle to obtain an ambiguity value, a color offset value and a contour integrity value, and performing a weighted calculation on these three values to obtain a tracking anomaly contribution degree;
extracting the signal interruption frequency, parameter matching degree and data packet loss rate from the sensor communication state parameters of the system to obtain a communication interruption value, a parameter matching value and a data packet loss value, and marking these as tracking stability feature values; setting a tracking stability threshold and comparing the feature values against it, marking a sensor as abnormal when its feature value is smaller than the threshold; counting the ratio of the number of abnormal sensors to the total number of sensors in the current system to obtain a communication anomaly rate; and extracting the illumination intensity change and occlusion occurrence frequency from the environmental context information and performing a dynamic simulation calculation to obtain an environmental interference evaluation value;
multiplying the tracking anomaly contribution degree, the communication anomaly rate and the environmental interference evaluation value by their corresponding weight coefficients and summing the results to obtain an abnormal-state fusion value; comparing the fusion value with a preset anomaly threshold, and generating a tracking early-warning signal if the fusion value is higher than the threshold.
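As a non-limiting illustration, the weighted fusion and threshold comparison described above can be sketched as follows. The weight coefficients and the anomaly threshold are hypothetical example values, not values prescribed by this disclosure:

```python
def fuse_anomaly_state(tracking_contribution: float,
                       comm_anomaly_rate: float,
                       env_interference: float,
                       weights=(0.4, 0.3, 0.3),      # hypothetical weight coefficients
                       anomaly_threshold: float = 0.6):  # hypothetical preset threshold
    """Weighted sum of the three anomaly indicators; returns the fusion
    value and whether a tracking early-warning signal should be raised."""
    w1, w2, w3 = weights
    fusion = (w1 * tracking_contribution
              + w2 * comm_anomaly_rate
              + w3 * env_interference)
    return fusion, fusion > anomaly_threshold
```

With inputs 0.8, 0.5 and 0.7 the fusion value is 0.68, which exceeds the example threshold of 0.6, so an early-warning signal would be generated.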
Preferably, dynamically evaluating the trackability of the target includes:
performing spatial distribution analysis on the contour feature parameters of the vehicle to generate a corresponding contour change distribution map and occlusion risk assessment map;
extracting a standard tracked-vehicle contour reference distribution map from the system database, topologically comparing the target vehicle's contour change distribution map with the reference map, calculating their contour matching degree, and normalizing it to obtain a target tracking potential index;
extracting the occlusion occurrence frequency and feature loss proportion from the vehicle's occlusion risk assessment map and marking them as occlusion feature values;
extracting a standard occlusion risk threshold from the system database and calculating the difference between the occlusion feature values and the threshold to obtain a trackability deviation degree;
performing a weighted fusion of the target tracking potential index and the trackability deviation degree to obtain the feature evaluation reference value.
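A minimal sketch of this evaluation, assuming a simple clamp-based normalization, a mean-absolute-difference form for the deviation degree, and hypothetical fusion weights (the disclosure does not fix these details):

```python
def feature_evaluation_reference(contour_match: float,
                                 occlusion_values: list,
                                 occlusion_threshold: float,
                                 w_potential: float = 0.6,   # hypothetical weight
                                 w_deviation: float = 0.4) -> float:
    """Normalize the contour matching degree into a tracking potential
    index, compute the trackability deviation from the occlusion
    threshold, and weight-fuse the two into the reference value."""
    # normalized target tracking potential index, clamped to [0, 1]
    potential = max(0.0, min(1.0, contour_match))
    # deviation: mean absolute difference of occlusion feature values vs. threshold
    deviation = sum(abs(v - occlusion_threshold)
                    for v in occlusion_values) / len(occlusion_values)
    return w_potential * potential + w_deviation * deviation
```

For example, a matching degree of 0.9 with occlusion feature values [0.2, 0.4] against a threshold of 0.3 yields 0.6 × 0.9 + 0.4 × 0.1 = 0.58.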
Preferably, the predictive analysis of the scene's dynamic perception efficiency includes:
extracting the number of image frames, the number of motion parameters and the log storage volume from the real-time data flow of the system to obtain a data flow parameter set;
extracting historical tracking data of similar scenes from the system database, constructing a performance prediction model based on a dynamic balance algorithm, inputting the flow parameter set into the model, and outputting the data processing rate, computing power utilization and delay fluctuation rate over the target time interval;
normalizing the data processing rate, computing power utilization and delay fluctuation rate to obtain the dynamic perception evaluation value.
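The normalization step can be illustrated as follows. The disclosure does not specify the exact normalization; this sketch assumes min/max clamping to [0, 1], inversion of the delay fluctuation rate (higher fluctuation worsens the score), and an unweighted average:

```python
def dynamic_perception_score(processing_rate: float, max_rate: float,
                             power_utilization: float,
                             delay_fluctuation: float) -> float:
    """Normalize the three predicted indicators into [0, 1] and average
    them to form the dynamic perception evaluation value."""
    rate_norm = min(processing_rate / max_rate, 1.0)       # relative throughput
    util_norm = min(max(power_utilization, 0.0), 1.0)      # fraction of capacity
    delay_norm = 1.0 - min(max(delay_fluctuation, 0.0), 1.0)  # invert: less is better
    return (rate_norm + util_norm + delay_norm) / 3.0
```

For instance, a predicted rate of 80 against a reference maximum of 100, utilization 0.5 and fluctuation 0.2 gives (0.8 + 0.5 + 0.8) / 3 = 0.7.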
Preferably, the joint analysis of the system's context association efficiency includes:
obtaining the quantification result of the system's abnormal tracking state, setting a correction factor for it, and computing a tracking influence correction value;
normalizing the feature evaluation reference value, the dynamic perception evaluation value and the tracking influence correction value to obtain a context association evaluation value;
setting a context association evaluation threshold, generating an edge regulation signal if the association evaluation value is greater than or equal to the threshold, and generating a cloud reinforcement signal if it is less than the threshold.
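The signal-routing rule above can be sketched directly. The averaging used as "normalization" and the threshold value are illustrative assumptions:

```python
def context_association_signal(feature_ref: float,
                               perception_eval: float,
                               tracking_correction: float,
                               threshold: float = 0.5):  # hypothetical threshold
    """Combine the three evaluation values into a context association
    evaluation value and route to an edge regulation signal (value at or
    above threshold) or a cloud reinforcement signal (below threshold)."""
    assoc = (feature_ref + perception_eval + tracking_correction) / 3.0
    signal = "edge_regulation" if assoc >= threshold else "cloud_reinforcement"
    return assoc, signal
```

A high association value keeps the work at the edge; a low value escalates to cloud reinforcement.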
Preferably, the multi-level optimization strategy analysis includes:
if an edge regulation signal is captured, triggering a tracking maintenance instruction, and dynamically adjusting the vehicle's tracking window size and feature matching precision according to the instruction to generate tracking maintenance parameters;
if a cloud reinforcement signal is captured, triggering a calculation force distribution instruction, and dynamically planning the system's local storage capacity and cloud computing tasks according to the instruction to generate calculation force distribution parameters.
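A sketch of this two-branch dispatch, with all parameter names and values being hypothetical placeholders (the disclosure only names the categories of parameters, not their representation):

```python
def generate_strategy(signal: str) -> dict:
    """Dispatch on the regulation signal: an edge regulation signal
    yields tracking maintenance parameters (window size, matching
    precision); a cloud reinforcement signal yields calculation force
    distribution parameters (local storage quota, cloud task list)."""
    if signal == "edge_regulation":
        return {"type": "tracking_maintenance",
                "tracking_window": 1.2,        # relative window scale (placeholder)
                "match_precision": "high"}
    elif signal == "cloud_reinforcement":
        return {"type": "compute_allocation",
                "local_storage_gb": 4,         # placeholder quota
                "cloud_tasks": ["feature_reextraction", "model_update"]}
    raise ValueError(f"unknown signal: {signal}")
```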
Preferably, the system further comprises:
the authority management and control module is used for monitoring the tracking operation authority of the system in real time, extracting the identity of the operating subject, its permitted operation scope and any violation characteristics, and generating an authority association evaluation value;
the context association analysis module further combines the authority association evaluation value to apply operation authority constraints in the context association efficiency analysis.
Preferably, the real-time monitoring of the system's tracking operation authority includes:
obtaining the authority type and historical violation records of the operating subject by parsing the system's license data during operation control;
matching the authority type against a preset operation permission list and calculating its deviation degree;
counting the ratio of the override operation count to the total operation count in the historical violation records to obtain an operation behavior abnormality index;
performing a weighted fusion of the authority type deviation degree and the operation behavior abnormality index to generate the authority association evaluation value.
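This authority evaluation can be sketched as below; the equal fusion weights are an assumption for illustration:

```python
def authority_evaluation(permission_deviation: float,
                         override_ops: int,
                         total_ops: int,
                         w_dev: float = 0.5,        # hypothetical weights
                         w_abnormal: float = 0.5) -> float:
    """Operation behavior abnormality index = override operations /
    total operations; weight-fuse it with the authority type deviation
    degree to get the authority association evaluation value."""
    abnormal_index = override_ops / total_ops if total_ops else 0.0
    return w_dev * permission_deviation + w_abnormal * abnormal_index
```

For example, a deviation of 0.4 with 2 override operations out of 10 yields 0.5 × 0.4 + 0.5 × 0.2 = 0.3.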
Preferably, the system further comprises:
the dynamic adjustment module is used for adjusting the resource configuration of system operation and maintenance in real time according to the generated tracking maintenance parameters and calculation force distribution parameters, and feeding the adjusted parameters back to the multi-mode information acquisition module to form a closed-loop optimization flow.
Preferably, the adjustment process of the dynamic adjustment module includes:
if the tracking maintenance parameters involve adjusting the tracking window size, synchronously adjusting the sensor scheduling parameters in the resource configuration;
if the calculation force distribution parameters involve cloud computing task optimization, synchronously adjusting the network bandwidth allocation parameters in the resource configuration;
inputting the adjusted parameters into the multi-mode information acquisition module and restarting the optimization verification process.
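The closed-loop coupling rules can be sketched as follows; the configuration field names (`sensor_rate_hz`, `bandwidth_mbps`) and coupling factors are hypothetical, since the disclosure specifies only which parameters move together:

```python
def adjust_resources(config: dict, maintenance: dict, allocation: dict) -> dict:
    """Closed-loop adjustment: tracking-window changes pull sensor
    scheduling along; cloud-task changes pull network bandwidth along.
    The returned configuration is fed back to the multi-mode
    information acquisition module."""
    new_config = dict(config)
    if "tracking_window" in maintenance:
        # couple the sensor scheduling rate to the tracking window scale
        new_config["sensor_rate_hz"] = (config["sensor_rate_hz"]
                                        * maintenance["tracking_window"])
    if allocation.get("cloud_tasks"):
        # reserve extra bandwidth per pending cloud computing task
        new_config["bandwidth_mbps"] = (config["bandwidth_mbps"]
                                        + 10 * len(allocation["cloud_tasks"]))
    return new_config
```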
Compared with the prior art, the invention provides the following beneficial effects:
The multi-mode information acquisition module acquires the visual feature data, motion parameter data and environmental context information of the vehicle in a multi-dimensional manner, overcoming the traditional system's reliance on a single sensor. This fused acquisition of multi-source data lets the system capture vehicle feature information more comprehensively across scenes; even when vehicles are dense, illumination changes sharply or occlusion occurs, the complementary multi-dimensional data reduce tracking errors caused by an inadequate single data source. Meanwhile, the module quantitatively identifies abnormal tracking states and triggers the corresponding modules, enabling a timely response to anomalies during tracking and preventing an abnormal state from persisting.
The target feature extraction module extracts the contour feature parameters, color distribution features and texture detail features of the vehicle and dynamically evaluates the target's trackability, breaking the constraint of the traditional fixed feature extraction mode. By considering multiple feature parameters together, the system characterizes the target vehicle more accurately, improving its ability to distinguish similar vehicles. The dynamic evaluation mechanism lets the system adjust the tracking strategy in real time as target features change; when target features blur or shift, the system perceives this promptly and provides a reference for subsequent strategy adjustment, enhancing adaptability to dynamic changes in target features.
The scene dynamic perception module monitors the system's real-time data flow, local computing load and communication delay parameters and predictively analyzes the scene's dynamic perception efficiency, addressing the rigid calculation power allocation of traditional systems. By monitoring key operating parameters in real time, the system can perceive trends in data processing pressure and communication delay in advance, providing a basis for subsequent calculation allocation and strategy adjustment. This predictive capability lets the system prepare before data flow surges or computing load spikes, avoiding tracking delay or data loss caused by insufficient resources and ensuring stable operation in dynamic scenes.
The context association analysis module receives the feature evaluation reference value and the dynamic perception evaluation value, jointly analyzes the system's context association efficiency, and generates edge regulation and cloud reinforcement signals, achieving coordinated linkage among the system's modules. Traditional systems often run their modules in isolation without effective association analysis; by jointly considering the target features and the scene dynamics evaluation, this module grasps the system's overall operating state more comprehensively. Based on the generated regulation signals, edge and cloud computing tasks can be allocated sensibly: the edge rapidly handles tasks with strict real-time requirements while the cloud handles complex data analysis and strategy optimization, improving the system's overall operating efficiency.
The tracking strategy generation module receives the edge regulation and cloud reinforcement signals, performs multi-level optimization strategy analysis, and generates the vehicle's tracking maintenance parameters and calculation force distribution parameters, overcoming the single-strategy limitation of traditional systems. The multi-level optimization strategy lets the system generate targeted tracking parameters and calculation allocation schemes for different scenes and target states. For example, when target features are clear and the scene is stable, a relatively simple tracking strategy is used with less computing power; when target features are blurred or the scene is complex, more elaborate tracking maintenance parameters are activated and computing support is increased, improving the utilization of computing resources while preserving tracking precision.
Through the cooperative work of these modules, the system achieves accurate and stable tracking and recognition of vehicle targets in complex dynamic environments, effectively remedying the shortcomings of traditional systems in multi-source data fusion, feature extraction adaptability and calculation allocation flexibility, and improving overall vehicle target tracking and recognition performance.
Drawings
FIG. 1 is a schematic diagram of the operation of an adaptive context-aware vehicle target tracking recognition system according to the present invention;
- FIG. 2 is a flow chart of quantitative identification of the abnormal tracking state;
- FIG. 3 is a flow chart of dynamic evaluation of target trackability;
- FIG. 4 is a flow chart of scene dynamic perception efficiency prediction;
- FIG. 5 is a flow chart of the context association efficiency joint analysis.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIGS. 1-5, the present invention provides an adaptive context-aware vehicle target tracking and recognition system, which includes a multi-mode information acquisition module, a target feature extraction module, a scene dynamic perception module, a context association analysis module and a tracking strategy generation module. The system is described in detail below:
The multi-mode information acquisition module performs multi-dimensional acquisition of the vehicle's visual feature data, motion parameter data and environmental context information to obtain a scene multi-source data set. It quantitatively identifies abnormal tracking states based on this data set, generates a tracking early-warning signal, and triggers the scene dynamic perception module and the context association analysis module accordingly.
The target feature extraction module extracts the vehicle's contour feature parameters, color distribution features and texture detail features, and dynamically evaluates the target's trackability to obtain a feature evaluation reference value.
The scene dynamic perception module monitors the system's real-time data flow, local computing load and communication delay parameters, and predictively analyzes the scene's dynamic perception efficiency to obtain a dynamic perception evaluation value.
The context association analysis module receives the feature evaluation reference value and the dynamic perception evaluation value, jointly analyzes the system's context association efficiency, and generates an edge regulation signal or a cloud reinforcement signal.
The tracking strategy generation module receives the edge regulation and cloud reinforcement signals, performs multi-level optimization strategy analysis, and generates the vehicle's tracking maintenance parameters and calculation force distribution parameters.
Embodiment 1. The process by which the multi-mode information acquisition module quantitatively identifies the abnormal tracking state begins with a deep parsing of the vehicle's visual feature data.
The analysis focuses on the target ambiguity, color offset and contour integrity in the visual features, extracted by dedicated image analysis algorithms. For target ambiguity, the sharpness of vehicle edges and pixel gradient changes in the image are analyzed and converted into a quantifiable ambiguity value; the color offset value is calculated by comparing the RGB color space distributions of the vehicle across frames; and the contour integrity value is determined from the continuity of the vehicle's contour lines and the proportion of missing segments. Once the three values are obtained, a weighted calculation with preset proportions yields the tracking anomaly contribution degree, which reflects the combined influence of the visual feature layer on potential tracking anomalies.
While the tracking anomaly contribution degree is being computed, the system's sensor communication state parameters are analyzed in parallel. The signal interruption frequency, parameter matching degree and data packet loss rate are extracted from the sensors' real-time communication data and converted into a communication interruption value, a parameter matching value and a data packet loss value, which are jointly marked as tracking stability feature values. A tracking stability threshold, determined from the sensors' normal operating parameter ranges, is preset in the system. Each computed feature value is compared against this threshold; when a value falls below it, the corresponding sensor is marked as abnormal, indicating a fault in its communication. The number of abnormal sensors is counted and divided by the total number of sensors in the system to obtain the communication anomaly rate, which directly characterizes the abnormality of the sensor communication layer.
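The abnormal-sensor counting described above reduces to a simple ratio; a minimal sketch (sensor identifiers and the threshold value are illustrative):

```python
def communication_anomaly_rate(sensor_features: dict, threshold: float) -> float:
    """Mark a sensor abnormal when its tracking stability feature value
    falls below the threshold; return abnormal count / total count."""
    abnormal = [sid for sid, value in sensor_features.items()
                if value < threshold]
    return len(abnormal) / len(sensor_features) if sensor_features else 0.0
```

With three sensors whose feature values are 0.9, 0.4 and 0.8 against a threshold of 0.5, one sensor is abnormal and the rate is 1/3.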
At the same time, the illumination intensity change and occlusion occurrence frequency in the environmental context information are collected and analyzed. Illumination change is measured by sampling illumination values at different times with a light sensor and computing the fluctuation range per unit time; occlusion frequency is obtained by comparing consecutive frames and counting how often the vehicle is occluded by other objects. A dynamic simulation calculation combines these two items, accounting for the effect of illumination change on image acquisition quality and the interference of occlusion with recognition continuity, and yields the environmental interference evaluation value, which reflects how strongly external environmental factors disturb the tracking process.
After the tracking anomaly contribution, the communication anomaly rate, and the environmental interference evaluation value have been calculated, each of the three values is assigned a weight coefficient set according to how strongly that factor influences tracking anomalies in different scenes. Each value is multiplied by its weight and the products are summed to obtain the anomaly state fusion value. An anomaly threshold is preset in the system, determined from the critical boundary between normal and abnormal tracking in a large volume of historical tracking data. The fusion value is compared with this threshold; if it is higher, the current tracking state is clearly abnormal, so the multi-mode information acquisition module generates a tracking early-warning signal and triggers the scene dynamic perception module and the context correlation analysis module into their corresponding working states to handle the possible tracking anomaly. Through multi-dimensional data acquisition and comprehensive analysis, the whole process achieves accurate quantitative identification of abnormal tracking states and provides a reliable basis for subsequent tracking adjustment.
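The weighted fusion and threshold comparison can be sketched as follows. The weight vector and the 0.6 threshold are placeholders; the patent only says they come from scene analysis and historical data.

```python
# Minimal sketch of the anomaly fusion step; weights and threshold
# are illustrative placeholders, not values from the patent.

def anomaly_fusion(contribution, comm_anomaly_rate, env_interference,
                   weights=(0.4, 0.3, 0.3)):
    """Weighted sum of the three quantified anomaly indicators."""
    values = (contribution, comm_anomaly_rate, env_interference)
    return sum(w * v for w, v in zip(weights, values))

def tracking_warning(fusion_value, threshold=0.6):
    """Emit a tracking early-warning signal when the fused anomaly
    state exceeds the preset anomaly threshold."""
    return fusion_value > threshold

fused = anomaly_fusion(0.8, 1/3, 0.7)   # 0.32 + 0.10 + 0.21 = 0.63
```

With these placeholder numbers the fusion value 0.63 exceeds the threshold, so a warning would be raised and the downstream modules triggered.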
Embodiment 2. When the target feature extraction module dynamically evaluates the trackability of a target, it performs spatial distribution analysis on the vehicle's contour feature parameters. Edge detection and contour extraction are applied to the captured vehicle images to obtain contour data at different angles and distances, including key-point coordinates, line trends, and overall form. At the same time, the relative positions of the vehicle and surrounding objects in consecutive frames are analyzed to estimate possible occlusion regions and occlusion probability, producing an occlusion risk evaluation map that shows the distribution of the vehicle's potential occlusion risk during tracking.
A standard tracked-vehicle contour reference distribution map is retrieved from the system database; it is constructed from a large body of vehicle contour data under standard working conditions and covers the typical contour features of different vehicle types and postures. The contour change distribution map of the target vehicle is topologically compared with the reference map, and quantities such as the number of matched contour key points, line similarity, and overall form fitness are computed to obtain the contour matching degree. This matching degree is then normalized, i.e. mapped into a preset value range, to form the target tracking potential index, which reflects how well the target vehicle's contour matches the standard contour and serves as one of the basic indicators for evaluating target trackability.
The occlusion occurrence frequency and the feature loss proportion are extracted from the vehicle's occlusion risk evaluation map and recorded as occlusion characteristic values. The occlusion occurrence frequency is the number of times the vehicle is occluded per unit time; the feature loss proportion is the fraction of the vehicle's key features (such as the license plate and vehicle-type markings) that are covered when occlusion occurs. A standard occlusion risk threshold is retrieved from the system database, determined from the maximum occlusion in historical tracking data that did not affect normal tracking. The difference between the extracted occlusion characteristic values and this threshold gives the trackability deviation degree, which reflects the gap between the actual occlusion and the standard threshold and thus how strongly occlusion affects target trackability.
The target tracking potential index and the trackability deviation degree are then fused by weighting according to a preset ratio, whose setting considers the actual effects of contour matching and occlusion in different tracking scenes. This fusion yields the feature evaluation reference value, which integrates contour features and occlusion risk into a single comprehensive measure of target trackability.
When the scene dynamic perception module predicts and analyzes scene dynamic perception efficiency, it first extracts the image frame count, the motion parameter count, and the log storage volume from the system's real-time data stream. The image frame count is the number of vehicle images captured per unit time; the motion parameter count is the number of parameters describing the vehicle's motion state (such as speed, acceleration, and heading angle); the log storage volume is the size of the log data produced while the system runs. The three parameters are combined into a data flow parameter set.
Historical tracking data of similar scenes are retrieved from the system database, including past real-time data flow, processing efficiency, and system load in those scenes. A performance prediction model based on a dynamic balance algorithm is constructed; it builds a prediction function by learning the relationship between data flow and processing performance in the historical data. The current data flow parameter set is fed into the model, which outputs the data processing rate, the computing power utilization rate, and the delay fluctuation rate over the target time interval. The data processing rate is the amount of data the system processes per unit time; the computing power utilization rate is the ratio of computing resources actually used to total computing resources; the delay fluctuation rate is the amplitude of communication delay fluctuation per unit time.
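The patent does not specify the "dynamic balance algorithm", so as a hedged stand-in the sketch below learns a one-variable linear prediction function (frame count to processing rate) from historical pairs via closed-form least squares; a real system would fit all three outputs from the full parameter set.

```python
# Hypothetical stand-in for the performance prediction model: a
# closed-form least-squares fit y = a*x + b over historical data.

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b from historical (x, y) pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx          # slope, intercept

# Illustrative historical data: frames per second vs. measured
# data processing rate (here exactly rate = 10 * frames + 10).
frames = [10, 20, 30, 40]
rates = [110, 210, 310, 410]

a, b = fit_linear(frames, rates)
predicted_rate = a * 25 + b        # predict for a current flow of 25 fps
```

The learned function plays the role of the prediction model: the current data flow parameter goes in, the expected processing rate for the target time interval comes out.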
The data processing rate, computing power utilization rate, and delay fluctuation rate are then normalized, i.e. mapped into the same value range, eliminating the dimensional differences among the parameters. The result is the dynamic perception evaluation value, which comprehensively reflects the system's perception efficiency and processing capacity for dynamic change in the current scene.
Embodiment 3. When the context correlation analysis module performs a joint analysis of the system's context correlation performance, it first invokes the quantified result of the system's abnormal tracking state, namely the anomaly state fusion value obtained by the multi-mode information acquisition module from the tracking anomaly contribution, the communication anomaly rate, and the environmental interference evaluation value. A correction factor is set for the fusion value; its magnitude depends on how strongly the abnormal state affects context correlation analysis, with different ranges of the fusion value mapping to different factor values. Multiplying the fusion value by its correction factor yields the tracking influence correction value, which is used in subsequent analysis to adjust for the influence of the abnormal state.
After the tracking influence correction value is obtained, it is normalized together with the feature evaluation reference value produced by the target feature extraction module and the dynamic perception evaluation value produced by the scene dynamic perception module. Each of the three values is mapped into the interval 0-1: the minimum possible value is subtracted from it, and the result is divided by the difference between the maximum and minimum possible values, eliminating differences of dimension and value range among the indicators. The three normalized values, now of the same magnitude, are weighted and summed according to a preset ratio to obtain the context correlation evaluation value:

$$A = \alpha F' + \beta D' + \gamma T'$$

where $A$ is the context correlation evaluation value, $F'$ the normalized feature evaluation reference value, $D'$ the normalized dynamic perception evaluation value, $T'$ the normalized tracking influence correction value, and $\alpha$, $\beta$, $\gamma$ the weight coefficients of the feature evaluation reference value, the dynamic perception evaluation value, and the tracking influence correction value respectively, with $\alpha + \beta + \gamma = 1$.
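The weighted fusion just described can be written directly; the min-max bounds and the weight values below are illustrative assumptions, with the weights constrained to sum to 1 as the text requires.

```python
# Sketch of the context correlation evaluation; the bounds and weights
# are illustrative placeholders, not values from the patent.

def minmax(v, lo, hi):
    """Map a raw value into the 0-1 interval: (v - min) / (max - min)."""
    return (v - lo) / (hi - lo)

def context_evaluation(f, d, t, weights=(0.4, 0.4, 0.2)):
    """A = alpha*F' + beta*D' + gamma*T', with alpha+beta+gamma = 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, (f, d, t)))

f_norm = minmax(0.75, 0.0, 1.0)     # feature evaluation reference value
d_norm = minmax(60.0, 0.0, 100.0)   # dynamic perception evaluation value
t_norm = minmax(0.2, 0.0, 1.0)      # tracking influence correction value
A = context_evaluation(f_norm, d_norm, t_norm)
```

The resulting value of A is what the next step compares against the preset context correlation evaluation threshold.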
A context correlation evaluation threshold is preset in the system, determined from the system's context correlation efficiency data in the normal running state; it represents the minimum efficiency level at which edge processing alone can meet the tracking requirements. If the context correlation evaluation value is below this threshold, the current context correlation effect is insufficient and edge-side processing alone cannot guarantee the tracking effect, so a cloud reinforcement signal is generated; otherwise edge processing suffices and the edge regulation signal handled below applies.
After the tracking strategy generation module receives the edge regulation signal or the cloud reinforcement signal, it starts multi-level optimization strategy analysis. When the edge regulation signal is captured, a tracking maintenance instruction is triggered. The instruction contains regulation rules for the vehicle tracking parameters, according to which the tracking window size and the feature matching precision are dynamically adjusted. The window size is adjusted from the target vehicle's movement speed and the image resolution: when the vehicle moves fast, the window is suitably enlarged so that the target does not leave the tracking range; when it moves slowly or is stationary, the window is suitably reduced to cut unnecessary computation. The feature matching precision is adjusted from the clarity of the target features: when they are clear, the precision is raised to improve tracking accuracy; when they are blurred, it is lowered to preserve tracking continuity. These adjustments produce the tracking maintenance parameters, which contain the specific values of the adjusted tracking window size, feature matching threshold, and so on.
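The adjustment rules above can be sketched as a small rule set. The speed cut-offs, scale factors, and precision values are assumptions for illustration; the patent leaves them unspecified.

```python
# Illustrative rule set for the tracking maintenance instruction;
# all numeric cut-offs and factors are hypothetical.

def adjust_window(base_size, speed_px_per_frame):
    """Enlarge the window for fast targets, shrink it for slow ones."""
    if speed_px_per_frame > 20:      # high speed: avoid losing the target
        return int(base_size * 1.5)
    if speed_px_per_frame < 5:       # slow or stationary: save compute
        return int(base_size * 0.75)
    return base_size

def matching_threshold(feature_clarity):
    """Raise precision for clear features, relax it for blurred ones."""
    return 0.9 if feature_clarity >= 0.7 else 0.6

# Tracking maintenance parameters for a fast, clearly visible vehicle.
maintenance_params = {
    "window": adjust_window(64, speed_px_per_frame=30),
    "match_threshold": matching_threshold(0.8),
}
```

The dictionary stands in for the generated tracking maintenance parameters that are later consumed by the dynamic adjustment module.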
When the cloud reinforcement signal is captured, a computing power allocation instruction is triggered. The instruction contains a plan for allocating system resources, according to which local storage capacity and cloud computing tasks are dynamically planned. For local storage, data with low access frequency are moved to cloud storage based on the real-time data flow and the access frequency of historical data, freeing local space for new data, while frequently accessed data stay local to speed up reads. For cloud computing tasks, tasks that need large computing resources but have low real-time requirements are assigned to the cloud according to their complexity and real-time demands, while simple tasks with high real-time requirements remain local. This planning produces the computing power allocation parameters, including the local and cloud storage capacity ratio and the cloud and local task assignment lists.
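The task-partitioning rule can be sketched as follows. The complexity and real-time scores and the 0.7/0.5 cut-offs are hypothetical; the patent only states the qualitative criterion.

```python
# Sketch of the cloud/edge task partitioning rule; scores and
# cut-off values are illustrative assumptions.

def allocate(tasks):
    """Send heavy, latency-tolerant tasks to the cloud; keep simple,
    latency-critical tasks at the edge."""
    cloud, edge = [], []
    for name, complexity, realtime in tasks:
        if complexity >= 0.7 and realtime < 0.5:
            cloud.append(name)
        else:
            edge.append(name)
    return {"cloud": cloud, "edge": edge}

plan = allocate([
    ("trajectory_replay", 0.9, 0.2),   # heavy, not time-critical
    ("frame_matching",    0.4, 0.9),   # light, hard real-time
    ("model_retraining",  0.8, 0.1),
])
```

The returned lists correspond to the cloud and local task assignment lists named in the computing power allocation parameters.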
Through this process, the tracking strategy generation module generates targeted tracking maintenance parameters and computing power allocation parameters for each signal type, dynamically optimizing the vehicle target tracking process so that the system adapts to different scene conditions and performance requirements.
Embodiment 4. The system comprises a rights management and control module that continuously monitors the system's tracking operation rights in real time. Monitoring starts from the license data parsed during operation control; these data record the operating subject's identity information, authorized operation scope, and operation validity period. From the parsed data the module obtains the subject's permission types, divided into several levels by operation content and covering permissions such as viewing, modifying, and exporting tracking data, and also extracts the subject's historical violation records, which include past behaviors such as override (unauthorized) operations and illegal data exports.
The parsed permission types are matched against the system's preset operation permission list, which specifies the allowed operation scope for subjects of different identities; for example, a system administrator may perform all operations while an ordinary user may only view part of the tracking data. The subject's actual permission types are compared with the scope specified in the list to compute the permission type deviation degree, whose value reflects the gap between actual and due permissions: positive if the actual permissions exceed the listed scope, negative if they fall short of it.
The frequency of override operations and the total number of operations in the historical violation records are counted, and their ratio is computed to obtain the operation behavior anomaly index. The override frequency is the number of times the subject exceeded its permission scope over a past period; the total count is the number of all operations by the subject over the same period. The index directly shows the subject's degree of behavioral compliance.
The permission type deviation degree and the operation behavior anomaly index are weighted and fused according to a preset ratio; the weights balance the roles of permission-type compliance and historical behavioral normality in the permission assessment. The fusion produces the permission association evaluation value, a quantitative index between 0 and 1 that jointly reflects the subject's permission compliance and behavioral normality: the closer to 0, the more normal the permission state; the closer to 1, the higher the permission risk.
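The permission fusion can be sketched as below. The equal weights and the clamping into [0, 1] are assumptions consistent with the stated output range, not details given in the patent.

```python
# Hedged sketch of the permission association evaluation; weights and
# clamping rule are illustrative assumptions.

def behaviour_anomaly_index(override_count, total_ops):
    """Ratio of override operations to all operations in the period."""
    return override_count / total_ops if total_ops else 0.0

def permission_evaluation(type_deviation, anomaly_index,
                          weights=(0.5, 0.5)):
    """Fuse deviation and behaviour anomaly into a 0-1 risk value;
    0 = normal permission state, 1 = maximal permission risk."""
    raw = weights[0] * type_deviation + weights[1] * anomaly_index
    return min(1.0, max(0.0, raw))   # clamp into [0, 1]

idx = behaviour_anomaly_index(override_count=3, total_ops=30)   # 0.1
risk = permission_evaluation(type_deviation=0.4, anomaly_index=idx)
```

Here a subject with a moderate permission overreach and a 10% override rate lands at a low-to-moderate risk value, which the context correlation analysis module then folds into its joint evaluation.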
When performing the joint analysis of the system's context correlation performance, the context correlation analysis module brings the generated permission association evaluation value into its scope. Building on the earlier analysis, it applies an operation-permission constraint to the context correlation evaluation: the permission association evaluation value is normalized together with the feature evaluation reference value, the dynamic perception evaluation value, and the tracking influence correction value so that all indicators share the same value interval, and the context correlation evaluation value is then recalculated with an additional weight for the permission association evaluation value, whose size depends on how strongly operation permissions affect tracking performance in the current scene.
Through this operation-permission constraint analysis, the context correlation result reflects not only target features, scene dynamics, and tracking anomalies but also the compliance of operation permissions. When the permission association evaluation value is high, indicating a permission risk that may degrade context correlation performance, the strength of any generated edge regulation or cloud reinforcement signal, or an additional permission constraint condition, is adjusted accordingly; when the value is low, operation permissions are in a normal state, their constraint on context correlation performance is small, and signal generation is driven mainly by the other evaluation indicators.
By bringing operation-permission factors into context correlation analysis, the system accounts for both technical performance and operational security, making the generated edge regulation and cloud reinforcement signals more comprehensive and ensuring, when the tracking strategy is optimized, both technical feasibility and the compliance and safety of the operation process.
Embodiment 5. The system comprises a dynamic adjustment module whose core function is to adjust the resource allocation of system operation and maintenance in real time according to the tracking maintenance parameters and computing power allocation parameters output by the tracking strategy generation module, and to feed the adjusted parameters back to the multi-mode information acquisition module, forming a closed-loop optimization flow. This lets the system continuously optimize its resource configuration from its actual running state and adapt to changing tracking demands across scenes.
After receiving the tracking maintenance parameters, the dynamic adjustment module parses them to determine whether a tracking window size adjustment is involved. The window size is an important parameter affecting both target tracking precision and system resource consumption, and its variation directly determines the range and frequency of sensor data collection. When the parameters contain a window size adjustment instruction, the module starts the corresponding resource configuration adjustment mechanism and synchronously adjusts the sensor scheduling parameters, which include the sensors' sampling frequency, working duration, and data transmission interval. For example, when the tracking window is enlarged, a wider monitoring range must be covered, so the system raises the sampling frequency of the sensors in the relevant area and shortens their data transmission interval to capture target dynamics over the larger range, while suitably extending the working duration of sensors in the window's core area to guarantee continuous acquisition of key data. When the window shrinks, the sampling frequency of non-core-area sensors is lowered and their transmission interval lengthened, cutting unnecessary resource consumption and concentrating the saved resources on monitoring the core area.
When the computing power allocation parameters received by the dynamic adjustment module involve cloud computing task optimization, another resource configuration adjustment flow is triggered, synchronously adjusting the network bandwidth allocation parameters, which include the communication bandwidth ratio between the edge and the cloud and the transmission priorities of different data types. Cloud task optimization usually redistributes tasks, for example moving part of the complex computation originally carried by the edge to the cloud, or the reverse. When more tasks go to the cloud, the system raises the edge-to-cloud bandwidth ratio so that large volumes of computation data can be transferred quickly, and gives data related to those tasks higher transmission priority to avoid processing delay from congestion. When cloud tasks decrease and the edge carries more computation, the edge-to-cloud bandwidth ratio is lowered and more bandwidth is allocated to inter-device communication at the edge, keeping edge data processing efficiently coordinated.
After the sensor scheduling or network bandwidth allocation parameters have been adjusted, the dynamic adjustment module packages the adjusted parameters into a new resource allocation scheme and feeds it to the multi-mode information acquisition module, which then restarts the optimization verification process. It collects multi-source data under the new parameters, including visual feature and motion parameter data from the adjusted sensors and environmental context information transmitted under the new bandwidth configuration. The system then processes and analyzes the data through the target feature extraction, scene dynamic perception, context correlation analysis, and tracking strategy generation modules in the normal workflow, generating new tracking maintenance and computing power allocation parameters. The dynamic adjustment module receives the new parameters, compares them with the previous ones, and continues adjusting while they differ, until the generated parameters become stable, forming a continuously optimizing closed loop.
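The closed-loop stop condition ("continue adjusting until the parameters tend to be stable") can be sketched as an iteration to a fixed point. The `generate` callback stands in for the whole module chain, and the tolerance and round limit are assumptions.

```python
# Sketch of the closed-loop convergence check; generate() is a
# hypothetical stand-in for the full acquisition-to-strategy pipeline.

def optimise(generate, initial, tol=1e-3, max_rounds=20):
    """Iterate generate(params) -> new params until the change between
    consecutive rounds falls below tol, i.e. parameters are stable."""
    params = initial
    for _ in range(max_rounds):
        new = generate(params)
        if all(abs(a - b) <= tol for a, b in zip(new, params)):
            return new
        params = new
    return params

# Toy pipeline whose fixed point is (2.0, 4.0): each round moves the
# parameters halfway toward the target, so changes shrink geometrically.
target = (2.0, 4.0)
step = lambda p: tuple(x + 0.5 * (t - x) for x, t in zip(p, target))
stable = optimise(step, initial=(0.0, 0.0))
```

With a geometrically contracting pipeline like this, the loop terminates well inside the round limit; a real deployment would also need a guard for oscillating parameter sequences.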
Through this closed-loop optimization flow, the system continuously corrects its resource configuration from parameter feedback during actual operation, keeping tracking maintenance and computing power allocation matched to current scene demands and maintaining stable, efficient vehicle target tracking and recognition in complex, changing environments.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between them. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510969127.2A CN120472404B (en) | 2025-07-15 | 2025-07-15 | Self-adaptive context-aware vehicle target tracking and identifying system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN120472404A true CN120472404A (en) | 2025-08-12 |
| CN120472404B CN120472404B (en) | 2025-09-05 |
Family
ID=96628894
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510969127.2A Active CN120472404B (en) | 2025-07-15 | 2025-07-15 | Self-adaptive context-aware vehicle target tracking and identifying system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120472404B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114581863A (en) * | 2022-03-03 | 2022-06-03 | 广西新发展交通集团有限公司 | Method and system for identifying dangerous state of vehicle |
| CN118537835A (en) * | 2024-04-17 | 2024-08-23 | 广东工业大学 | A traffic dynamic occlusion tracking method and system based on multimodal fusion knowledge graph |
| CN119575368A (en) * | 2025-02-07 | 2025-03-07 | 中联德冠科技(北京)有限公司 | A UAV multi-target tracking method and system based on vision and millimeter-wave radar information fusion |
| CN120147978A (en) * | 2025-02-05 | 2025-06-13 | 南京广播电视系统集成有限公司 | A video surveillance system based on image feature analysis |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120726540A (en) * | 2025-08-25 | 2025-09-30 | 北京长河数智科技有限责任公司 | A method for statistics and analysis of dynamic bright spot targets based on visual tracking |
| CN120726540B (en) * | 2025-08-25 | 2025-11-07 | 北京长河数智科技有限责任公司 | Dynamic bright spot target statistics and analysis method based on visual tracking |
| CN120778136A (en) * | 2025-09-09 | 2025-10-14 | 上海博礼智能科技有限公司 | Unmanned vehicle dynamic path planning method based on multi-source sensor fusion |
Also Published As
| Publication number | Publication date |
|---|---|
| CN120472404B (en) | 2025-09-05 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN120472404B (en) | Self-adaptive context-aware vehicle target tracking and identifying system | |
| CN117312801B (en) | AI-based smart city monitoring system and method | |
| CN120088989B (en) | Traffic road network multi-mode sensing abnormal event early warning method and system | |
| CN120495963B (en) | Security monitoring video analysis method and system based on AI | |
| CN116721549B (en) | Traffic flow detection system and detection method | |
| CN120156538B (en) | Driving behavior analysis method and system based on multimodal sensor fusion | |
| CN117596755B (en) | Intelligent control method and system for street lamp of Internet of things | |
| CN118571034A (en) | Sensing, calculating and controlling integrated intelligent control device and traffic light | |
| CN118278841B (en) | Large-piece transportation supervision and early warning method, system and device based on driving trajectory | |
| CN117115752A (en) | A highway video monitoring method and system | |
| CN119089161A (en) | An intelligent vehicle audit method and system based on big data edge computing equipment | |
| CN119151159A (en) | Electric vehicle charging load prediction method and system considering traffic flow | |
| Wu et al. | A deep learning-based car accident detection approach in video-based traffic surveillance | |
| CN117173913B (en) | Traffic control method and system based on traffic flow analysis at different time periods | |
| CN119622317B (en) | Robot environment perception data processing method, system, equipment and medium | |
| CN120612819A (en) | A traffic control system based on multimodal data fusion | |
| KR102494953B1 (en) | On-device real-time traffic signal control system based on deep learning | |
| CN118506290B (en) | AI (advanced technology attachment) -recognition-based beam field construction safety quality monitoring method and system | |
| CN120279541A (en) | Vehicle paint recognition system based on AI and data analysis | |
| CN120220031A (en) | Intelligent video analysis method and storage medium based on large model scheduling | |
| CN119960989A (en) | A SLAM system with adaptive energy consumption management and energy consumption optimization method | |
| CN119007131A (en) | Road side digital video monitoring method and system based on deep learning | |
| CN118606667A (en) | Intelligent traffic scene recognition method driven by spatiotemporal element association and safety requirements | |
| CN120147608B (en) | Target detection tracking method combining YOLOv detection algorithm and KCF tracking algorithm | |
| CN118172711B (en) | AI big data intelligent management method and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||