CN120472404A - Adaptive context-aware vehicle target tracking and recognition system - Google Patents

Adaptive context-aware vehicle target tracking and recognition system

Info

Publication number
CN120472404A
CN120472404A (application CN202510969127.2A)
Authority
CN
China
Prior art keywords
tracking
value
module
context
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510969127.2A
Other languages
Chinese (zh)
Other versions
CN120472404B (en)
Inventor
李博
王瑞
方舒
李俊
黄宇暄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Boli Intelligent Technology Co ltd
Original Assignee
Shanghai Boli Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Boli Intelligent Technology Co ltd filed Critical Shanghai Boli Intelligent Technology Co ltd
Priority to CN202510969127.2A priority Critical patent/CN120472404B/en
Publication of CN120472404A publication Critical patent/CN120472404A/en
Application granted granted Critical
Publication of CN120472404B publication Critical patent/CN120472404B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776 Validation; Performance evaluation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/96 Management of image or video recognition tasks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of vehicle target tracking and recognition, and discloses an adaptive context-aware vehicle target tracking and recognition system. The system comprises a multi-modal information acquisition module, a target feature extraction module, a scene dynamic perception module, a context association analysis module and a tracking strategy generation module. The multi-modal information acquisition module collects multi-dimensional data and triggers the associated modules; the target feature extraction module extracts multiple vehicle features and evaluates trackability; the scene dynamic perception module monitors system parameters and predicts dynamic perception efficiency; the context association analysis module jointly analyzes the evaluation values and generates regulation signals; and the tracking strategy generation module generates tracking maintenance and computing-power allocation parameters based on the regulation signals. The system adapts to complex dynamic scenes and improves the stability and accuracy of vehicle target tracking and recognition.

Description

Adaptive context-aware vehicle target tracking and recognition system
Technical Field
The invention relates to the technical field of vehicle target tracking and recognition, and in particular to an adaptive context-aware vehicle target tracking and recognition system.
Background
With the rapid development of intelligent transportation systems, vehicle target tracking and recognition technology is widely applied in scenarios such as autonomous driving, traffic monitoring and vehicle-road coordination. However, current vehicle target tracking and recognition systems still face many challenges in complex dynamic environments. In practice, traffic scenes are highly dynamic and uncertain. For example, on urban roads at peak hours vehicles are dense and their driving states change constantly, with frequent acceleration, deceleration, lane changes and overtaking; at the same time, weather conditions (such as rain, heavy fog or strong glare) can markedly alter the visual appearance of vehicles. Most traditional tracking systems rely on a single vision sensor, making it difficult to capture a vehicle's dynamic information comprehensively; under occlusion or abrupt illumination changes they are prone to tracking drift or even target loss.
Some prior-art systems have introduced multi-sensor fusion, but remain limited in data processing and feature extraction. Most adopt a fixed feature-extraction scheme that captures only part of a vehicle's features and cannot adjust the extraction strategy as the scene changes. For example, some systems focus only on vehicle contours and color and ignore the discriminative value of texture detail in complex environments, making it hard to distinguish the target vehicle in scenes containing many similar vehicles and reducing tracking accuracy.
Existing systems also lack flexibility in allocating computing resources when processing real-time data. As vehicle counts grow and acquisition precision improves, the system's real-time data flow increases rapidly, and local computing load and communication delay become increasingly prominent. Conventional systems typically use a fixed computing-power allocation and cannot dynamically rebalance edge and cloud resources according to real-time data flow and computing load; response speed therefore drops, tracking delay grows, and under high load data-processing congestion can even occur.
Existing systems make insufficient use of context information and lack deep perception and association analysis of dynamic scene changes. For example, tracking performance is significantly affected by different communication environments or computing conditions, yet existing systems often cannot perceive these changes in time and adjust accordingly. Context information includes ambient illumination, road conditions and communication signal strength, all closely related to the vehicle's tracking state; ignoring it reduces the system's suitability for complex scenes and makes a stable tracking effect hard to maintain.
The strategy-generation mechanism of existing tracking systems is also monolithic and lacks multi-level optimization capability. When abnormal conditions such as blurred target features or occlusion occur during tracking, the system cannot quickly generate an effective coping strategy and can only fall back on a preset fixed algorithm, so tracking stability and robustness suffer. For example, in a highway scene where a fast-moving vehicle is briefly occluded, an existing system may lose the target because tracking parameters cannot be adjusted in time, degrading subsequent tracking.
In short, current vehicle target tracking and recognition systems have evident shortcomings in dynamic-scene adaptability, multi-modal information fusion, dynamic computation allocation and context association analysis, and struggle to meet the demands of high-precision, high-stability tracking and recognition of vehicle targets in complex traffic environments.
Disclosure of Invention
The present invention aims to provide an adaptive context-aware vehicle target tracking and recognition system that solves the above problems.
To this end, the invention provides an adaptive context-aware vehicle target tracking and recognition system, the system comprising:
a multi-modal information acquisition module, configured to collect a vehicle's visual feature data, motion parameter data and environmental context information in multiple dimensions to obtain a scene multi-source data set, quantitatively identify an abnormal tracking state based on the scene multi-source data set, generate a tracking early-warning signal, and trigger the scene dynamic perception module and the context association analysis module according to the generated tracking early-warning signal;
a target feature extraction module, configured to extract the vehicle's contour feature parameters, color distribution features and texture detail features, and dynamically evaluate the target's trackability to obtain a feature evaluation reference value;
a scene dynamic perception module, configured to monitor the system's real-time data flow, local computing load and communication delay parameters, and predictively analyze the scene's dynamic perception efficiency to obtain a dynamic perception evaluation value;
a context association analysis module, configured to receive the feature evaluation reference value and the dynamic perception evaluation value, jointly analyze the system's context association efficiency, and generate an edge regulation signal and a cloud reinforcement signal;
a tracking strategy generation module, configured to receive the edge regulation signal and the cloud reinforcement signal, perform multi-level optimization strategy analysis, and generate the vehicle's tracking maintenance parameters and computing-power allocation parameters.
Preferably, quantitatively identifying the abnormal tracking state based on the scene multi-source data set includes:
extracting target ambiguity, color offset and contour integrity from the vehicle's visual feature data to obtain an ambiguity value, a color offset value and a contour integrity value, and weighting the three values to obtain a tracking anomaly contribution degree;
extracting signal interruption frequency, parameter matching degree and data packet loss rate from the system's sensor communication state parameters to obtain a communication interruption value, a parameter matching value and a data packet loss value, and marking them as tracking stability feature values; setting a tracking stability threshold and comparing the feature values against it, marking a sensor as abnormal when its feature value falls below the threshold; counting the ratio of abnormal sensors to the total number of sensors in the current system to obtain a communication anomaly rate; and extracting illumination intensity changes and occlusion occurrence frequency from the environmental context information and performing dynamic simulation calculation to obtain an environmental interference evaluation value;
multiplying the tracking anomaly contribution degree, the communication anomaly rate and the environmental interference evaluation value by their respective weight coefficients and summing the products to obtain an abnormal-state fusion value, comparing the fusion value with a preset anomaly threshold, and generating a tracking early-warning signal if the fusion value exceeds the threshold.
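The weighted-fusion-and-threshold step above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the weight coefficients and the anomaly threshold are hypothetical placeholders (the patent says they are tuned per scene and derived from historical tracking data).

```python
def fuse_anomaly_state(tracking_contrib, comm_anomaly_rate, env_interference,
                       weights=(0.5, 0.3, 0.2), threshold=0.6):
    """Weighted sum of the three anomaly indicators, compared to a preset threshold.

    weights and threshold are illustrative assumptions, not values from the patent.
    Returns (fusion value, tracking early-warning flag).
    """
    fused = (weights[0] * tracking_contrib
             + weights[1] * comm_anomaly_rate
             + weights[2] * env_interference)
    return fused, fused > threshold
```

For example, `fuse_anomaly_state(0.8, 0.5, 0.7)` yields a fusion value of 0.69, above the assumed 0.6 threshold, so the early-warning flag is raised.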
Preferably, dynamically evaluating the target's trackability includes:
spatially analyzing the vehicle's contour feature parameters to generate a corresponding contour change distribution map and an occlusion risk assessment map;
extracting a standard tracked-vehicle contour reference distribution map from the system database, topologically comparing the target vehicle's contour change distribution map with the reference map, calculating their contour matching degree, and normalizing it to obtain a target tracking potential index;
extracting the occlusion occurrence frequency and the feature loss proportion from the vehicle's occlusion risk assessment map and marking each as an occlusion feature value;
extracting a standard occlusion risk threshold from the system database and computing the difference between the occlusion feature values and the threshold to obtain a trackability deviation degree;
and performing weighted fusion of the target tracking potential index and the trackability deviation degree to obtain the feature evaluation reference value.
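The evaluation steps above reduce to a small scoring function. The sketch below is a hedged illustration: the normalization, the deviation formula (excess over the threshold) and the fusion weight `alpha` are assumptions, since the patent does not fix concrete formulas.

```python
def feature_evaluation_reference(contour_match, occlusion_freq, loss_ratio,
                                 occlusion_threshold=0.3, alpha=0.7):
    """Fuse the tracking-potential index and trackability deviation into one value.

    contour_match: contour matching degree vs. the reference distribution map.
    occlusion_freq, loss_ratio: the two occlusion feature values.
    occlusion_threshold and alpha are illustrative placeholders.
    """
    # Normalized contour matching degree -> target tracking potential index.
    potential = max(0.0, min(1.0, contour_match))
    # Trackability deviation: how far each occlusion feature exceeds the threshold.
    deviation = (max(occlusion_freq - occlusion_threshold, 0.0)
                 + max(loss_ratio - occlusion_threshold, 0.0))
    # Weighted fusion: high potential raises, high deviation lowers, the reference value.
    return alpha * potential - (1 - alpha) * deviation
```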
Preferably, predictively analyzing the scene's dynamic perception efficiency includes:
extracting the number of image frames, the number of motion parameters and the log storage amount from the system's real-time data flow to obtain a data-flow parameter set;
extracting historical tracking data of similar scenes from the system database, constructing a performance prediction model based on a dynamic balance algorithm, feeding the data-flow parameter set into the model, and outputting the data processing rate, computing-power utilization rate and delay fluctuation rate over a target time interval;
and normalizing the data processing rate, computing-power utilization rate and delay fluctuation rate to obtain the dynamic perception evaluation value.
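The final normalization step can be sketched as follows. The prediction model itself is out of scope here; this only shows how three predicted metrics might be folded into a single evaluation value. The scaling cap and the equal averaging are assumptions for illustration.

```python
def dynamic_perception_value(processing_rate, compute_util, delay_fluct,
                             rate_cap=1000.0):
    """Normalize the three predicted metrics into one dynamic perception value.

    rate_cap (items/s treated as 'full marks') is a hypothetical scale factor.
    Higher processing rate and utilization are better; lower delay fluctuation is better.
    """
    norm_rate = min(processing_rate / rate_cap, 1.0)
    norm_util = min(max(compute_util, 0.0), 1.0)
    norm_delay = 1.0 - min(max(delay_fluct, 0.0), 1.0)  # invert: less fluctuation is better
    return (norm_rate + norm_util + norm_delay) / 3.0
```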
Preferably, jointly analyzing the system's context association efficiency includes:
taking the quantified abnormal-tracking-state result, setting a correction factor for it, and computing a tracking influence correction value;
normalizing the feature evaluation reference value, the dynamic perception evaluation value and the tracking influence correction value to obtain a context association evaluation value;
setting a context association evaluation threshold, generating an edge regulation signal if the association evaluation value is greater than or equal to the threshold, and generating a cloud reinforcement signal if it is below the threshold.
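The routing logic above can be sketched as a short function. The correction-factor formula, the way the correction enters the normalized score, and the threshold are all illustrative assumptions; the patent only specifies the comparison-and-route behavior.

```python
def context_association(feature_ref, perception_val, anomaly_fusion,
                        correction_factor=0.8, threshold=0.5):
    """Joint analysis: fuse the two evaluation values with an anomaly correction,
    then route to edge regulation or cloud reinforcement.

    correction_factor and threshold are hypothetical placeholders.
    """
    # Tracking influence correction derived from the quantified anomaly state.
    correction = correction_factor * anomaly_fusion
    # Normalized joint score: a worse anomaly state lowers the score.
    score = (feature_ref + perception_val + (1.0 - correction)) / 3.0
    signal = "edge_regulation" if score >= threshold else "cloud_reinforcement"
    return score, signal
```

With a mild anomaly state the score stays high and the edge signal fires; a severe anomaly pushes the score below the threshold and hands the work to the cloud.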
Preferably, performing the multi-level optimization strategy analysis includes:
if the edge regulation signal is captured, triggering a tracking maintenance instruction and, according to the instruction, dynamically adjusting the vehicle's tracking window size and feature matching precision to generate tracking maintenance parameters;
if the cloud reinforcement signal is captured, triggering a computing-power allocation instruction and, according to the instruction, dynamically planning the system's local storage capacity and cloud computing tasks to generate computing-power allocation parameters.
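The two-branch dispatch above is essentially a signal-to-parameters mapping. The sketch below illustrates that shape only; the parameter names and concrete values (window size, precision, storage, task share) are invented for the example and do not come from the patent.

```python
def generate_strategy(signal):
    """Map a regulation signal to a parameter set (all values hypothetical)."""
    if signal == "edge_regulation":
        # Tracking maintenance parameters: window size and matching precision.
        return {"tracking_window": 64, "match_precision": 0.9}
    if signal == "cloud_reinforcement":
        # Computing-power allocation: local storage vs. share of work sent to the cloud.
        return {"local_storage_mb": 256, "cloud_task_share": 0.7}
    raise ValueError(f"unknown signal: {signal}")
```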
Preferably, the system further comprises:
a permission management and control module, configured to monitor the system's tracking operation permissions in real time, extract the operating entity's identifier, permitted operation scope and violation features, and generate a permission association evaluation value;
and the context association analysis module further combines the permission association evaluation value to apply operation-permission constraints to the context association efficiency analysis.
Preferably, monitoring the system's tracking operation permissions in real time includes:
obtaining the operating entity's permission type and historical violation records by parsing the system's runtime license data;
matching the permission type against a preset operation allow-list and calculating the permission-type deviation degree;
counting the ratio of override operations to total operations in the historical violation records to obtain an operation-behavior anomaly index;
and performing weighted fusion of the permission-type deviation degree and the operation-behavior anomaly index to generate the permission association evaluation value.
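A minimal sketch of this permission evaluation, under stated assumptions: deviation is modeled as a binary allow-list check, the anomaly index as the override-operation share, and the fusion weights are placeholders.

```python
def permission_evaluation(role, allowed_roles, override_ops, total_ops,
                          w_dev=0.6, w_beh=0.4):
    """Weighted fusion of permission-type deviation and behavior anomaly.

    The binary deviation model and the weights are illustrative assumptions.
    Higher values indicate a riskier operating entity.
    """
    # Permission-type deviation: 0 if the role is on the allow-list, else 1.
    deviation = 0.0 if role in allowed_roles else 1.0
    # Operation-behavior anomaly index: share of override operations.
    anomaly = override_ops / total_ops if total_ops else 0.0
    return w_dev * deviation + w_beh * anomaly
```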
Preferably, the system further comprises:
a dynamic adjustment module, configured to adjust the resource configuration frequency of system operation and maintenance in real time according to the generated tracking maintenance parameters and computing-power allocation parameters, and feed the adjusted parameters back to the multi-modal information acquisition module, forming a closed-loop optimization flow.
Preferably, the adjustment process of the dynamic adjustment module includes:
if the tracking maintenance parameters involve a tracking-window size adjustment, synchronously adjusting the sensor scheduling parameters in the resource configuration;
if the computing-power allocation parameters involve cloud-computing task optimization, synchronously adjusting the network bandwidth allocation parameters in the resource configuration;
and feeding the adjusted parameters into the multi-modal information acquisition module to restart the optimization verification process.
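The two synchronous-adjustment rules above can be sketched as a configuration transform. The configuration keys, the window-size cutoff and the bandwidth scaling are hypothetical; only the rule structure (tracking change implies sensor rescheduling, cloud change implies bandwidth change) comes from the text.

```python
def adjust_resources(config, tracking_params=None, compute_params=None):
    """Apply the closed-loop adjustment rules to a resource configuration.

    All keys and magnitudes are illustrative placeholders. The returned
    configuration would be fed back to the acquisition module for re-verification.
    """
    cfg = dict(config)
    if tracking_params and "tracking_window" in tracking_params:
        # Window-size change implies re-scheduling the sensors.
        cfg["sensor_schedule_hz"] = 30 if tracking_params["tracking_window"] > 48 else 15
    if compute_params and "cloud_task_share" in compute_params:
        # More cloud work implies allocating more network bandwidth.
        cfg["bandwidth_mbps"] = 100 * compute_params["cloud_task_share"]
    return cfg
```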
Compared with the prior art, the invention has the following beneficial effects:
The multi-modal information acquisition module collects the vehicle's visual feature data, motion parameter data and environmental context information in multiple dimensions, overcoming the traditional single-sensor limitation. This multi-source fusion lets the system capture vehicle feature information more comprehensively across scenes; even with dense traffic, large illumination changes or occlusion, the complementary multi-dimensional data reduce tracking errors caused by an insufficient single data source. The module also quantitatively identifies the abnormal tracking state and triggers the corresponding modules, responding promptly to anomalies during tracking and preventing an abnormal state from persisting.
The target feature extraction module extracts the vehicle's contour feature parameters, color distribution features and texture detail features and dynamically evaluates the target's trackability, breaking free of the traditional fixed feature-extraction scheme. By considering multiple feature parameters together, the system characterizes the target vehicle more accurately and better distinguishes similar vehicles in crowded scenes. The dynamic evaluation mechanism lets the system adjust the tracking strategy in real time as target features change; when features blur or shift, the system perceives this promptly and supplies a reference for subsequent strategy adjustment, strengthening its adaptability to dynamic changes in target features.
The scene dynamic perception module monitors the system's real-time data flow, local computing load and communication delay parameters and predictively analyzes the scene's dynamic perception efficiency, solving the rigid computing-power allocation of traditional systems. By monitoring key operating parameters in real time, the system can perceive trends in data-processing pressure and communication delay in advance, providing a basis for subsequent computing-power allocation and strategy adjustment. This predictive capability lets the system prepare before data flow surges or computing load peaks, avoiding tracking delay or data loss from insufficient resources and ensuring stable operation in dynamic scenes.
The context association analysis module receives the feature evaluation reference value and the dynamic perception evaluation value, jointly analyzes the system's context association efficiency, and generates edge regulation and cloud reinforcement signals, achieving coordinated linkage among the system's modules. Traditional systems often run their modules in isolation without effective association analysis; by jointly considering target features and scene-dynamics evaluations, this module grasps the system's overall operating state more comprehensively. Based on the generated regulation signals, edge and cloud computing tasks can be allocated sensibly: the edge rapidly handles latency-sensitive tasks while the cloud handles complex data analysis and strategy optimization, improving overall operating efficiency.
The tracking strategy generation module receives the edge regulation and cloud reinforcement signals, performs multi-level optimization strategy analysis, and generates the vehicle's tracking maintenance parameters and computing-power allocation parameters, overcoming the single-strategy weakness of traditional systems. The multi-level strategy lets the system produce targeted tracking parameters and computing-power allocation schemes for different scenes and target states. For example, when target features are clear and the scene is stable, a relatively simple tracking strategy with less computing power suffices; when features are blurred or the scene is complex, more elaborate tracking maintenance parameters are activated and computing-power support is increased, improving resource utilization while preserving tracking precision.
Through the cooperation of these modules, the system achieves accurate and stable tracking and recognition of vehicle targets in complex dynamic environments, effectively remedying traditional systems' shortcomings in multi-source data fusion, feature-extraction adaptability and computing-power allocation flexibility, and improving the overall performance of vehicle target tracking and recognition.
Drawings
FIG. 1 is a schematic diagram of the operation of an adaptive context-aware vehicle target tracking recognition system according to the present invention;
FIG. 2 is a flow chart of anomaly tracking state quantitative identification;
FIG. 3 is a flow chart of dynamic evaluation of object traceability;
FIG. 4 is a flow chart of scene dynamic perceptual efficacy prediction;
FIG. 5 is a flow chart of a context-dependent performance joint analysis.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the possible embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
Referring to FIGS. 1-5, the present invention provides an adaptive context-aware vehicle target tracking and recognition system, which includes a multi-modal information acquisition module, a target feature extraction module, a scene dynamic perception module, a context association analysis module and a tracking strategy generation module. The system is described in detail below:
The multi-modal information acquisition module collects the vehicle's visual feature data, motion parameter data and environmental context information in multiple dimensions to obtain a scene multi-source data set. Based on this data set it quantitatively identifies the abnormal tracking state, generates a tracking early-warning signal, and triggers the scene dynamic perception module and the context association analysis module accordingly.
The target feature extraction module extracts the vehicle's contour feature parameters, color distribution features and texture detail features and dynamically evaluates the target's trackability to obtain a feature evaluation reference value.
The scene dynamic perception module monitors the system's real-time data flow, local computing load and communication delay parameters and predictively analyzes the scene's dynamic perception efficiency to obtain a dynamic perception evaluation value.
The context association analysis module receives the feature evaluation reference value and the dynamic perception evaluation value, jointly analyzes the system's context association efficiency, and generates an edge regulation signal and a cloud reinforcement signal.
The tracking strategy generation module receives the edge regulation signal and the cloud reinforcement signal, performs multi-level optimization strategy analysis, and generates the vehicle's tracking maintenance parameters and computing-power allocation parameters.
Embodiment 1. The multi-modal information acquisition module's quantitative identification of the abnormal tracking state begins with deep parsing of the vehicle's visual feature data.
Focusing on target ambiguity, color offset and contour integrity within the visual features, these are extracted by dedicated image-analysis algorithms. Target ambiguity is obtained by analyzing the sharpness and pixel-gradient variation of vehicle edges in the image and converting them into a quantifiable ambiguity value; color offset is calculated by comparing the RGB color-space distributions of the vehicle across different frames to yield a color offset value; and the contour integrity value is determined from the continuity of the vehicle's contour lines and the proportion of missing segments. The three values are then weighted according to preset proportions to obtain the tracking anomaly contribution degree, which reflects the combined influence of the visual-feature layer on potential tracking anomalies.
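On toy data, the three visual indicators and their weighting can be illustrated as below. This is not the patent's algorithm: a real system would compute blur from image gradients (e.g. Laplacian variance) on actual frames, and the scaling constants and weights here are invented for the example.

```python
def visual_anomaly_contribution(edge_gradients, rgb_prev, rgb_curr,
                                contour_present, weights=(0.4, 0.3, 0.3)):
    """Toy version of the three visual indicators, weighted into one score.

    edge_gradients: per-pixel edge gradient magnitudes (0-255) along the vehicle edge.
    rgb_prev, rgb_curr: mean RGB of the vehicle in two consecutive frames.
    contour_present: 1/0 flags for contour segments found vs. missing.
    All scalings and weights are illustrative assumptions.
    """
    n = len(edge_gradients)
    # Ambiguity: low mean edge gradient -> blurry target (scaled into [0, 1]).
    ambiguity = 1.0 - min(sum(edge_gradients) / n / 255.0, 1.0)
    # Color offset: mean absolute RGB difference between frames, scaled into [0, 1].
    offset = sum(abs(a - b) for a, b in zip(rgb_prev, rgb_curr)) / (3 * 255.0)
    # Contour integrity deficit: share of missing contour segments.
    missing = 1.0 - sum(contour_present) / len(contour_present)
    return weights[0] * ambiguity + weights[1] * offset + weights[2] * missing
```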
While the tracking anomaly contribution degree is being obtained, the system's sensor communication state parameters are analyzed in parallel. The signal interruption frequency, parameter matching degree and data packet loss rate are extracted from the sensors' real-time communication data and converted into a communication interruption value, a parameter matching value and a data packet loss value, which are jointly marked as tracking stability feature values. A tracking stability threshold, determined from the sensors' normal operating parameter range, is preset in the system. Each computed tracking stability feature value is compared with this threshold; when a feature value falls below it, the corresponding sensor is marked as abnormal, indicating a fault in its communication. The number of sensors marked abnormal is counted and divided by the total number of sensors in the system to obtain the communication anomaly rate, which directly characterizes anomalies at the sensor communication layer.
At the same time, the illumination-intensity variation and occlusion frequency in the environmental context information are collected and analyzed. For illumination, a light sensor samples illuminance values at successive time points and the fluctuation range per unit time is computed; for occlusion, consecutive frames are compared to count how often the vehicle is occluded by other objects. The two measurements are combined in a dynamic simulation that accounts for the effect of illumination change on image-acquisition quality and the disruption occlusion causes to recognition continuity, yielding an environmental interference evaluation value that reflects how strongly external environmental factors interfere with the tracking process.
After the tracking-anomaly contribution degree, communication anomaly rate, and environmental interference evaluation value have been computed, each is assigned a weight coefficient set according to how strongly that factor drives tracking anomalies in different scenes. The three values are multiplied by their weights and summed to produce the abnormal-state fusion value. An anomaly threshold, derived from the critical boundary between normal and abnormal tracking in a large body of historical tracking data, is preset in the system. If the fusion value exceeds this threshold, the current tracking state is clearly abnormal: the multi-mode information acquisition module generates a tracking early-warning signal that triggers the scene dynamic sensing module and the context correlation analysis module into their corresponding working states to cope with possible tracking anomalies. Through multi-dimensional data acquisition and comprehensive analysis, the whole process quantitatively identifies abnormal tracking states and provides a reliable basis for subsequent tracking adjustment.
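A minimal sketch of the fusion-and-threshold step, assuming illustrative weights (the patent leaves the coefficients scene-dependent):

```python
def anomaly_fusion_value(contribution, comm_rate, env_interference,
                         weights=(0.5, 0.3, 0.2)):
    """Weighted sum of the three quantified anomaly indicators."""
    w1, w2, w3 = weights
    return w1 * contribution + w2 * comm_rate + w3 * env_interference

def tracking_warning(fusion_value, threshold):
    """True means a tracking early-warning signal is generated,
    triggering the scene dynamic sensing and context correlation
    analysis modules."""
    return fusion_value > threshold
```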
Embodiment 2. When the target feature extraction module dynamically evaluates the trackability of the target, it performs a spatial-distribution analysis of the vehicle's contour feature parameters. Edge detection and contour extraction are applied to the acquired vehicle images to obtain contour data at different angles and distances, including keypoint coordinates, line trends, and overall form. In parallel, the relative positions of the vehicle and surrounding objects in consecutive frames are analyzed to estimate likely occlusion regions and occlusion probabilities, producing an occlusion risk evaluation map that captures the distribution of occlusion risk the vehicle faces during tracking.
A standard tracked-vehicle contour reference distribution map is retrieved from the system database; it is built from a large corpus of vehicle contour data under standard working conditions and covers typical contour characteristics of different vehicle types and poses. The target vehicle's contour-change distribution map is topologically compared against this reference, and values such as the number of matched contour keypoints, line similarity, and overall form fitness are computed to obtain the contour matching degree. The matching degree is then normalized into a preset value range to form the target tracking potential index, which reflects how well the target vehicle's contour matches the standard contour and serves as one of the basic indicators of target trackability.
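The matching-degree computation and its normalization might look as follows; the three sub-scores, their weights, and the clamping behavior are assumptions, since the patent does not specify the topological-comparison formula:

```python
def contour_match_degree(keypoint_matches, total_keypoints,
                         line_similarity, form_fitness,
                         weights=(0.4, 0.3, 0.3)):
    """Combine keypoint match ratio, line similarity, and overall
    form fitness (each already in [0, 1]) into one match degree."""
    w1, w2, w3 = weights
    ratio = keypoint_matches / total_keypoints
    return w1 * ratio + w2 * line_similarity + w3 * form_fitness

def tracking_potential_index(match_degree, lo=0.0, hi=1.0):
    """Normalize the match degree into the preset [lo, hi] range,
    clamping out-of-range inputs."""
    return lo + (hi - lo) * max(0.0, min(1.0, match_degree))
```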
The occlusion frequency and feature-loss proportion are extracted from the vehicle's occlusion risk evaluation map and recorded as occlusion feature values. The occlusion frequency is the number of times the vehicle is occluded per unit time; the feature-loss proportion is the fraction of key features (such as the license plate or vehicle-type markings) covered when occlusion occurs. A standard occlusion risk threshold, set from the maximum occlusion level in historical tracking data that did not impair normal tracking, is retrieved from the system database. The difference between the extracted occlusion feature value and this threshold gives the trackability deviation degree, which measures how far the actual occlusion departs from the standard and thus how strongly occlusion affects target trackability.
The target tracking potential index and the trackability deviation degree are then fused by weighted combination, with the weight ratio set to reflect the actual contributions of contour matching and occlusion in different tracking scenes. This fusion yields the feature evaluation reference value, which integrates contour features and occlusion risk into a single composite measure of target trackability.
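The deviation and fusion steps above, as a hedged sketch; subtracting the weighted deviation (so that heavier occlusion lowers the reference value) is an interpretive choice, and the weights are illustrative:

```python
def trackability_deviation(occlusion_value, standard_threshold):
    """Positive when actual occlusion exceeds the standard tolerable
    level, negative when it stays below it."""
    return occlusion_value - standard_threshold

def feature_evaluation_reference(potential_index, deviation,
                                 weights=(0.6, 0.4)):
    """Fuse contour-based potential with occlusion-based deviation;
    a larger deviation lowers the reference value."""
    w1, w2 = weights
    return w1 * potential_index - w2 * deviation
```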
When the scene dynamic perception module predicts and analyzes the scene's dynamic-perception efficiency, it first extracts the image frame count, motion parameter count, and log storage volume from the system's real-time data stream. The image frame count is the number of vehicle images acquired per unit time; the motion parameter count is the number of parameters describing the vehicle's motion state (such as speed, acceleration, and heading angle); and the log storage volume is the size of the log data generated while the system runs. The three are integrated into a data-traffic parameter set.
Historical tracking data for similar scenes is retrieved from the system database, including past real-time traffic, processing efficiency, and system load. A performance prediction model is built on a dynamic-balance algorithm; it learns the relationship between data traffic and processing performance from the historical data to establish a prediction function. The current traffic parameter set is fed into the model, which outputs the data processing rate, compute utilization, and delay fluctuation rate for the target time interval. The data processing rate is the amount of data processed per unit time; the compute utilization is the ratio of computing resources actually used to total computing resources; and the delay fluctuation rate is the amplitude of communication-delay variation per unit time.
The data processing rate, compute utilization, and delay fluctuation rate are then normalized into a common value range, eliminating the dimensional differences between the parameters. This normalization produces the dynamic perception evaluation value, which comprehensively reflects the system's perception efficiency and processing capability under dynamic change in the current scene.
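A minimal sketch of the normalization-and-combination step; the min-max ranges, the weights, and the choice to invert jitter (lower fluctuation is better) are assumptions not fixed by the text:

```python
def min_max(value, lo, hi):
    """Map a raw metric into [0, 1] given its expected range."""
    return (value - lo) / (hi - lo)

def dynamic_perception_value(rate, util, jitter, ranges,
                             weights=(0.4, 0.3, 0.3)):
    """Normalize processing rate, compute utilization, and delay
    jitter, then combine; jitter is inverted since lower is better."""
    r = min_max(rate, *ranges["rate"])
    u = min_max(util, *ranges["util"])
    j = min_max(jitter, *ranges["jitter"])
    w1, w2, w3 = weights
    return w1 * r + w2 * u + w3 * (1.0 - j)
```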
Embodiment 3. When the context correlation analysis module jointly analyzes the system's context-correlation efficiency, it first retrieves the quantified result of the system's abnormal tracking state, namely the abnormal-state fusion value obtained by the multi-mode information acquisition module from the tracking-anomaly contribution degree, communication anomaly rate, and environmental interference evaluation value. A correction factor is assigned to the fusion value based on how strongly the abnormal state affects the system's context-correlation analysis, with different fusion-value ranges mapping to different factor values. The product of the fusion value and its correction factor gives the tracking-impact correction value, which is used in subsequent analysis to account for the influence of the abnormal state.
Once the tracking-impact correction value is obtained, it is normalized together with the feature evaluation reference value generated by the target feature extraction module and the dynamic perception evaluation value generated by the scene dynamic perception module. Each of the three values is mapped into the interval 0 to 1 by subtracting its minimum possible value and dividing by the difference between its maximum and minimum, removing the effects of differing dimensions and value ranges. The three normalized values are then combined by weighted summation in a preset ratio to obtain the context-association evaluation value, calculated as follows:
E = ω1·F + ω2·D + ω3·T
where E represents the context-association evaluation value, F the normalized feature evaluation reference value, D the normalized dynamic perception evaluation value, and T the normalized tracking-impact correction value; ω1, ω2, and ω3 are the weight coefficients of the feature evaluation reference value, the dynamic perception evaluation value, and the tracking-impact correction value respectively, with ω1 + ω2 + ω3 = 1.
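The weighted summation of the three normalized indicators, plus the threshold routing described next, as a sketch (concrete weights and threshold are application-specific assumptions):

```python
def context_association_value(f_norm, d_norm, t_norm, weights):
    """E = w1*F + w2*D + w3*T with w1 + w2 + w3 = 1; inputs are
    pre-normalized to [0, 1]."""
    w1, w2, w3 = weights
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9
    return w1 * f_norm + w2 * d_norm + w3 * t_norm

def route_signal(eval_value, threshold):
    """At or above the threshold: edge regulation signal; below it:
    cloud reinforcement signal (edge processing alone insufficient)."""
    return "edge" if eval_value >= threshold else "cloud"
```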
A context-association evaluation threshold is preset in the system; it is determined from the system's context-correlation efficiency data in normal operation and represents the minimum efficiency at which edge processing alone can satisfy the tracking requirements. If the context-association evaluation value is greater than or equal to this threshold, an edge regulation signal is generated; if it falls below the threshold, the current context-correlation performance is insufficient and edge processing alone cannot guarantee tracking quality, so a cloud reinforcement signal is generated.
After receiving an edge regulation signal or a cloud reinforcement signal, the tracking strategy generation module starts multi-level optimization-strategy analysis. When an edge regulation signal is captured, a tracking maintenance instruction is triggered. This instruction contains regulation rules for the vehicle-tracking parameters, under which the tracking-window size and feature-matching precision are dynamically adjusted. The window size is set from the target vehicle's speed and the image resolution: when the vehicle moves fast, the window is enlarged so the target cannot escape the tracking range; when it moves slowly or is stationary, the window is shrunk to reduce unnecessary computation. The feature-matching precision follows the clarity of the target features: when features are sharp, precision is raised to improve tracking accuracy; when they are blurred, precision is relaxed to preserve tracking continuity. These adjustments produce the tracking maintenance parameters, which include the adjusted tracking-window size, feature-matching threshold, and related values.
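The two adjustment rules can be sketched as simple threshold policies; the speed thresholds, scaling factors, and clarity cutoff are invented for illustration:

```python
def adjust_tracking_window(base_size, speed, slow=5.0, fast=20.0):
    """Enlarge the window for fast targets so they stay in range,
    shrink it for slow or stationary ones to save computation
    (speed thresholds in px/frame are illustrative)."""
    if speed >= fast:
        return base_size * 1.5
    if speed <= slow:
        return base_size * 0.75
    return base_size

def adjust_match_precision(base_precision, feature_clarity, clear=0.7):
    """Raise precision when features are clear, relax it when they
    are blurred to preserve tracking continuity."""
    return base_precision * (1.2 if feature_clarity >= clear else 0.8)
```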
When a cloud reinforcement signal is captured, a compute allocation instruction is triggered. This instruction contains a plan for distributing system resources, under which the system's local storage capacity and cloud computing tasks are dynamically planned. For local storage, data with low access frequency is migrated to cloud storage according to real-time traffic and the access frequency of historical data, freeing local space for new storage demands, while frequently accessed data stays local to keep read latency low. For cloud computing, tasks are assigned by complexity and real-time requirement: compute-heavy tasks with loose latency requirements are sent to the cloud, while simple, latency-critical tasks remain local. This planning produces the compute allocation parameters, including the local/cloud capacity split and the task-distribution lists.
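The task-placement and storage-split rules above, as a sketch; the tuple shapes, the 0.5 complexity cutoff, and the hot-data threshold are assumptions:

```python
def plan_task_placement(tasks):
    """Each task: (name, complexity in [0,1], realtime_required).
    Complex, non-realtime tasks go to the cloud; realtime or simple
    tasks stay local."""
    cloud, local = [], []
    for name, complexity, realtime in tasks:
        (local if realtime or complexity < 0.5 else cloud).append(name)
    return {"cloud": cloud, "local": local}

def plan_storage_split(entries, hot_threshold):
    """Entries: (key, access_frequency). Cold data is offloaded to
    cloud storage; hot data stays local for fast reads."""
    local = [k for k, f in entries if f >= hot_threshold]
    cloud = [k for k, f in entries if f < hot_threshold]
    return {"local": local, "cloud": cloud}
```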
Through this process, the tracking strategy generation module generates targeted tracking maintenance parameters and compute allocation parameters according to the signal type, dynamically optimizing the vehicle target-tracking process so that the system can adapt to different scene conditions and efficiency requirements.
Embodiment 4. The system includes a rights management and control module that continuously monitors the system's tracking operation permissions in real time. Monitoring starts from the license data parsed during operation control, which records the operator's identity information, authorized operation scope, operation validity period, and so on. Parsing this data yields the operator's permission types, which are divided into levels by operation content, covering permissions such as viewing, modifying, and exporting tracking data. The operator's historical violation records are extracted at the same time, including past behaviors such as unauthorized operations and illicit data exports.
The parsed permission types are matched against the system's preset permitted-operations list, which specifies the allowed operation scope for operators of each identity; for example, a system administrator may perform all operations while an ordinary user may only view part of the tracking data. Comparing the operator's actual permissions with the scope specified in the list yields the permission-type deviation degree, whose value measures the gap between actual and entitled permissions: positive if the actual permissions exceed the list's scope, negative if they fall short.
The frequency of unauthorized operations and the total number of operations in the violation history are counted, and their ratio gives the operation-behavior anomaly index. The unauthorized-operation frequency is the number of times the operator exceeded its permission scope in a past period, and the total operation count is the number of all operations by the same operator in that period; the index directly shows the operator's behavioral compliance.
The permission-type deviation degree and the operation-behavior anomaly index are fused by weighted combination, with the weights reflecting the roles of permission compliance and historical behavioral normality in the assessment. The fusion produces the permission-association evaluation value, a quantitative indicator of the operator's permission compliance and behavioral normality ranging from 0 to 1: the closer to 0, the more normal the permission state; the closer to 1, the higher the permission risk.
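The deviation, anomaly index, and fusion steps as a sketch; clamping the deviation into [0, 1] before fusion, and the equal weights, are assumptions needed to keep the result in the stated 0-to-1 range:

```python
def permission_deviation(actual_level, allowed_level):
    """Positive when the subject holds more authority than the
    permitted-operations list grants, negative when less."""
    return actual_level - allowed_level

def behavior_anomaly_index(violations, total_ops):
    """Ratio of unauthorized operations to all operations."""
    return violations / total_ops if total_ops else 0.0

def permission_association_value(deviation, anomaly_index,
                                 weights=(0.5, 0.5)):
    """Fuse into [0, 1]: closer to 0 = normal permission state,
    closer to 1 = higher permission risk. Deviation is clamped to
    [0, 1] first (illustrative scaling)."""
    w1, w2 = weights
    d = max(0.0, min(1.0, deviation))
    return w1 * d + w2 * anomaly_index
```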
When jointly analyzing the system's context-correlation efficiency, the context correlation analysis module incorporates the generated permission-association evaluation value into its scope, extending the earlier analysis with an operation-permission constraint. Specifically, the permission-association evaluation value is normalized together with the feature evaluation reference value, dynamic perception evaluation value, and tracking-impact correction value so that all indicators share the same value interval. The context-association evaluation value is then recalculated with a weight assigned to the permission term, sized according to how strongly operation permissions affect tracking efficiency in the current scene.
With this constraint analysis, the context-correlation result reflects not only target features, scene dynamics, and tracking anomalies but also the compliance of operation permissions. When the permission-association evaluation value is high, a permission risk exists that can degrade the system's context-correlation efficiency; the edge regulation or cloud reinforcement signal then generated carries an adjusted strength or additional permission constraints. When the value is low, permissions are normal, their constraint on context-correlation efficiency is small, and signal generation is driven mainly by the other evaluation indicators.
By folding operation-permission factors into the context-correlation analysis, the system weighs operational-security factors alongside technical efficiency, so the generated edge regulation and cloud reinforcement signals are more comprehensive: when the tracking strategy is optimized, both technical feasibility and the compliance and safety of the operation process are assured.
Embodiment 5. The system includes a dynamic adjustment module whose core function is to adjust, in real time, the resource-allocation frequency of system operation and maintenance according to the tracking maintenance parameters and compute allocation parameters output by the tracking strategy generation module, and to feed the adjusted parameters back to the multi-mode information acquisition module, forming a closed-loop optimization flow. This lets the system continuously refine its resource configuration from its actual running state and adapt to changing tracking demands across scenes.
After receiving the tracking maintenance parameters, the dynamic adjustment module first parses them to determine whether the tracking-window size is being adjusted. The window size is a key parameter affecting both tracking precision and system resource consumption, and its changes directly determine the range and frequency of sensor data collection. When the parameters contain a window-size adjustment instruction, the module starts the corresponding resource-configuration mechanism and synchronously adjusts the sensor scheduling parameters, which include sampling frequency, working duration, and data-transmission interval. For example, when the tracking window is enlarged, a wider area must be monitored: the system raises the sampling frequency of sensors in the relevant area and shortens their transmission interval to capture target dynamics over the larger range, while extending the working time of sensors in the window's core area to guarantee continuous acquisition of key data. When the window shrinks, the sampling frequency of non-core-area sensors is lowered and their transmission interval lengthened, cutting unnecessary resource consumption and concentrating the saved resources on monitoring the core area.
When the compute allocation parameters received by the dynamic adjustment module involve cloud-task optimization, another resource-configuration flow is triggered that synchronously adjusts the network bandwidth allocation parameters, including the edge-to-cloud bandwidth ratio and the transmission priority of different data types. Cloud-task optimization typically redistributes tasks, for example moving part of the complex computation originally borne by the edge to the cloud, or the reverse. When more tasks go to the cloud, the system raises the edge-to-cloud bandwidth ratio so that large volumes of computation data transfer quickly, and assigns higher transmission priority to task-related data to avoid processing delays from congestion. When cloud tasks decrease and the edge bears more computation, the edge-to-cloud ratio is lowered and more bandwidth is allocated to inter-device communication at the edge, ensuring efficient coordination of edge-side data processing.
After the sensor scheduling or network bandwidth allocation parameters have been adjusted, the dynamic adjustment module packages the adjusted parameters into a new resource-configuration scheme and feeds it to the multi-mode information acquisition module, which then restarts the optimization-verification process. The module collects multi-source data under the new parameters, including visual-feature and motion-parameter data from the adjusted sensors and environmental context information transmitted under the new bandwidth configuration. The system then processes the data through the target feature extraction, scene dynamic perception, context correlation analysis, and tracking strategy generation modules in the normal workflow, producing new tracking maintenance and compute allocation parameters. The dynamic adjustment module compares these with the previous parameters and keeps adjusting while they differ, until the generated parameters stabilize, completing a continuously optimized closed loop.
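The closed-loop convergence criterion ("adjust until parameters stabilize") can be sketched as a fixed-point iteration; the tolerance, round limit, and dict-based parameter representation are assumptions:

```python
def parameters_stable(prev, curr, tol=1e-3):
    """Closed-loop stop condition: every tracked parameter differs
    from its previous value by less than the tolerance."""
    return all(abs(prev[k] - curr[k]) < tol for k in prev)

def closed_loop(initial, regenerate, max_rounds=20):
    """Re-run the pipeline (regenerate stands in for the full
    acquisition-to-strategy cycle) until successive parameter sets
    stabilize; returns the converged parameters."""
    params = initial
    for _ in range(max_rounds):
        new = regenerate(params)
        if parameters_stable(params, new):
            return new
        params = new
    return params
```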
Through this closed-loop optimization flow, the system continuously corrects its resource configuration from parameter feedback during actual operation, keeping tracking maintenance and compute allocation matched to current scene demands and maintaining stable, efficient vehicle target tracking and recognition in complex, changeable environments.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1.一种自适应上下文感知的车辆目标跟踪识别系统,其特征在于,包括:1. An adaptive context-aware vehicle target tracking and recognition system, comprising: 多模态信息采集模块,用于对车辆的视觉特征数据、运动参数数据及环境上下文信息进行多维度采集,得到场景多源数据集合,基于场景多源数据集合对异常跟踪状态进行量化识别,生成跟踪预警信号,依据生成的跟踪预警信号触发场景动态感知模块和上下文关联分析模块;The multimodal information acquisition module is used to collect the vehicle's visual feature data, motion parameter data, and environmental context information in multiple dimensions to obtain a multi-source scene data set. Based on the multi-source scene data set, the module quantitatively identifies abnormal tracking states and generates tracking warning signals. The generated tracking warning signals trigger the scene dynamic perception module and the context association analysis module. 目标特征提取模块,用于对车辆的轮廓特征参数、颜色分布特征及纹理细节特征进行特征提取,对目标的可跟踪性进行动态评测,得到特征评估基准值;The target feature extraction module is used to extract the vehicle's contour feature parameters, color distribution features, and texture detail features, dynamically evaluate the target's trackability, and obtain a feature evaluation benchmark value; 场景动态感知模块,用于对系统的实时数据流量、本地计算负荷及通信延迟参数进行监测,对场景的动态感知效能进行预测分析,得到动态感知评估值;The scene dynamic perception module is used to monitor the system's real-time data traffic, local computing load, and communication delay parameters, and to predict and analyze the scene's dynamic perception performance to obtain a dynamic perception evaluation value. 
上下文关联分析模块,用于接收特征评估基准值和动态感知评估值,对系统的上下文关联效能进行联合分析,生成边缘调控信号和云端强化信号;The context-related analysis module receives the feature evaluation baseline value and the dynamic perception evaluation value, performs a joint analysis on the system's context-related effectiveness, and generates edge control signals and cloud-side reinforcement signals; 跟踪策略生成模块,用于接收边缘调控信号和云端强化信号,进行多级优化策略分析,生成车辆的跟踪维护参数和算力分配参数。The tracking strategy generation module is used to receive edge control signals and cloud reinforcement signals, perform multi-level optimization strategy analysis, and generate vehicle tracking maintenance parameters and computing power allocation parameters. 2.根据权利要求1所述的一种自适应上下文感知的车辆目标跟踪识别系统,其特征在于,所述基于场景多源数据集合对异常跟踪状态进行量化识别,包括:2. The adaptive context-aware vehicle target tracking and recognition system according to claim 1, wherein the quantitative identification of abnormal tracking status based on a multi-source scene data set comprises: 通过对车辆的视觉特征数据中的目标模糊度、颜色偏移量及轮廓完整性进行提取,得到模糊度值、颜色偏移值及轮廓完整性值,提取三者的数值进行加权计算处理,得到跟踪异常贡献度;By extracting the target ambiguity, color offset and contour integrity from the vehicle's visual feature data, the ambiguity value, color offset value and contour integrity value are obtained. The values of the three are extracted and weighted to calculate the contribution of tracking anomaly. 通过对系统的传感器通信状态参数中的信号中断频次、参数匹配度及数据丢包率进行提取,得到通信中断值、参数匹配值及数据丢包值,并将其标记为跟踪稳定性特征值,设置跟踪稳定性阈值,将特征值与阈值进行比较分析,当特征值小于阈值时,则将该传感器标记为异常传感器,统计当前系统中异常传感器数量与总传感器数量的占比,得到通信异常率;同时提取环境上下文信息中的光照强度变化、遮挡发生频次,进行动态模拟计算,得到环境干扰评估值;By extracting the signal interruption frequency, parameter matching degree and data packet loss rate from the system's sensor communication status parameters, the communication interruption value, parameter matching value and data packet loss value are obtained, and they are marked as tracking stability characteristic values. A tracking stability threshold is set, and the characteristic value is compared and analyzed with the threshold. 
When the characteristic value is less than the threshold, the sensor is marked as an abnormal sensor. The ratio of the number of abnormal sensors to the total number of sensors in the current system is counted to obtain the communication anomaly rate. At the same time, the light intensity change and the frequency of occlusion in the environmental context information are extracted, and dynamic simulation calculations are performed to obtain the environmental interference assessment value. 将跟踪异常贡献度、通信异常率及环境干扰评估值的数值分别乘以对应的权重系数并相加,得到异常状态融合值,将其与预设的异常阈值进行比较,若融合值高于阈值,则生成跟踪预警信号。The values of tracking anomaly contribution, communication anomaly rate and environmental interference assessment value are multiplied by the corresponding weight coefficients and added together to obtain the abnormal state fusion value, which is compared with the preset abnormality threshold. If the fusion value is higher than the threshold, a tracking warning signal is generated. 3.根据权利要求1所述的一种自适应上下文感知的车辆目标跟踪识别系统,其特征在于,所述对目标的可跟踪性进行动态评测,包括:3. 
The adaptive context-aware vehicle target tracking and recognition system according to claim 1, wherein the dynamic evaluation of the target's trackability comprises: 通过对车辆的轮廓特征参数进行空间分布解析,生成其对应的轮廓变化分布图及遮挡风险评估图;By analyzing the spatial distribution of the vehicle's contour feature parameters, the corresponding contour change distribution map and occlusion risk assessment map are generated; 从系统数据库中提取标准跟踪的车辆轮廓参考分布图,将目标车辆的轮廓变化分布图与参考分布图进行拓扑对比,计算两者的轮廓匹配度,并进行归一化处理,得到目标跟踪潜力指数;Extract the reference distribution map of the standard tracking vehicle profile from the system database, perform a topological comparison between the target vehicle's profile change distribution map and the reference distribution map, calculate the profile matching degree of the two, and perform normalization processing to obtain the target tracking potential index; 从车辆的遮挡风险评估图中提取遮挡发生频率及特征损失比例,分别标记为遮挡特征值;Extract the occlusion occurrence frequency and feature loss ratio from the vehicle occlusion risk assessment map and mark them as occlusion feature values respectively; 从系统数据库中提取标准遮挡风险阈值,将遮挡特征值与阈值进行差值计算,得到可跟踪性偏离度;Extract the standard occlusion risk threshold from the system database, calculate the difference between the occlusion feature value and the threshold, and obtain the traceability deviation; 将目标跟踪潜力指数和可跟踪性偏离度的数值进行加权融合,得到特征评估基准值。The target tracking potential index and the trackability deviation value are weighted and fused to obtain the feature evaluation benchmark value. 4.根据权利要求1所述的一种自适应上下文感知的车辆目标跟踪识别系统,其特征在于,所述对场景的动态感知效能进行预测分析,包括:4. 
4. The adaptive context-aware vehicle target tracking and recognition system according to claim 1, wherein the predictive analysis of the scene's dynamic perception efficiency comprises: extracting the number of image frames, the quantity of motion parameters, and the log storage volume from the system's real-time data stream to obtain a data traffic parameter set; extracting historical tracking data of similar scenes from the system database, building an efficiency prediction model based on a dynamic balance algorithm, feeding the traffic parameter set into the model, and outputting the data processing rate, computing-power utilization, and latency fluctuation rate within the target time interval; and normalizing the data processing rate, computing-power utilization, and latency fluctuation rate to obtain the dynamic perception evaluation value.

5. The adaptive context-aware vehicle target tracking and recognition system according to claim 1, wherein the joint analysis of the system's context-association effectiveness comprises: retrieving the system's abnormal tracking state quantification results, setting their correction factor, and computing the tracking impact correction value; normalizing the feature evaluation benchmark value, the dynamic perception evaluation value, and the tracking impact correction value to obtain the context-association evaluation value; and setting a context-association evaluation threshold.
If the context-association evaluation value is greater than or equal to the threshold, an edge control signal is generated; if it is less than the threshold, a cloud reinforcement signal is generated.

6. The adaptive context-aware vehicle target tracking and recognition system according to claim 1, wherein the multi-level optimization strategy analysis comprises: if an edge control signal is captured, triggering a tracking maintenance instruction and, according to the instruction, dynamically adjusting the vehicle's tracking window size and feature matching accuracy to generate tracking maintenance parameters; and if a cloud reinforcement signal is captured, triggering a computing-power allocation instruction and, according to the instruction, dynamically planning the system's local storage capacity and cloud computing tasks to generate computing-power allocation parameters.

7. The adaptive context-aware vehicle target tracking and recognition system according to claim 1, further comprising: a permission control module for monitoring the system's tracking operation permissions in real time, extracting the operator identification, the permitted operation scope, and the characteristics of violating operations, and generating a permission-association evaluation value; wherein the context-association analysis module further combines the permission-association evaluation value to perform an operation-permission constraint analysis on the context-association effectiveness.
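The signal routing described in claims 5 and 6 is a threshold branch: a high context-association evaluation keeps optimization at the edge, a low one escalates to the cloud. A minimal sketch, with an assumed preset threshold:

```python
def route_optimization(context_eval, threshold=0.5):
    """Claim 5/6 routing sketch. The threshold is an illustrative preset;
    the claims only state that it is configured in advance."""
    if context_eval >= threshold:
        # Edge control signal -> tracking maintenance instruction
        # (adjust tracking window size, feature matching accuracy)
        return "edge_control"
    # Cloud reinforcement signal -> computing-power allocation instruction
    # (plan local storage capacity, cloud computing tasks)
    return "cloud_reinforcement"
```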
8. The adaptive context-aware vehicle target tracking and recognition system according to claim 7, wherein the real-time monitoring of the system's tracking operation permissions comprises: parsing the system's license data during operation control to obtain the operator's permission type and historical violation records; matching the permission type against a preset operation permission list and calculating the permission-type deviation; computing the difference rate between the frequency of unauthorized operations in the historical violation records and the total number of operations to obtain the abnormal operation behavior index; and performing a weighted fusion of the permission-type deviation and the abnormal operation behavior index to generate the permission-association evaluation value.

9. The adaptive context-aware vehicle target tracking and recognition system according to claim 1, further comprising: a dynamic adjustment module for adjusting the resource configuration frequency of system operation and maintenance in real time according to the generated tracking maintenance parameters and computing-power allocation parameters, and feeding the adjusted parameters back to the multimodal information acquisition module to form a closed-loop optimization process.
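The permission-association evaluation of claim 8 combines two ratios by weighted fusion. The sketch below assumes the deviation is the share of held permission types not on the permitted list and the behavior index is the unauthorized-operation rate; the exact formulas and weights are not fixed by the claim.

```python
def permission_association(permission_types, allowed, violations, total_ops,
                           w_dev=0.5, w_abn=0.5):
    """Sketch of claim 8's permission-association evaluation value.

    The ratio definitions and the fusion weights are illustrative
    assumptions, not taken from the claim text.
    """
    # Permission-type deviation: fraction of held permission types that do
    # not appear on the preset operation permission list
    deviation = sum(p not in allowed for p in permission_types) / len(permission_types)
    # Abnormal operation behavior index: unauthorized operations relative
    # to the total number of operations in the history
    abnormal = violations / total_ops if total_ops else 0.0
    # Weighted fusion -> permission-association evaluation value
    return w_dev * deviation + w_abn * abnormal
```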
10. The adaptive context-aware vehicle target tracking and recognition system according to claim 9, wherein the adjustment process of the dynamic adjustment module comprises: if the tracking maintenance parameters involve adjusting the tracking window size, synchronously adjusting the sensor scheduling parameters in the resource configuration; if the computing-power allocation parameters involve optimizing cloud computing tasks, synchronously adjusting the network bandwidth allocation parameters in the resource configuration; and feeding the adjusted parameters into the multimodal information acquisition module to restart the optimization verification process.
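Claim 10's closed loop is a pair of coupling rules between optimization parameters and resource configuration. The sketch below makes that coupling concrete; the key names and the specific update formulas are hypothetical, since the claim only states that the two configuration entries are adjusted synchronously.

```python
def adjust_resources(maintenance_params, allocation_params, config):
    """Sketch of claim 10's dynamic adjustment rules.

    All dictionary keys and the numeric update rules are illustrative
    assumptions for the coupling the claim describes.
    """
    updated = dict(config)
    # Tracking-window changes drive the sensor scheduling parameters
    if "tracking_window" in maintenance_params:
        updated["sensor_schedule_hz"] = maintenance_params["tracking_window"] * 2
    # Cloud-task planning drives the network bandwidth allocation
    if "cloud_tasks" in allocation_params:
        updated["bandwidth_mbps"] = 10 * len(allocation_params["cloud_tasks"])
    # The updated configuration would be fed back to the multimodal
    # information acquisition module to restart the verification pass
    return updated
```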
CN202510969127.2A 2025-07-15 2025-07-15 Self-adaptive context-aware vehicle target tracking and identifying system Active CN120472404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510969127.2A CN120472404B (en) 2025-07-15 2025-07-15 Self-adaptive context-aware vehicle target tracking and identifying system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510969127.2A CN120472404B (en) 2025-07-15 2025-07-15 Self-adaptive context-aware vehicle target tracking and identifying system

Publications (2)

Publication Number Publication Date
CN120472404A true CN120472404A (en) 2025-08-12
CN120472404B CN120472404B (en) 2025-09-05

Family

ID=96628894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510969127.2A Active CN120472404B (en) 2025-07-15 2025-07-15 Self-adaptive context-aware vehicle target tracking and identifying system

Country Status (1)

Country Link
CN (1) CN120472404B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581863A (en) * 2022-03-03 2022-06-03 广西新发展交通集团有限公司 Method and system for identifying dangerous state of vehicle
CN118537835A (en) * 2024-04-17 2024-08-23 广东工业大学 A traffic dynamic occlusion tracking method and system based on multimodal fusion knowledge graph
CN119575368A (en) * 2025-02-07 2025-03-07 中联德冠科技(北京)有限公司 A UAV multi-target tracking method and system based on vision and millimeter-wave radar information fusion
CN120147978A (en) * 2025-02-05 2025-06-13 南京广播电视系统集成有限公司 A video surveillance system based on image feature analysis

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120726540A (en) * 2025-08-25 2025-09-30 北京长河数智科技有限责任公司 A method for statistics and analysis of dynamic bright spot targets based on visual tracking
CN120726540B (en) * 2025-08-25 2025-11-07 北京长河数智科技有限责任公司 Dynamic bright spot target statistics and analysis method based on visual tracking
CN120778136A (en) * 2025-09-09 2025-10-14 上海博礼智能科技有限公司 Unmanned vehicle dynamic path planning method based on multi-source sensor fusion

Also Published As

Publication number Publication date
CN120472404B (en) 2025-09-05

Similar Documents

Publication Publication Date Title
CN120472404B (en) Self-adaptive context-aware vehicle target tracking and identifying system
CN117312801B (en) AI-based smart city monitoring system and method
CN120088989B (en) Traffic road network multi-mode sensing abnormal event early warning method and system
CN120495963B (en) Security monitoring video analysis method and system based on AI
CN116721549B (en) Traffic flow detection system and detection method
CN120156538B (en) Driving behavior analysis method and system based on multimodal sensor fusion
CN117596755B (en) Intelligent control method and system for street lamp of Internet of things
CN118571034A (en) Sensing, calculating and controlling integrated intelligent control device and traffic light
CN118278841B (en) Large-piece transportation supervision and early warning method, system and device based on driving trajectory
CN117115752A (en) A highway video monitoring method and system
CN119089161A (en) An intelligent vehicle audit method and system based on big data edge computing equipment
CN119151159A (en) Electric vehicle charging load prediction method and system considering traffic flow
Wu et al. A deep learning-based car accident detection approach in video-based traffic surveillance
CN117173913B (en) Traffic control method and system based on traffic flow analysis at different time periods
CN119622317B (en) Robot environment perception data processing method, system, equipment and medium
CN120612819A (en) A traffic control system based on multimodal data fusion
KR102494953B1 (en) On-device real-time traffic signal control system based on deep learning
CN118506290B (en) AI (advanced technology attachment) -recognition-based beam field construction safety quality monitoring method and system
CN120279541A (en) Vehicle paint recognition system based on AI and data analysis
CN120220031A (en) Intelligent video analysis method and storage medium based on large model scheduling
CN119960989A (en) A SLAM system with adaptive energy consumption management and energy consumption optimization method
CN119007131A (en) Road side digital video monitoring method and system based on deep learning
CN118606667A (en) Intelligent traffic scene recognition method driven by spatiotemporal element association and safety requirements
CN120147608B (en) Target detection tracking method combining YOLOv detection algorithm and KCF tracking algorithm
CN118172711B (en) AI big data intelligent management method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant