CN114820724B - Intelligent monitoring method and system for cross-scene tracking

Intelligent monitoring method and system for cross-scene tracking

Info

Publication number
CN114820724B
CN114820724B (application CN202210738826.2A)
Authority
CN
China
Prior art keywords
monitoring
target
feature
environment
background
Prior art date
Legal status
Expired - Fee Related
Application number
CN202210738826.2A
Other languages
Chinese (zh)
Other versions
CN114820724A (en)
Inventor
陈华锋
Current Assignee
Zhejiang Shuren University
Original Assignee
Zhejiang Shuren University
Priority date
Filing date
Publication date
Application filed by Zhejiang Shuren University
Priority to CN202210738826.2A
Publication of CN114820724A
Application granted
Publication of CN114820724B

Classifications

    The G06T codes below fall under G (Physics), G06 (Computing; calculating or counting), G06T (Image data processing or generation, in general); Y02P90/02 falls under Y02P (Climate change mitigation technologies in the production or processing of goods).
    • G06T7/292: Image analysis; analysis of motion; multi-camera tracking
    • G06T7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/10016: Image acquisition modality; video; image sequence
    • G06T2207/20081: Special algorithmic details; training; learning
    • G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T2207/30232: Subject of image; surveillance
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application discloses an intelligent monitoring method and system for cross-scene tracking. Feature extraction is performed on first monitoring information and second monitoring information respectively to obtain a first monitoring feature set and a second monitoring feature set. Environmental features and target features are extracted from each set to obtain a first environmental target feature, a first environmental background feature, a second environmental target feature and a second environmental background feature, on which correlation and differential analysis is performed to obtain an environmental background influence relationship and target environment related features. Third monitoring information is then obtained, and monitoring features are determined from it to monitor the first target. This solves the technical problem of low recognition reliability in cross-scene monitoring, where monitoring recognition depends on comparing target features whose collection is disturbed by changes in monitoring scene parameters. The method tracks and identifies a target across different scenes using the target's effective identifying features, reduces the influence of the acquisition environment parameters in different monitoring ranges, and thereby achieves intelligent cross-scene target tracking.

Description

Intelligent monitoring method and system for cross-scene tracking
Technical Field
The present application relates to the field of intelligent monitoring technologies, and in particular, to an intelligent monitoring method and system for cross-scene tracking.
Background
Video monitoring is an important component of security and protection systems. A traditional monitoring system comprises front-end cameras, transmission cables and a video monitoring platform; the video acquisition part consists of cameras at the observation points and mainly completes the acquisition of video image signals. Such systems are widely used in community monitoring, road monitoring, law enforcement monitoring and the like, and play an important role in maintaining social stability: public surveillance ("sky-eye") networks monitor road traffic safety and pedestrians, and can assist law enforcement departments in case investigation and in acquiring, collecting and extracting clues. At present, the tracking and querying of a monitored target mainly relies on searching by the target person's characteristics. Because the target person's appearance, the background environment, lighting and other factors differ between monitoring ranges, recognition accuracy is low; how to realize intelligent monitoring and recognition of a target across monitoring ranges is therefore of great significance for improving monitoring.
The above-mentioned techniques have been found to have at least the following technical problem:
in the prior art, monitoring identification directly compares collected target features, but in cross-scene monitoring the collection of target features is disturbed by monitoring scene parameters, which causes the technical problem of low reliability of target identification.
Disclosure of Invention
The application aims to provide an intelligent monitoring method and system for cross-scene tracking, so as to solve the technical problem in the prior art that monitoring identification directly compares collected target features while, in cross-scene monitoring, the collection of those features is disturbed by monitoring scene parameters, resulting in low reliability of target identification. The method tracks and identifies a target across different scenes using the target's effective identifying features, determines targeted tracking features for each scene, tracks effectively by combining the relationship between the scene background and the target's appearance features, reduces the influence of the environment parameters collected in different monitoring ranges, and thereby achieves intelligent cross-scene target tracking.
In view of the foregoing problems, the present application provides an intelligent monitoring method and system for cross-scene tracking.
In a first aspect, the present application provides an intelligent monitoring method for cross-scene tracking, the method including: obtaining first monitoring information, wherein the first monitoring information includes a first target; obtaining second monitoring information based on the first target; performing feature extraction on the first monitoring information and the second monitoring information respectively to obtain a first monitoring feature set and a second monitoring feature set, where the first monitoring feature set contains the features of the first monitoring information and the second monitoring feature set contains the features of the second monitoring information; performing environmental feature extraction and target feature extraction on the two feature sets to obtain a first environmental target feature, a first environmental background feature, a second environmental target feature and a second environmental background feature; performing correlation and differential analysis on these features to obtain an environmental background influence relationship and target environment related features; obtaining third monitoring information and determining monitoring target information from it; and determining monitoring features from the monitoring target information based on the environmental background influence relationship and the target environment related features, and monitoring the first target based on the monitoring features.
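The sequence of steps above can be sketched as a small program. Everything here is illustrative: the function names and the toy string-based "features" are assumptions for the sketch and are not part of the patent's disclosure.

```python
# Hypothetical control-flow sketch of the claimed method; function names
# and the toy string-based features are illustrative only.

def extract_features(monitoring_info):
    # Stand-in for the deep-convolution feature extraction of step S300.
    return set(monitoring_info.split())

def split_env_and_target(feature_set, target):
    # Step S400: separate target features from environmental background.
    target_feats = {f for f in feature_set if f.startswith(target)}
    return target_feats, feature_set - target_feats

def track_target(first_info, second_info, third_info, target="t1"):
    t1_feats, b1 = split_env_and_target(extract_features(first_info), target)
    t2_feats, b2 = split_env_and_target(extract_features(second_info), target)
    # Step S500: correlation (target features shared across both scenes)
    # and differential analysis (background features that changed).
    related = t1_feats & t2_feats   # target environment related features
    influence = b1 ^ b2             # environment background influence
    # Final steps: monitor the target in the third scene using only the
    # features that survived the cross-scene analysis.
    return extract_features(third_info) & related, influence

hits, influence = track_target(
    "t1_gait t1_coat lamp road",    # first monitoring information
    "t1_gait t1_hat tree road",     # second monitoring information
    "t1_gait crowd")                # third monitoring information
```

The set intersection keeps only target features that are stable across scenes, while the symmetric difference of the backgrounds captures what changed between them, which is the intuition behind the claimed correlation and differential analysis.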
In another aspect, the present application further provides a cross-scene tracking intelligent monitoring system, configured to execute the cross-scene tracking intelligent monitoring method according to the first aspect, where the system includes: a first monitoring information obtaining unit, configured to obtain first monitoring information, where the first monitoring information includes a first target;
a second monitoring information obtaining unit configured to obtain second monitoring information based on the first target;
a feature extraction unit, configured to perform feature extraction on the first monitoring information and the second monitoring information respectively to obtain a first monitoring feature set and a second monitoring feature set, where the first monitoring feature set is a feature of the first monitoring information, and the second monitoring feature set is a feature of the second monitoring information;
the scene interaction analysis unit is used for respectively carrying out environmental feature extraction and target feature extraction on the first monitoring feature set and the second monitoring feature set to obtain a first environmental target feature, a first environmental background feature, a second environmental target feature and a second environmental background feature;
the monitoring target determining unit is used for carrying out correlation and differential analysis according to the first environment target feature, the second environment target feature, the first environment background feature and the second environment background feature to obtain an environment background influence relation and a target environment related feature;
a third monitoring information obtaining unit, configured to obtain third monitoring information, and determine monitoring target information according to the third monitoring information;
and the tracking and monitoring unit is used for determining monitoring features according to the monitoring target information based on the environmental background influence relationship and the target environment related features, and monitoring the first target based on the monitoring features.
In a third aspect, the present application further provides an intelligent monitoring system for cross-scene tracking, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the program.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
the application discloses an intelligent monitoring method and system for cross-scene tracking, which comprises the steps of obtaining first monitoring information, wherein the first monitoring information comprises a first target; obtaining second monitoring information based on the first target; respectively extracting the characteristics of the first monitoring information and the second monitoring information to obtain a first monitoring characteristic set and a second monitoring characteristic set, wherein the first monitoring characteristic set is the characteristics of the first monitoring information, and the second monitoring characteristic set is the characteristics of the second monitoring information; respectively carrying out environmental feature extraction and target feature extraction on the first monitoring feature set and the second monitoring feature set to obtain a first environmental target feature, a first environmental background feature, a second environmental target feature and a second environmental background feature; performing correlation and differential analysis according to the first environment target feature, the second environment target feature, the first environment background feature and the second environment background feature to obtain an environment background influence relation and a target environment correlation feature; acquiring third monitoring information, and determining monitoring target information according to the third monitoring information; and determining monitoring characteristics according to the monitoring target information based on the environmental parameter influence relation and the target environment related characteristics, and monitoring the first target based on the monitoring characteristics. 
The method tracks and identifies a target across different scenes using the target's effective identifying features, determines targeted tracking features for each scene, tracks effectively by combining the relationship between the scene background and the target's appearance features, reduces the influence of the environment parameters collected in different monitoring ranges, and thereby achieves intelligent cross-scene target tracking. This solves the prior-art technical problem that monitoring identification directly compares collected target features while their collection in cross-scene monitoring is disturbed by monitoring scene parameters, resulting in low reliability of target identification.
The foregoing description is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer so that they can be implemented according to the contents of the specification, and to make the above and other objects, features and advantages of the present application more readily understandable, a detailed description of the present application follows.
Drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. The following drawings are merely exemplary; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an intelligent monitoring method for cross-scene tracking according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an intelligent monitoring system for cross-scene tracking according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide an intelligent monitoring method and system for cross-scene tracking, which solve the prior-art technical problem that monitoring identification directly compares collected target features while the collection of those features in cross-scene monitoring is disturbed by monitoring scene parameters, resulting in low reliability of target identification.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the present application, and that the present application is not limited by the example embodiments described herein. All other embodiments obtained by a person skilled in the art from the embodiments of the present application without creative effort shall fall within the protection scope of the present application. It should further be noted that, for convenience of description, the drawings show only the portions relevant to the present application rather than the entire contents.
Embodiment One
Referring to fig. 1, an embodiment of the present application provides an intelligent monitoring method for cross-scene tracking, where the method includes:
step S100: first monitoring information is obtained, wherein the first monitoring information comprises a first target.
Specifically, the cross-scene tracking intelligent monitoring method is applied in a monitoring system that is data-connected to a plurality of monitoring cameras and synchronously acquires the pictures they capture. The first monitoring information is the monitoring information acquired by any camera in the monitoring system within its acquisition range, and it contains a first target, i.e. the target to be monitored, such as a target person or a target vehicle.
Step S200: and obtaining second monitoring information based on the first target.
Specifically, the second monitoring information is monitoring information from a camera in the monitoring system different from the camera corresponding to the first monitoring information, i.e. a monitoring picture of another monitoring area in which the first target is also captured; it is thus monitoring information of the first target acquired in a different camera's monitoring range. The cameras producing the first and second monitoring information may be monitoring cameras on two consecutive road sections in the same area, or cameras at different places in the same city, and the acquired pictures may be continuous pictures of the first target on the same day or pictures from different periods.
Step S300: respectively extracting the characteristics of the first monitoring information and the second monitoring information to obtain a first monitoring characteristic set and a second monitoring characteristic set, wherein the first monitoring characteristic set is the characteristics of the first monitoring information, and the second monitoring characteristic set is the characteristics of the second monitoring information.
Further, performing feature extraction on the first monitoring information and the second monitoring information respectively to obtain the first monitoring feature set and the second monitoring feature set includes: obtaining historical monitoring data, wherein the historical monitoring data includes monitoring target identification information; determining a training data set from the historical monitoring data, and training a deep convolution model to convergence on the training data set; preprocessing the first monitoring information and the second monitoring information; inputting the preprocessed first monitoring information and second monitoring information into the deep convolution model for image feature processing to obtain a first monitoring feature and a second monitoring feature respectively; and constructing the first monitoring feature set and the second monitoring feature set from the first monitoring feature and the second monitoring feature.
Specifically, feature extraction is performed on the first monitoring information and the second monitoring information respectively, covering all features in the monitoring picture, such as the target's appearance, the background, expression features, lighting features and the like. The picture information is converted into data: the features in the monitoring picture are expressed with digital character codes, and different picture parameter features are identified by different code numbers. The feature extraction results of the first monitoring information are aggregated into the first monitoring feature set, and those of the second monitoring information into the second monitoring feature set. The sets reflect all the information content of the monitoring pictures, and with this data representation, image analysis of the picture contents can be performed.
Optionally, a deep convolution model is used for feature extraction from the monitoring information. The deep convolution model comprises an input layer, convolution layers, filter functions and an output layer, with multiple filter functions arranged between the input layer, the convolution layers and the output layer; the convolution layers perform convolution processing on the input monitoring picture information. Convolution is a widely used technique in signal processing, image processing and related fields, and the model architecture of a convolutional neural network in deep learning is based on it. In image processing, convolution is defined as the integral of the product of two functions after one of them is flipped and shifted; here the two functions are the sliding input and the filter. The filter weights are learned in the training stage, and after training the learned filter behaves like a flipped filter function. Image features are extracted with this convolution technique, and the training data are used to train the neural network, realizing the training and learning of the convolutional network model. The deep convolution model is this convolutional neural network model, and feature learning on the training data enables effective recognition of image features.
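The "flip-and-slide" definition of convolution described above can be shown in a few lines of pure Python. This is a minimal illustration of the operation itself (a 'valid' convolution with no padding or stride), not the patent's model.

```python
# Minimal 2-D "flip-and-slide" convolution: the kernel is flipped in both
# axes, then shifted across the image and multiplied, as the definition
# in the text requires. Output is 'valid' size (no padding, stride 1).

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    # Flip the kernel in both axes (the "inverting" in the definition).
    flipped = [row[::-1] for row in kernel[::-1]]
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * flipped[u][v]
            row.append(acc)
        out.append(row)
    return out

# A horizontal-difference kernel responds where intensity changes
# between columns, i.e. at a vertical edge.
edges = conv2d(
    [[0, 0, 9, 9],
     [0, 0, 9, 9],
     [0, 0, 9, 9]],
    [[1, -1]])
```

In a real convolutional layer the kernel values would be the learned filter weights rather than a hand-written edge detector, and many such filters would be stacked per layer.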
Model training is performed using historical monitoring pictures of the monitoring system as training data. The training data include identification labels of historical targets, such as labels for target appearance, background, expression features and lighting features. Supervised learning of feature identification is performed with the labelled training data to learn the feature-extraction mapping; the training results are verified and converged against the label information, and training ends when the output is consistent with the labels of the verification data. Feature extraction can then be performed on input monitoring information according to the processing relationship determined by training, yielding the feature extraction result of the input data.
To further improve the reliability of feature extraction, the input monitoring information is preprocessed before feature extraction. The preprocessing mainly denoises the monitoring picture to improve its definition, so as to reduce the loss of feature accuracy caused by interference. The denoised monitoring information is input into the deep convolution model for image feature recognition and extraction, and the feature results of each piece of monitoring information are combined into the corresponding feature set: the feature results of the first monitoring information are summarized into the first monitoring feature set, and the feature extraction results of the second monitoring information into the second monitoring feature set. Each monitoring feature set includes all features in the monitoring information, including target features, background features, lighting features and the like.
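The patent names denoising as the preprocessing step but does not fix a particular filter; a 3x3 median filter is one plausible, classical choice for removing impulse noise from a monitoring picture, sketched here under that assumption.

```python
# Sketch of the denoising preprocessing: a 3x3 median filter, chosen as
# one plausible denoiser (the patent does not specify a filter). Border
# pixels are left unchanged for simplicity.

def median_filter(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(
                image[i + di][j + dj]
                for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]   # median of the 9 neighbours
    return out

noisy = [[5, 5, 5, 5],
         [5, 99, 5, 5],   # single-pixel impulse noise
         [5, 5, 5, 5]]
clean = median_filter(noisy)
```

The median replaces the 99 outlier with the neighbourhood value 5 while leaving uniform regions untouched, which is exactly the property that reduces interference before feature extraction.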
Step S400: and respectively carrying out environmental feature extraction and target feature extraction on the first monitoring feature set and the second monitoring feature set to obtain a first environmental target feature, a first environmental background feature, a second environmental target feature and a second environmental background feature.
Specifically, the first monitoring information and the second monitoring information come from different monitoring ranges, in which the background, lighting, target appearance and the like change with factors such as the acquisition environment of each area. To identify the target under different acquisition environments, the features in the different pieces of monitoring information are analysed and compared to find the relationship between the target and the acquisition environment, since the identifying features of the target are influenced and changed by the acquisition environment. The analysis result of the monitoring target's environmental features describes the influence relationship between the features used to identify the target and the acquisition environment features, and this analysed influence relationship is used to achieve effective identification of the target.
Optionally, when producing the environmental feature analysis result for the monitoring target, the environmental features of each piece of monitoring information are first identified: the first environmental background feature and the first environmental target feature are the background and target features extracted from the first monitoring information, and correspondingly the second environmental background feature and the second environmental target feature are those of the second monitoring information. The background features are the features other than the target features; the target features are the features of the target itself, which are influenced by the acquisition environment, so the degree of influence of the acquisition environment's background features must be reduced. The comprehensive influence is calculated, and then, for each background influence characteristic (for example, a lighting change that alters the target features), the collected target features are denoised using the influence of the background features on the target features: the contribution of the background features is removed and the target features are processed accordingly to determine their real situation. The denoised target features are the target features adjusted in view of the influence relationship of background factors; that is, the target features disturbed by the background features are restored with the interference factors taken into account.
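The restoration step above can be made concrete with a deliberately simple linear model: observed feature = true feature + weight x background factor, so the true feature is recovered by subtraction. The linear form, the weights, and the feature names are all assumptions for illustration; the patent does not commit to a specific model.

```python
# Hypothetical linear model of background influence on collected target
# features: observed = true + weight * background. The weights would come
# from the influence-relationship analysis; all numbers are illustrative.

def denoise_target(observed, background, weights):
    # Remove each background factor's estimated contribution from the
    # corresponding observed target feature.
    return [o - w * b for o, b, w in zip(observed, background, weights)]

observed   = [0.9, 0.4, 0.7]   # target features as collected in one scene
background = [0.5, 0.0, 0.2]   # e.g. brightness, crowd density, motion blur
weights    = [0.6, 0.3, 0.5]   # assumed learned influence of each factor
restored = denoise_target(observed, background, weights)
```

With the influence removed, the restored features describe the target rather than the scene, so they can be compared across cameras with different lighting or crowding.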
Step S500: and performing correlation and differential analysis according to the first environment target characteristic, the second environment target characteristic, the first environment background characteristic and the second environment background characteristic to obtain an environment background influence relation and a target environment correlation characteristic.
Further, performing correlation and differential analysis according to the first environment target feature, the second environment target feature, the first environment background feature and the second environment background feature to obtain the environment background influence relationship and the target environment related features includes: performing scene consistency feature analysis on the first environment target feature and the first environment background feature to obtain a first target environment related feature and a first environment background influence relationship; performing scene consistency feature analysis on the second environment target feature and the second environment background feature to obtain a second target environment related feature and a second environment background influence relationship; performing correlation analysis on the first environment target feature and the second environment target feature to obtain a third target environment related feature; and performing differential analysis on the first environment background feature and the second environment background feature to obtain a third environment background influence relationship. The environment background influence relationship comprises the first, second and third environment background influence relationships, and the target environment related features comprise the first, second and third target environment related features.
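One concrete reading of the correlation analysis between the two scenes' target features is a Pearson correlation: a target feature whose measurements stay correlated across environments is a candidate target environment related feature. The Pearson choice, the gait-period data and the 0.9 threshold are illustrative assumptions, not specified by the patent.

```python
# Pearson correlation between a target feature's measurements in two
# scenes; a high correlation suggests the feature is scene-invariant.
# Threshold and data are assumptions for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-frame gait-period measurements of the same target in two scenes;
# scene 2 is uniformly offset (e.g. by a camera frame-rate difference).
scene1 = [1.0, 1.2, 1.1, 1.3, 1.2]
scene2 = [0.9, 1.1, 1.0, 1.2, 1.1]
r = pearson(scene1, scene2)
stable = r > 0.9   # treat the feature as a target environment related feature
```

Because correlation ignores a constant offset, the feature still registers as stable even though its absolute values shift between scenes, which matches the idea of features with "the same directional correlation" across environments.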
Specifically, the first monitoring information and the second monitoring information are monitoring information that has been analysed previously, and the records of historical monitoring analysis guide subsequent monitoring. The different pieces of monitoring information are data acquired by different cameras in different environments, including dim or bright light, high or low personnel density, and fast, slow or stationary movement. The feature changes of each environment and of the target are compared transversely and longitudinally, for example the same movement speed under different scene lighting, or similar scenes with different personnel densities. The mutual influence between the background and the target features, and the changes in the target's feature expression in different environments, are cross-compared to determine how the monitoring picture changes under different background parameters; this is taken as the feature background influence relationship, and extensive analysis of the background parameters in each piece of monitoring and of the target's presentation in the monitoring picture determines the degree of influence of each parameter on the target. Similarly, the target's performance features in different states within the same monitoring environment are analysed for environment related features: the target environment related features are the features of the target that show the same directional correlation across different environments and are used to identify the target's specific feature performance.
Step S600: and acquiring third monitoring information, and determining monitoring target information according to the third monitoring information.
Further, obtaining third monitoring information, and determining monitoring target information according to the third monitoring information, includes: acquiring a third monitoring scene and third monitoring background characteristics according to the third monitoring information; determining the characteristics of a monitoring target scene according to the third monitoring scene; performing correlation analysis according to the third monitoring background characteristic, the first environment background characteristic and the second environment background characteristic to obtain a correlation background characteristic; performing matching analysis based on the relevant background features and the environmental background influence relationship to determine a matching feature background influence relationship; and performing correlation analysis according to the background influence relation of the matching features and the relevant features of the target environment to obtain the relevant features of the matching environment, and determining the monitoring target information based on the relevant features of the matching environment.
Specifically, the third monitoring information is the monitoring in which the target needs to be searched for. Scene feature analysis is performed on the application scene of the third monitoring information to determine the target environment related features that the target expresses in that scene. For example, if the scene of the third monitoring information is a sports field, the monitoring target is a moving target, and the target's motion features are identified according to how the target presents itself there; if the scene is a street, the walking features and clothing features of the target person are identified. In this way, scene-specific monitoring target information is determined for each different scene, the monitoring target information being the target's expression features combined with the scene.
The background features in the third monitoring information are then matched against the various historical background parameters to find the corresponding parameter influence relationships. For example, if the background of the third monitoring information is dark, the background parameters are matched, according to the degree of darkness, against each analysis result in the feature background influence relationship to find the influence relationship whose darkness is closest. All parameters in the third monitoring information are matched in this way to find every matching influence relationship, each matched feature background influence relationship is used to predict how the target features will appear, and the behavior features in the monitoring target information are combined to determine which features of the target should be monitored in the target monitoring environment.
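As a minimal sketch of this parameter matching (the parameter tuples, the distance measure, and the recorded relations are all illustrative assumptions), each historical background can be keyed by its parameters and the closest one retrieved for the current scene:

```python
def match_influence_relation(current_bg, history):
    # Nearest historical background in parameter space (squared distance);
    # parameter tuples here are (brightness, personnel_density).
    def dist(params):
        return sum((a - b) ** 2 for a, b in zip(params, current_bg))
    best = min(history, key=dist)
    return history[best]

# Hypothetical influence relationships learned from earlier monitoring.
history = {
    (0.2, 0.8): "dim, crowded: rely on gait and silhouette, not colour",
    (0.9, 0.1): "bright, sparse: clothing colour is reliable",
}
relation = match_influence_relation((0.25, 0.7), history)
```

A dark, crowded third scene thus retrieves the influence relationship recorded for the closest dark, crowded historical background.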
Step S700: and determining monitoring characteristics according to the monitoring target information based on the environmental background influence relation and the target environment related characteristics, and monitoring the first target based on the monitoring characteristics.
Further, determining monitoring characteristics according to the monitoring target information based on the environmental background influence relationship and the target environment related characteristics includes: analyzing the characteristic influence according to the matched characteristic background influence relation and the third monitoring background characteristic to obtain the characteristic information of the influence target; analyzing the characteristic core degree according to the influence target characteristic information and the monitoring target scene characteristics to determine the core target characteristics; and performing characteristic environment performance matching on the core target characteristics and the target environment related characteristics, determining scene matching characteristic performance, and taking the scene matching characteristic performance as the monitoring characteristics.
Specifically, using the influence on the target's monitoring picture exerted by the background parameters, as determined by the environmental background influence relationship, the features with identification value in the target environment related features are found, namely the features that change little across environments, such as limb features; these are only slightly affected in the third monitoring scene and allow the target's identification features to be analyzed and determined quickly. They are taken as the core target features: the features that combine the target's expression features in the monitoring scene with the minimum degree of change under the background influence relationship, and that can effectively identify the target. Using these features as the monitoring features, a traversal comparison is performed on the third monitoring information to find the targets that match the monitoring features. Users may be numbered according to their features, and a user database may be built to store the monitored targets, so that each user's relationships can be managed or classified. Effective cross-scene tracking of the target is thus achieved, the target's identification features can be used for fast intelligent searching, and the target tracking efficiency is improved. This solves the technical problem in the prior art that monitoring identification directly compares the collected target features, while in cross-scene monitoring the collection of target features is disturbed by the parameters of the monitored scene, resulting in low reliability of target identification.
The method achieves the technical effects of tracking and identifying the target in different scenes by using its effective identification features, determining targeted tracking features for each different scene, tracking effectively by combining the relationship between the scene background and the target's expression features, reducing the influence of the acquisition environment parameters across different monitoring ranges, and realizing intelligent cross-scene target tracking.
Further, the preprocessing the first monitoring information and the second monitoring information includes: acquiring a first monitoring device parameter and a second monitoring device parameter, wherein the first monitoring device parameter corresponds to the first monitoring information, and the second monitoring device parameter corresponds to the second monitoring information; obtaining a first monitoring picture threshold value based on the first monitoring equipment parameter; determining a first preprocessing effect range according to the first monitoring picture threshold value, and preprocessing the first monitoring information based on the first preprocessing effect range; and obtaining a second monitoring picture threshold value based on the second monitoring equipment parameter, determining a second preprocessing effect range, and preprocessing the second monitoring information based on the second preprocessing effect range.
Specifically, when the first monitoring information and the second monitoring information are input into the deep convolution model for feature extraction and analysis, the monitoring information needs to be preprocessed to ensure the validity of the monitoring picture and to prevent noise and signal interference in the monitoring picture from undermining the reliability of feature extraction.
When the monitoring information is preprocessed, the influence of the acquisition parameters of each monitoring camera is taken into account: all monitoring information cannot be processed with one uniform preprocessing requirement, so preprocessing is targeted at the parameter characteristics of the data acquired by each camera. For example, if a camera's pixel count is high, the pixel count and resolution of its picture are high. Whether picture noise interference exists in the current monitoring information is evaluated according to the corresponding monitoring device parameters, and noise reduction is performed on any interference found. The number of acquired pictures is also considered during noise reduction: if many pictures are acquired, pictures with poor definition and high noise can be removed by a noise reduction method, such as principal component analysis, DBSCAN, or isolation forests, and interference is reduced by noise reduction and dimensionality reduction. If only one picture is acquired, the noise reduction and dimensionality reduction steps are skipped, and the picture can be sharpened or processed by other means to improve its definition.
The monitoring picture threshold is a preprocessing range set to correspond to the acquisition parameters of the monitoring camera, and denoising is performed on pictures affected by low definition, signal noise, and the like. The shooting parameters of each camera are used to preprocess the corresponding monitoring picture, which improves the reliability of feature extraction while reducing interference and avoids the effect of noise interference on the feature results.
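A minimal sketch of this two-branch preprocessing, assuming a per-camera noise threshold derived from the device parameters (the noise proxy, the unsharp-mask sharpening, and all constants are illustrative assumptions, not the disclosed implementation):

```python
import numpy as np

def preprocess(frames, noise_threshold):
    # Crude noise proxy: mean absolute difference between vertically
    # neighbouring pixels (flat frames score low, speckled frames high).
    def noise_level(img):
        return float(np.abs(np.diff(img.astype(float), axis=0)).mean())

    if len(frames) > 1:
        # Many frames: drop those exceeding the camera-specific threshold.
        kept = [f for f in frames if noise_level(f) <= noise_threshold]
        return kept or frames  # never discard everything
    # Single frame: skip removal and sharpen instead (simple unsharp mask).
    img = frames[0].astype(float)
    blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return [np.clip(img + 0.5 * (img - blurred), 0, 255)]

clean = np.full((4, 4), 100.0)
noisy = np.tile(np.array([[0.0], [255.0]]), (2, 4))  # alternating rows
kept = preprocess([clean, noisy], noise_threshold=10.0)
```

With several frames the noisy one is removed; called with a single frame, the function falls through to the sharpening branch, mirroring the branch structure described above.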
Further, the method further comprises: analyzing a monitoring position and monitoring time according to the third monitoring information, and determining associated monitoring information, wherein the associated monitoring information is monitoring information corresponding to a last scene of the third monitoring information; extracting relevant features of a first target according to the third monitoring information and the associated monitoring information to obtain relevant features of a scene; and adding the scene related features as auxiliary features into the monitoring features of the next scene.
Specifically, the monitoring position is determined according to the position of the monitoring camera that produced the third monitoring information, and the previous monitoring scene of the third monitoring information is determined using the monitoring position, the monitoring time in the previous monitoring information, and a prediction of the target's moving direction. For example, from the target's walking direction and speed on the road in the second monitoring information, it can be predicted that the target will move to the position of the third monitoring; this is matched against the monitoring time to determine the target features in the previous scene. Because the two scenes are adjacent in time and position, the target features of the previous scene are highly similar to the target features in the third monitoring information, so the target features of the previous scene are taken as auxiliary identification features of the third monitoring information and used for target identification together with the monitoring features; comprehensive analysis of the coherent features and the core features improves the identification efficiency.
Optionally, a prediction is made by combining the position distance difference and the time difference between the associated monitoring information and the third monitoring information. When both differences are small and meet the reference standard, the auxiliary features are added for auxiliary monitoring; if the differences are large, auxiliary monitoring is not performed, or the reliability of the auxiliary monitoring features is reduced, and the identification result is finally determined according to that reliability. This realizes identification analysis in which the monitoring result is identified according to different levels of different features, meets the requirements of different kinds of monitoring, and extends the monitoring range and the identification effect.
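This optional reliability weighting can be sketched as follows (the reference thresholds and the linear weighting are assumptions chosen for illustration; the text only requires that small differences enable auxiliary monitoring and large differences disable or down-weight it):

```python
def auxiliary_weight(distance_diff, time_diff, max_dist=200.0, max_time=60.0):
    # Outside the reference standard: skip auxiliary monitoring entirely.
    if distance_diff > max_dist or time_diff > max_time:
        return 0.0
    # Inside it: reliability shrinks linearly as either difference grows.
    return (1 - distance_diff / max_dist) * (1 - time_diff / max_time)

w = auxiliary_weight(50.0, 15.0)     # adjacent camera, 15 s later -> trusted
skip = auxiliary_weight(500.0, 15.0) # far camera -> auxiliary features dropped
```

The returned weight can then scale the contribution of the previous scene's features when they are combined with the core monitoring features.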
Example two
Based on the same inventive concept as the foregoing intelligent monitoring method for cross-scene tracking, the present invention further provides a cross-scene tracking intelligent monitoring system, please refer to fig. 2, where the system includes:
a first monitoring information obtaining unit, configured to obtain first monitoring information, where the first monitoring information includes a first target;
a second monitoring information obtaining unit configured to obtain second monitoring information based on the first target;
a feature extraction unit, configured to perform feature extraction on the first monitoring information and the second monitoring information respectively to obtain a first monitoring feature set and a second monitoring feature set, where the first monitoring feature set is a feature of the first monitoring information, and the second monitoring feature set is a feature of the second monitoring information;
the scene interaction analysis unit is used for respectively carrying out environmental feature extraction and target feature extraction on the first monitoring feature set and the second monitoring feature set to obtain a first environmental target feature, a first environmental background feature, a second environmental target feature and a second environmental background feature;
the monitoring target determining unit is used for carrying out correlation and differential analysis according to the first environment target feature, the second environment target feature, the first environment background feature and the second environment background feature to obtain an environment background influence relation and a target environment related feature;
a third monitoring information obtaining unit, configured to obtain third monitoring information, and determine monitoring target information according to the third monitoring information;
and the tracking and monitoring unit is used for determining monitoring characteristics according to the monitoring target information based on the environmental background influence relation and the target environment related characteristics, and monitoring the first target based on the monitoring characteristics.
Further, the system further comprises: the feature extraction unit is further configured to:
obtaining historical monitoring data, wherein the historical monitoring data comprises monitoring target identification information;
determining a training data set by using the historical monitoring data, and carrying out deep convolution model training convergence through the training data set to obtain a deep convolution model;
preprocessing the first monitoring information and the second monitoring information;
inputting the first monitoring information and the second monitoring information which are preprocessed into the depth convolution model respectively for image feature processing to obtain a first monitoring feature and a second monitoring feature respectively;
and constructing the first monitoring feature set and the second monitoring feature set based on the first monitoring feature and the second monitoring feature.
Further, the feature extraction unit is further configured to:
acquiring a first monitoring device parameter and a second monitoring device parameter, wherein the first monitoring device parameter corresponds to the first monitoring information, and the second monitoring device parameter corresponds to the second monitoring information;
obtaining a first monitoring picture threshold value based on the first monitoring equipment parameter;
determining a first preprocessing effect range according to the first monitoring picture threshold value, and preprocessing the first monitoring information based on the first preprocessing effect range;
and obtaining a second monitoring picture threshold value based on the second monitoring equipment parameter, determining a second preprocessing effect range, and preprocessing the second monitoring information based on the second preprocessing effect range.
Further, the scene interaction analysis unit is further configured to:
performing scene consistency feature analysis according to the first environment target feature and the first environment background feature to obtain a first target environment related feature and a first environment background influence relationship;
performing scene consistency feature analysis according to the second environment target feature and the second environment background feature to obtain a second target environment related feature and a second environment background influence relationship;
performing correlation analysis according to the first environment target characteristic and the second environment target characteristic to obtain a third target environment correlation characteristic;
and performing differential analysis according to the first environmental background characteristic and the second environmental background characteristic to obtain a third environmental background influence relationship, wherein the environmental background influence relationship comprises a first environmental background influence relationship, a second environmental background influence relationship and a third environmental background influence relationship, and the target environment related characteristic comprises a first target environment related characteristic, a second target environment related characteristic and a third target environment related characteristic.
Further, the third monitoring information obtaining unit is further configured to:
acquiring a third monitoring scene and third monitoring background characteristics according to the third monitoring information;
determining the characteristics of a monitoring target scene according to the third monitoring scene;
performing correlation analysis according to the third monitoring background characteristic, the first environment background characteristic and the second environment background characteristic to obtain a correlation background characteristic;
performing matching analysis based on the relevant background features and the environmental background influence relationship to determine a matching feature background influence relationship;
and performing correlation analysis according to the background influence relation of the matching features and the relevant features of the target environment to obtain the relevant features of the matching environment, and determining the monitoring target information based on the relevant features of the matching environment.
Further, the tracking and monitoring unit is further configured to:
analyzing the characteristic influence according to the matched characteristic background influence relation and the third monitoring background characteristic to obtain the characteristic information of the influence target;
analyzing the characteristic core degree according to the influence target characteristic information and the monitoring target scene characteristics to determine the core target characteristics;
and performing characteristic environment performance matching on the core target characteristics and the target environment related characteristics, determining scene matching characteristic performance, and taking the scene matching characteristic performance as the monitoring characteristics.
Further, the system further comprises:
the correlated monitoring analysis unit is used for analyzing the monitoring position and the monitoring time according to the third monitoring information and determining correlated monitoring information, wherein the correlated monitoring information is monitoring information corresponding to a last scene of the third monitoring information;
the related scene feature analysis unit is used for extracting related features of a first target according to the third monitoring information and the related monitoring information to obtain scene related features;
and the auxiliary characteristic adding unit is used for adding the scene related characteristics serving as auxiliary characteristics into the monitoring characteristics of the next scene.
In the present specification, each embodiment is described in a progressive manner, and each embodiment focuses on its differences from the other embodiments. The foregoing cross-scene tracking intelligent monitoring method and the specific example in the first embodiment of fig. 1 are also applicable to the cross-scene tracking intelligent monitoring system of the present embodiment; through the foregoing detailed description of that method, those skilled in the art can clearly understand the system of this embodiment, so for brevity of the description it is not described in detail here. Since the device disclosed by the embodiment corresponds to the method disclosed by the embodiment, the description is simple, and for the relevant points reference may be made to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. An intelligent monitoring method for cross-scene tracking, the method comprising:
obtaining first monitoring information, wherein the first monitoring information comprises a first target;
obtaining second monitoring information based on the first target;
respectively extracting the characteristics of the first monitoring information and the second monitoring information to obtain a first monitoring characteristic set and a second monitoring characteristic set, wherein the first monitoring characteristic set is the characteristics of the first monitoring information, and the second monitoring characteristic set is the characteristics of the second monitoring information;
respectively carrying out environmental feature extraction and target feature extraction on the first monitoring feature set and the second monitoring feature set to obtain a first environmental target feature, a first environmental background feature, a second environmental target feature and a second environmental background feature;
performing correlation and differential analysis according to the first environment target feature, the second environment target feature, the first environment background feature and the second environment background feature to obtain an environment background influence relation and a target environment correlation feature;
acquiring third monitoring information, and determining monitoring target information according to the third monitoring information;
determining monitoring characteristics according to the monitoring target information based on the environmental background influence relation and the target environment related characteristics, and monitoring a first target based on the monitoring characteristics;
wherein, the performing correlation and differential analysis according to the first environment target feature, the second environment target feature, the first environment background feature and the second environment background feature to obtain an environment background influence relationship and a target environment correlation feature includes:
performing scene consistency feature analysis according to the first environment target feature and the first environment background feature to obtain a first target environment related feature and a first environment background influence relationship;
performing scene consistency feature analysis according to the second environment target feature and the second environment background feature to obtain a second target environment related feature and a second environment background influence relationship;
performing correlation analysis according to the first environment target characteristic and the second environment target characteristic to obtain a third target environment correlation characteristic;
performing differential analysis according to the first environmental background feature and the second environmental background feature to obtain a third environmental background influence relationship, wherein the environmental background influence relationship comprises a first environmental background influence relationship, a second environmental background influence relationship and a third environmental background influence relationship, and the target environmental relevant feature comprises a first target environmental relevant feature, a second target environmental relevant feature and a third target environmental relevant feature;
wherein, the obtaining of the third monitoring information and the determining of the monitoring target information according to the third monitoring information include:
acquiring a third monitoring scene and third monitoring background characteristics according to the third monitoring information;
determining the characteristics of a monitoring target scene according to the third monitoring scene;
performing correlation analysis according to the third monitoring background characteristic, the first environment background characteristic and the second environment background characteristic to obtain a correlation background characteristic;
performing matching analysis based on the relevant background features and the environmental background influence relationship to determine a matching feature background influence relationship;
performing correlation analysis according to the matching feature background influence relation and the target environment related feature to obtain a matching environment related feature, and determining the monitoring target information based on the matching environment related feature;
determining monitoring characteristics according to the monitoring target information based on the environmental background influence relationship and the target environment related characteristics, wherein the determining of the monitoring characteristics comprises the following steps:
analyzing the characteristic influence according to the matched characteristic background influence relation and the third monitoring background characteristic to obtain the characteristic information of the influence target;
analyzing the characteristic core degree according to the influence target characteristic information and the monitoring target scene characteristics to determine the core target characteristics;
and performing characteristic environment performance matching on the core target characteristics and the target environment related characteristics, determining scene matching characteristic performance, and taking the scene matching characteristic performance as the monitoring characteristics.
2. The method of claim 1, wherein the performing feature extraction on the first monitoring information and the second monitoring information to obtain a first monitoring feature set and a second monitoring feature set respectively comprises:
obtaining historical monitoring data, wherein the historical monitoring data comprises monitoring target identification information;
determining a training data set by using the historical monitoring data, and carrying out deep convolution model training convergence through the training data set to obtain a deep convolution model;
preprocessing the first monitoring information and the second monitoring information;
inputting the first monitoring information and the second monitoring information which are preprocessed into the depth convolution model respectively for image feature processing to obtain a first monitoring feature and a second monitoring feature respectively;
and constructing the first monitoring feature set and the second monitoring feature set based on the first monitoring feature and the second monitoring feature.
3. The method of claim 2, wherein the pre-processing the first monitoring information and the second monitoring information comprises:
acquiring a first monitoring device parameter and a second monitoring device parameter, wherein the first monitoring device parameter corresponds to the first monitoring information, and the second monitoring device parameter corresponds to the second monitoring information;
obtaining a first monitoring picture threshold value based on the first monitoring equipment parameter;
determining a first preprocessing effect range according to the first monitoring picture threshold value, and preprocessing the first monitoring information based on the first preprocessing effect range;
and obtaining a second monitoring picture threshold value based on the second monitoring equipment parameter, determining a second preprocessing effect range, and preprocessing the second monitoring information based on the second preprocessing effect range.
4. The method of claim 1, wherein the method further comprises:
analyzing a monitoring position and monitoring time according to the third monitoring information, and determining associated monitoring information, wherein the associated monitoring information is monitoring information corresponding to a last scene of the third monitoring information;
extracting relevant features of a first target according to the third monitoring information and the associated monitoring information to obtain relevant features of a scene;
and adding the scene related features as auxiliary features into the monitoring features of the next scene.
5. An intelligent monitoring system for cross-scene tracking, the system comprising:
a first monitoring information obtaining unit, configured to obtain first monitoring information, where the first monitoring information includes a first target;
a second monitoring information obtaining unit configured to obtain second monitoring information based on the first target;
a feature extraction unit, configured to perform feature extraction on the first monitoring information and the second monitoring information respectively to obtain a first monitoring feature set and a second monitoring feature set, where the first monitoring feature set is a feature of the first monitoring information, and the second monitoring feature set is a feature of the second monitoring information;
a scene interaction analysis unit, configured to perform environmental feature extraction and target feature extraction on the first monitoring feature set and the second monitoring feature set, respectively, to obtain a first environment target feature, a first environment background feature, a second environment target feature and a second environment background feature;
a monitoring target determining unit, configured to perform correlation analysis and differential analysis according to the first environment target feature, the second environment target feature, the first environment background feature and the second environment background feature to obtain an environment background influence relationship and a target environment related feature;
a third monitoring information obtaining unit, configured to obtain third monitoring information, and determine monitoring target information according to the third monitoring information;
a tracking and monitoring unit, configured to determine monitoring features according to the monitoring target information, based on the environment background influence relationship and the target environment related features, and to monitor the first target based on the monitoring features;
the scene interaction analysis unit is further configured to:
performing scene consistency feature analysis according to the first environment target feature and the first environment background feature to obtain a first target environment related feature and a first environment background influence relationship;
performing scene consistency feature analysis according to the second environment target feature and the second environment background feature to obtain a second target environment related feature and a second environment background influence relationship;
performing correlation analysis according to the first environment target feature and the second environment target feature to obtain a third target environment related feature;
performing differential analysis according to the first environment background feature and the second environment background feature to obtain a third environment background influence relationship, wherein the environment background influence relationship comprises the first environment background influence relationship, the second environment background influence relationship and the third environment background influence relationship, and the target environment related feature comprises the first target environment related feature, the second target environment related feature and the third target environment related feature;
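In the simplest reading, the correlation and differential analyses enumerated above reduce to set operations over feature sets. The sketch below is a hypothetical simplification, not the claimed method; modelling features as plain strings and backgrounds as sets is an assumption made here for illustration.

```python
# Hypothetical simplification of the cross-scene analyses: features are
# modelled as string sets. Not the claimed implementation.

def analyze_scenes(target1, target2, bg1, bg2):
    t1, t2, b1, b2 = map(set, (target1, target2, bg1, bg2))
    # Correlation analysis: target features stable across both scenes
    # (a stand-in for the "third target environment related feature").
    target_env_related = t1 & t2
    # Differential analysis: background features unique to either scene,
    # capturing how the environment changes across the scene boundary
    # (a stand-in for the "third environment background influence
    # relationship").
    bg_influence = b1 ^ b2
    return target_env_related, bg_influence

related, influence = analyze_scenes(
    ["gait", "height", "red-coat"],
    ["gait", "height", "umbrella"],
    ["daylight", "indoor"],
    ["daylight", "outdoor"])
```

Here "gait" and "height" survive the scene change while "red-coat"/"umbrella" do not, and the indoor/outdoor difference is flagged as the environment's cross-scene influence.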
the third monitoring information obtaining unit is further configured to:
obtaining a third monitoring scene and a third monitoring background feature according to the third monitoring information;
determining monitoring target scene features according to the third monitoring scene;
performing correlation analysis according to the third monitoring background feature, the first environment background feature and the second environment background feature to obtain a related background feature;
performing matching analysis based on the related background feature and the environment background influence relationship to determine a matching feature background influence relationship;
performing correlation analysis according to the matching feature background influence relationship and the target environment related feature to obtain a matching environment related feature, and determining the monitoring target information based on the matching environment related feature;
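The matching steps above can be illustrated with a small sketch. This is not the patented procedure: representing the background-influence relationship as a mapping from a background feature to the target features it degrades is an assumption, as are all names below.

```python
# Illustrative sketch, not the claimed matching procedure. "influence"
# is assumed to map a background feature to the target features it
# degrades; "related" is the target-environment-related feature set.

def determine_monitoring_target_info(third_bg, known_bgs, influence, related):
    # Correlation analysis: background features of the new scene that
    # were already observed in the known environments.
    relevant_bg = set(third_bg) & set(known_bgs)
    # Matching analysis: collect target features degraded under the
    # matched background influence relationships.
    degraded = set()
    for bg in relevant_bg:
        degraded |= set(influence.get(bg, ()))
    # Matching environment related features: related features that stay
    # usable under this scene's background.
    return set(related) - degraded

target_info = determine_monitoring_target_info(
    third_bg=["night", "rain"],
    known_bgs=["night", "indoor", "daylight"],
    influence={"night": ["color"], "indoor": []},
    related=["gait", "height", "color"])
```

With "night" matched as a known background, colour-based features are dropped and gait and height remain as the monitoring target information.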
the tracking monitoring unit is further configured to:
performing feature influence analysis according to the matching feature background influence relationship and the third monitoring background feature to obtain influenced target feature information;
performing feature core-degree analysis according to the influenced target feature information and the monitoring target scene features to determine core target features;
and performing feature environment performance matching on the core target features and the target environment related features, determining scene matching feature performance, and taking the scene matching feature performance as the monitoring features.
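The core-degree analysis in the tracking unit can be sketched as a scoring pass. This is a minimal illustration under the assumption that "feature core degree" can be scored by penalising background-influenced features and rewarding scene-matched ones; the weights and function names are invented here, not taken from the patent.

```python
# Minimal sketch of core-feature selection. The 0.5 weights and the
# top-k cutoff are illustrative assumptions, not the claimed method.

def select_core_features(candidates, influenced, scene_features, k=2):
    def core_degree(f):
        score = 1.0
        if f in influenced:        # degraded by the scene background
            score -= 0.5
        if f in scene_features:    # consistent with the target scene
            score += 0.5
        return score
    # Keep the k features with the highest core degree as the
    # monitoring features used to track the first target.
    return sorted(candidates, key=core_degree, reverse=True)[:k]

core = select_core_features(
    ["gait", "color", "height"],
    influenced={"color"},
    scene_features={"gait"})
```

In this toy case "gait" (scene-matched) and "height" (neutral) outrank the background-degraded "color" and become the monitoring features.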
6. An intelligent cross-scene tracking monitoring system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of the method of any one of claims 1 to 4.
CN202210738826.2A 2022-06-28 2022-06-28 Intelligent monitoring method and system for cross-scene tracking Expired - Fee Related CN114820724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210738826.2A CN114820724B (en) 2022-06-28 2022-06-28 Intelligent monitoring method and system for cross-scene tracking

Publications (2)

Publication Number Publication Date
CN114820724A CN114820724A (en) 2022-07-29
CN114820724B true CN114820724B (en) 2022-09-20

Family

ID=82523361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210738826.2A Expired - Fee Related CN114820724B (en) 2022-06-28 2022-06-28 Intelligent monitoring method and system for cross-scene tracking

Country Status (1)

Country Link
CN (1) CN114820724B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117953238A (en) * 2024-02-23 2024-04-30 北京积加科技有限公司 Multi-target cross-scene tracking method and device

Citations (1)

Publication number Priority date Publication date Assignee Title
CN112801018A (en) * 2021-02-07 2021-05-14 广州大学 Cross-scene target automatic identification and tracking method and application

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN102148965B (en) * 2011-05-09 2014-01-15 厦门博聪信息技术有限公司 Video monitoring system for multi-target tracking close-up shooting
US10592771B2 (en) * 2016-12-30 2020-03-17 Accenture Global Solutions Limited Multi-camera object tracking
CN109409250A (en) * 2018-10-08 2019-03-01 高新兴科技集团股份有限公司 A kind of across the video camera pedestrian of no overlap ken recognition methods again based on deep learning
CN112396635B (en) * 2020-11-30 2021-07-06 深圳职业技术学院 Multi-target detection method based on multiple devices in complex environment

Also Published As

Publication number Publication date
CN114820724A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN109284670B (en) Pedestrian detection method and device based on multi-scale attention mechanism
Hodges et al. Single image dehazing using deep neural networks
US7957557B2 (en) Tracking apparatus and tracking method
CN104915655A (en) Multi-path monitor video management method and device
Tao et al. Smoke vehicle detection based on multi-feature fusion and hidden Markov model
CN114820724B (en) Intelligent monitoring method and system for cross-scene tracking
CN111798356B (en) Rail transit passenger flow abnormal pattern recognition method based on big data
CN115049954A (en) Target identification method, device, electronic equipment and medium
He et al. Vehicle theft recognition from surveillance video based on spatiotemporal attention
CN112651366B (en) Passenger flow number processing method and device, electronic equipment and storage medium
CN118038494A (en) Cross-modal pedestrian re-identification method for damage scene robustness
Meng et al. Fast-armored target detection based on multi-scale representation and guided anchor
CN113361475A (en) Multi-spectral pedestrian detection method based on multi-stage feature fusion information multiplexing
Liu et al. Image forgery localization based on fully convolutional network with noise feature
Khan et al. Foreground detection using motion histogram threshold algorithm in high-resolution large datasets
CN115147450A (en) Moving target detection method and detection device based on motion frame difference image
CN112906679B (en) Pedestrian re-identification method, system and related equipment based on human shape semantic segmentation
CN115762172A (en) Method, device, equipment and medium for identifying vehicles entering and exiting parking places
CN114743257A (en) Method for detecting and identifying image target behaviors
CN114445787A (en) Non-motor vehicle weight recognition method and related equipment
Koohzadi et al. OTWC: an efficient object-tracking method
Elmaci et al. Detection of background forgery using a two-stream convolutional neural network architecture
CN118135460B (en) Intelligent building site safety monitoring method and device based on machine learning
CN118411829B (en) Zebra crossing pedestrian safety early warning method and device
CN116977754A (en) Image processing method, image processing device, computer device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220920