CN117523466A - Scene adaptability improving method and device for target detection and target detection system - Google Patents


Info

Publication number
CN117523466A
Authority
CN
China
Prior art keywords
feature
similarity
target
target data
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210889349.XA
Other languages
Chinese (zh)
Inventor
Zhao Xin (赵鑫)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202210889349.XA priority Critical patent/CN117523466A/en
Priority to PCT/CN2023/109613 priority patent/WO2024022450A1/en
Publication of CN117523466A publication Critical patent/CN117523466A/en
Pending legal-status Critical Current

Classifications

    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53: Recognition of crowd images, e.g. recognition of crowd congestion
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V2201/07: Target detection (indexing scheme G06V2201/00)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a scene adaptability improvement method for target detection. The method comprises: obtaining a detection result of current target data; extracting target features from the current target data; determining a matching degree between each target feature and each feature in a feature library; comparing each matching degree with the detection threshold bound to the matched feature in the feature library that was used to determine that matching degree; and determining how to process the detection result according to the comparison result. The feature library is a set of features of target data calibrated according to whether the detection result is a false alarm or a missed alarm, and the detection threshold bound to each matched feature is adjusted with the matching degree between the matched feature and the target features of target data within a preset range. The method and device help reduce missed alarms and false alarms and adapt well to different scenes.

Description

Scene adaptability improving method and device for target detection and target detection system
Technical Field
The invention relates to the field of target detection for security applications, and in particular to a method for improving the scene adaptability of target detection.
Background
With the development of image-based target detection technology, more and more application scenes, such as security and industrial environment monitoring, adopt image-based target detection. In practical applications, the monitored condition is generally reflected by the detection result of the target data; when the detection result meets a preset condition, for example when the category and the confidence of the detected target reach preset values, an alarm or other prompt is triggered.
To improve the accuracy and reliability of alarm triggering and to avoid missed alarms and false alarms, it is urgent to make target detection adapt better to the application scene.
Disclosure of Invention
The invention provides a method for improving the scene adaptability of target detection, which is used to improve the adaptability of target detection to an application scene and to reduce missed alarms and/or false alarms.
The first aspect of the present invention provides a method for improving scene adaptability of object detection, the method comprising:
obtaining a detection result of the current target data,
extracting target features from the current target data,
determining the matching degree between each target feature and each feature in a feature library,
comparing each matching degree with the detection threshold bound to the matched feature in the feature library that was used to determine that matching degree,
determining how to process the detection result according to the comparison result,
wherein,
the feature library is a set of features of target data calibrated according to whether the detection result is a false alarm or a missed alarm,
and the detection threshold bound to each matched feature is adjusted with the matching degree between the matched feature and the target features of target data within a preset range.
A second aspect of the present invention provides an apparatus for improving scene adaptation for object detection, the apparatus comprising:
a target detection module for obtaining the detection result of the current target data and extracting the target characteristics of the current target data,
the feature matching module is used for matching each target feature with each feature in the feature library, where the feature library is a set of features of target data calibrated according to whether the detection result is a false alarm or a missed alarm;
a comparison module for comparing the detection result with a detection threshold value bound by the matched feature in the feature library,
a determining module for determining the processing of the detection result according to the comparison result,
and the threshold adjustment module is used for adjusting the detection threshold bound to each matched feature according to the matching degree between the matched feature and the target feature of the target data in the preset range.
In a third aspect, the present invention provides an object detection system, including the apparatus for improving scene adaptation for object detection.
According to the scene adaptability improvement method for target detection, how to process the detection result is determined by the matching degree between the target features and each feature in the specific feature library, together with the result of comparing each matching degree with the detection threshold bound to that feature. The feature data in the feature library are thus fully used, the granularity of the detection threshold is finer, and the detection threshold can be adjusted adaptively, which improves the adaptability of the detection threshold to the scene of the target data, improves the scene adaptability of target detection, and reduces missed alarms and false alarms.
Drawings
Fig. 1 is a schematic flow chart of a method for improving scene adaptability of object detection.
Fig. 2 is a schematic flow chart of a method for improving scene adaptability of object detection in the embodiment of the present application.
FIG. 3 is a schematic diagram of a similarity matrix and a similarity threshold.
FIG. 4 is a schematic diagram of similarity threshold updating for feature binding.
Fig. 5 is a schematic diagram of an apparatus for improving scene adaptation for object detection.
Fig. 6 is another schematic diagram of an apparatus for improving scene adaptation for object detection.
Detailed Description
In order to make the objects, technical means and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings.
The applicant has found that false alarms and missed alarms in target detection results are closely related to the detection threshold. Existing detection thresholds are generated from a preset false alarm rate, for example a false alarm rate of 1/100,000 or 1/1,000,000. That false alarm rate is usually derived from the vendor's own test set, which holds test target data from many different scenes, rather than from a separate false alarm rate generated for each use scene. In actual use, a single fixed use scene may differ considerably from the vendor's own test environment, so a detection threshold generated only from a preset false alarm rate may deviate significantly from the actual detection results.
In view of this, the present application provides a method for improving the scene adaptability of target detection. It screens out targets that frequently cause false alarms and/or missed alarms, and adaptively generates, for each feature in the false-alarm feature library and the missed-alarm feature library, a detection threshold matched to the target data, thereby using feature libraries of finer granularity to improve the scene adaptability of target detection.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for improving the scene adaptability of object detection. On the side of the object detection apparatus, the method comprises the following steps:
step 101, obtaining the detection result of the current target data,
step 102, extracting target features from the current target data,
step 103, determining the matching degree between each target feature and each feature in the feature library,
where the matching degree may be determined by similarity, Euclidean distance, or the like,
step 104, comparing each matching degree with the detection threshold bound to the matched feature in the feature library that was used to determine that matching degree,
step 105, determining how to process the detection result according to the comparison result,
wherein,
the feature library is a set of features of target data calibrated according to whether the detection result is a false alarm or a missed alarm.
The detection threshold bound to each matched feature is adjusted with the matching degree between the matched feature and the target features of target data within a preset range, where the matching degree may be measured by similarity, Euclidean distance, or other parameters.
According to the method and the device, the feature library is built from target data that caused false alarms and/or missed alarms, and a detection threshold is set for each feature of the feature library, so that each feature is bound to its own detection threshold. This improves the adaptability of the detection thresholds to the scene, and therefore the scene adaptability of the detection results obtained from those thresholds.
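To make the core flow of steps 101-105 concrete, the following is a minimal, non-authoritative sketch in Python/NumPy. It assumes cosine similarity as the matching degree (the patent also allows Euclidean distance or other metrics), and names such as FeatureLibrary and decide_processing are illustrative, not taken from the patent.

```python
# Minimal sketch of steps 101-105 (assumption: cosine similarity as matching degree).
import numpy as np

class FeatureLibrary:
    """Feature set calibrated from false-alarm or missed-alarm target data.
    Each library feature is bound to its own detection (similarity) threshold."""
    def __init__(self, features: np.ndarray, thresholds: np.ndarray):
        self.features = features      # shape (num_features, dim)
        self.thresholds = thresholds  # shape (num_features,)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def decide_processing(target_features, library: FeatureLibrary) -> bool:
    """Steps 103-105: return True if any target feature matches a library feature
    more closely than that feature's bound threshold."""
    for tf in target_features:                               # each extracted target feature
        for lf, thr in zip(library.features, library.thresholds):
            if cosine_similarity(tf, lf) > thr:              # per-feature detection threshold
                return True                                  # comparison decides the processing
    return False
```

How a True return is interpreted depends on which library was matched, as the embodiment below describes.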
For ease of understanding the present application, a security system will be described below as an example, and it should be understood that the present application is not only applicable to security systems, but also applicable to any monitoring system that uses target detection in industrial applications, and the like.
Referring to fig. 2, fig. 2 is a schematic flow chart of a method for improving scene adaptability of object detection according to an embodiment of the present application. The method comprises the following steps:
step 201, obtaining a first feature library and a second feature library, where the first feature library is a false-alarm feature set characterizing the features of first target data whose detection results are false alarms, and the second feature library is a missed-alarm feature set characterizing the features of second target data whose detection results are missed alarms.
As an example, the first feature library and the second feature library may be established as follows:
gathering target data over a first time period and/or up to a first quantity and calibrating the gathered target data, for example by traversing the gathered target data, judging whether the detection result of each piece of target data contains a false alarm or a missed alarm, identifying the falsely reported or missed targets in the target data, and recording the target data with false alarms or missed alarms,
and extracting features from the recorded false-alarm and missed-alarm target data respectively, for example with a deep learning model, taking the extracted false-alarm features as features in the first feature library and the extracted missed-alarm features as features in the second feature library.
For example, in one scene a human body is missed repeatedly, and the missed detections share obvious, similar characteristics: the people wear the same uniform, such as a petroleum worker's uniform, and always move in the same particular posture, such as half-bent at the waist. Features are extracted from the missed human body data; the extracted missed-alarm features include clothing features, posture features and the like, and these features can be stored in the second feature library.
In this embodiment, the security system may use a perimeter algorithm to screen out the detection results that frequently produce false alarms or missed alarms, so as to obtain the target data used to build the feature libraries.
The first feature library and the second feature library may be updated with at least one of time and quantity as the update trigger condition; for example, they may be updated periodically.
Since the feature libraries are calibration results, the cleanliness of the feature data in the libraries is improved.
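A minimal sketch of how the two libraries of step 201 might be assembled from calibrated records is given below; extract_features() stands in for whatever deep-learning feature extractor is used and, like the other names here, is an assumption for illustration only.

```python
# Minimal sketch of step 201: building the false-alarm and missed-alarm libraries.
import numpy as np

def extract_features(target_data) -> list:
    """Placeholder for a deep-learning feature extractor (assumption)."""
    raise NotImplementedError

def build_feature_libraries(calibrated_records):
    """calibrated_records: iterable of (target_data, label), where label is
    'false_alarm', 'missed_alarm', or 'correct' after manual calibration."""
    false_alarm_features, missed_alarm_features = [], []
    for target_data, label in calibrated_records:
        if label == "false_alarm":        # goes into the first feature library
            false_alarm_features.extend(extract_features(target_data))
        elif label == "missed_alarm":     # goes into the second feature library
            missed_alarm_features.extend(extract_features(target_data))
    return np.array(false_alarm_features), np.array(missed_alarm_features)
```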
Step 202, obtaining the current detection result of the current target data, judging whether the detection result of the target data triggering the alarm rule in the security system is judged to be positive report or false report,
if a positive report is made, steps 203-204 are performed,
if false, executing steps 205-206;
step 203, calculating a first similarity between each target feature of the current target data and each first feature in the first feature library,
as an example, when the detection result of the target data triggering the alarm rule in the security system is judged to be positive, extracting features of the target data corresponding to the positive report, for example, extracting features by adopting a deep learning algorithm so as to obtain at least one more target feature, and performing similarity calculation on similarity of each target feature and each first feature in the first feature library so as to obtain each first similarity, so as to match each target feature with each first feature in the first feature library;
step 204, for each first similarity, determining whether the first similarity is greater than a first similarity threshold to which the first feature for the first similarity calculation is bound,
if any first similarity is larger than the first similarity threshold, the detection result is judged to be false alarm, the detection result is corrected to be false alarm, the alarm is not triggered any more, so as to reduce false alarm,
otherwise, judging that the detection result is positive report, triggering alarm prompt,
step 205, calculating a second similarity between each target feature of the target data with the false positive current detection result and each second feature in the second feature library, so as to match each target feature with each second feature in the second feature library;
step 206, for each second similarity, determining whether the second similarity is greater than a second similarity threshold to which a second feature for the second similarity calculation is bound,
if any second similarity is larger than the second similarity threshold, the detection result is judged to be false alarm, the alarm is not triggered, so as to reduce false alarm,
otherwise, correcting the detection result to be positive report, triggering alarm to increase detection rate.
Through the steps 204 and 206, the matching degree is compared with a detection threshold value bound by the matched feature used for determining the matching degree in the feature library, and processing logic of the detection result is determined according to the comparison result, so that when the extracted target feature has higher similarity with the false alarm database, the alarm process of the target data can be terminated to reduce false alarm; when the extracted target features are higher in similarity with the missed report database, the target data can be re-alarmed, so that the detection rate is improved, and the scene adaptability of target detection is improved.
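Steps 202-206 can be summarized as the two-branch correction below, reusing FeatureLibrary and decide_processing from the earlier sketch; this is an illustrative sketch of the logic described above, not the patent's implementation.

```python
# Minimal sketch of steps 202-206 (names illustrative; see earlier sketch).
def correct_detection(target_features, is_positive_report: bool,
                      false_alarm_lib: "FeatureLibrary",
                      missed_alarm_lib: "FeatureLibrary") -> bool:
    """Return True if an alarm should finally be triggered."""
    if is_positive_report:
        # Steps 203-204: a strong match against the false-alarm library means the
        # positive report is corrected to a false alarm, so no alarm is triggered.
        return not decide_processing(target_features, false_alarm_lib)
    # Steps 205-206: a strong match against the missed-alarm library means the
    # result is corrected to a missed alarm and the alarm is (re-)triggered.
    return decide_processing(target_features, missed_alarm_lib)
```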
Without loss of generality, suppose the target features extracted from target data 1 include target feature 1, target feature 2, …, target feature n; the first feature library includes first feature 1, first feature 2, …, first feature i; and the second feature library includes second feature 1, second feature 2, …, second feature j. First feature 1 is bound to similarity threshold 1, first feature 2 to similarity threshold 2, …, and first feature i to similarity threshold i; second feature 1 is bound to similarity threshold 1', second feature 2 to similarity threshold 2', …, and second feature j to similarity threshold j'. For example, take a human body image as target data 1; the target features extracted from target data 1 include an eye feature, a face feature, a posture feature and the like. The first feature library likewise includes an eye feature, a face feature and a posture feature, where the eye feature is bound to eye similarity threshold 1, the face feature to face similarity threshold 2, and the posture feature to posture similarity threshold 3; similarly, the second feature library includes an eye feature, a face feature and a posture feature, where the eye feature is bound to eye similarity threshold 1', the face feature to face similarity threshold 2', and the posture feature to posture similarity threshold 3'.
If the detection result of target data 1 is a positive report, the similarity between target feature 1 and each first feature is calculated to obtain i first similarity results; the n target features yield i×n first similarity results in total, which can be represented by an i-row, n-column matrix C_in, where the element in row i and column n is the first similarity between first feature i and target feature n. Each row of the matrix is bound to a first similarity threshold, as shown by (a) in FIG. 3.
For matrix C_in, once any first-similarity element in a row is greater than the first similarity threshold bound to that row, the target feature corresponding to that element is highly similar to the corresponding first feature and the matching degree is high, which indicates that the detection result is a false alarm. If no element in a row is greater than the first similarity threshold bound to that row, that is, every element in the row is smaller than that threshold, the target features corresponding to those elements have low similarity to the first feature, which indicates that the detection result is not a false alarm.
Similarly, if the detection result of target data 1 is a false alarm, the similarity between target feature 1 and each second feature is calculated to obtain j second similarity results; the n target features yield j×n second similarity results in total, which can be represented by a j-row, n-column matrix C_jn, where the element in row j and column n is the second similarity between second feature j and target feature n. Each row of the matrix is bound to a second similarity threshold, as shown by (b) in FIG. 3.
For matrix C_jn, once any second-similarity element in a row is greater than the second similarity threshold bound to that row, the target feature corresponding to that element is highly similar to the corresponding second feature and the matching degree is high, which indicates that the detection result is a missed alarm. If no element in a row is greater than the second similarity threshold bound to that row, that is, every element in the row is smaller than that threshold, the target features corresponding to those elements have low similarity to the second feature, which indicates that the detection result is not a missed alarm.
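The row-wise comparison against C_in (or C_jn) can be expressed directly in matrix form. The sketch below assumes NumPy arrays and cosine similarity and is illustrative only.

```python
# Minimal sketch of the similarity matrix C (i library features x n target features)
# and the row-wise threshold test described above.
import numpy as np

def similarity_matrix(library_features: np.ndarray, target_features: np.ndarray) -> np.ndarray:
    """library_features: (i, d); target_features: (n, d); returns C with shape (i, n)."""
    lib = library_features / (np.linalg.norm(library_features, axis=1, keepdims=True) + 1e-12)
    tgt = target_features / (np.linalg.norm(target_features, axis=1, keepdims=True) + 1e-12)
    return lib @ tgt.T                       # cosine similarities

def any_row_exceeds_threshold(C: np.ndarray, row_thresholds: np.ndarray) -> bool:
    """True if any element of row k exceeds the threshold bound to library feature k."""
    return bool(np.any(C > row_thresholds[:, None]))
```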
The similarity thresholds bound to the features in the feature library are generated adaptively; the threshold bound to each feature may differ, and each threshold is adjusted with the matching degree between the matched feature and the target features of target data within a preset range, where the target data within the preset range comprises target data within a second time period and/or up to a second quantity. Considering that the target data in a scene change with time, service characteristics and other factors, the similarity thresholds bound to the features in the feature library are updated when the update trigger condition is met.
The generation of the similarity threshold is described below.
Referring to FIG. 4, FIG. 4 is a schematic diagram of updating the similarity threshold bound to a feature. When the update condition is satisfied, the update of the similarity thresholds bound to the features in the feature library is triggered, the update condition comprising at least one of: a set update time is reached, a set update frequency is reached, and the feature library has been updated. For any feature in the feature library, the update process comprises the following steps:
step 401, calculating the similarity between the feature and each target feature of all the target data within the preset range, where all the target data are the target data within a set time threshold and/or up to a set quantity threshold, and the target features of these target data may also be used to calibrate detection results so as to obtain the feature library,
step 402, selecting the maximum similarity among all the similarity results for the feature,
step 403, adjusting the similarity threshold bound to the feature based on the maximum similarity, for example updating the similarity threshold bound to the feature to be equal to or greater than the sum of the maximum similarity and a redundancy amount.
As an example, the first similarity threshold bound to a first feature is obtained by calculating the similarity with the target features of all the target data whose detection results are positive reports within a set third time period and/or up to a set third quantity. Specifically, the first similarity threshold floats around the maximum similarity sim1_max between the first feature and those target features, the floating value depending on the particular correction strategy adopted. As an example, it should satisfy thr1 ≥ sim1_max + gap1, where gap1 is a first redundancy amount characterizing the floating value based on the maximum similarity corresponding to the first feature.
The second similarity threshold bound to a second feature is obtained by calculating the similarity with the target features of all the target data whose detection results are false alarms within a set fourth time period and/or up to a set fourth quantity. Specifically, the second similarity threshold floats around the maximum similarity sim2_max between the second feature and those target features, the floating value depending on the particular correction strategy adopted. As an example, it should satisfy thr2 ≥ sim2_max + gap2, where gap2 is a second redundancy amount characterizing the floating value based on the maximum similarity corresponding to the second feature.
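The threshold update of steps 401-403 and the thr ≥ sim_max + gap rule can be sketched as follows; the cosine-similarity choice and the function name are assumptions for illustration.

```python
# Minimal sketch of steps 401-403: push each bound threshold just above the
# maximum similarity observed over the target data in the preset range.
import numpy as np

def update_threshold(library_feature: np.ndarray,
                     recent_target_features: np.ndarray,
                     gap: float) -> float:
    """recent_target_features: (m, d) target features of the target data within the
    preset range (e.g. second time period and/or second quantity); gap is the
    redundancy amount. Returns thr >= sim_max + gap."""
    lf = library_feature / (np.linalg.norm(library_feature) + 1e-12)
    tf = recent_target_features / (np.linalg.norm(recent_target_features, axis=1, keepdims=True) + 1e-12)
    sim_max = float(np.max(tf @ lf))   # steps 401-402: maximum similarity
    return sim_max + gap               # step 403: threshold floats above the maximum
```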
According to this embodiment, the adaptive adjustment of the detection threshold bound to each feature in the feature library helps improve the scene adaptability of the detection results and can alleviate the missed-alarm and false-alarm problems of existing security systems. In particular, the detection result can be corrected using the result of comparing the matching degree between the target features of the target data and the feature library with the detection thresholds, which improves the accuracy and reliability of the detection result and reduces missed alarms and false alarms.
Referring to fig. 5, fig. 5 is a schematic diagram of an apparatus for improving scene adaptation for object detection.
The device comprises:
a target detection module for extracting target characteristics of the current target data and obtaining the detection result of the current target data,
a feature matching module for matching each target feature with each feature in the feature library,
a comparison module for comparing the detection result with a detection threshold value bound by the matched feature in the feature library,
a determining module for determining the processing of the detection result according to the comparison result,
and the threshold adjustment module is used for adjusting the detection threshold bound to each matched feature according to the matching degree between the matched feature and the target feature of the target data in the preset range.
Wherein,
the feature library is a set of features of target data calibrated according to whether the detection result is a false alarm or a missed alarm.
The threshold adjustment module is further configured to adjust the similarity threshold bound to each feature with the maximum similarity corresponding to that feature, where the maximum similarity corresponding to each feature is the maximum value among the similarities between the feature and the target features of the target data within the preset range.
the feature matching module comprises:
the first feature matching sub-module is used for carrying out similarity calculation on each target feature and each first feature in the first feature library under the condition that the detection result is a positive report so as to obtain each first similarity;
the second feature matching sub-module is used for carrying out similarity calculation on each target feature and each second feature in the second feature library under the condition that the detection result is false alarm, so as to obtain each second similarity;
the comparison module comprises:
a first comparison sub-module for comparing each calculated first similarity with a first similarity threshold bound by a first feature for the first similarity calculation,
a second comparison sub-module for comparing each calculated second similarity with a second similarity threshold bound by a second feature for the second similarity calculation,
the determining module includes:
the first determining submodule is used for correcting the detection result to a false alarm when any calculated first similarity is greater than the first similarity threshold bound to the first feature used to calculate that first similarity, and otherwise keeping the detection result as a positive report;
and the second determining submodule is used for correcting the detection result to a missed alarm when any calculated second similarity is greater than the second similarity threshold bound to the second feature used to calculate that second similarity, and otherwise keeping the detection result as a false alarm.
The threshold adjustment module may comprise:
a first similarity threshold adjustment sub-module, configured to adjust the first similarity threshold bound to each first feature with the maximum similarity corresponding to that first feature, where the maximum similarity corresponding to the first feature is the maximum value among the similarities between the first feature and the target features of the third target data whose detection results within the first preset range are positive reports;
a second similarity threshold adjustment sub-module, configured to adjust the second similarity threshold bound to each second feature with the maximum similarity corresponding to that second feature, where the maximum similarity corresponding to the second feature is the maximum value among the similarities between the second feature and the target features of the fourth target data whose detection results within the second preset range are false alarms.
The apparatus further comprises:
the feature library management module is used for collecting target data within a first time period and/or up to a first quantity and calibrating the collected target data,
judging, based on the calibrated target data, whether the detection results of the collected target data contain false alarms or missed alarms, and recording the target data with false alarms or missed alarms,
and extracting features from the recorded target data, taking the extracted false-alarm features as features in the first feature library and the extracted missed-alarm features as features in the second feature library.
Referring to fig. 6, fig. 6 is another schematic diagram of an apparatus for improving scene adaptation for object detection. The apparatus comprises a memory storing a computer program and a processor configured to execute the computer program to implement the steps of the method for improving the scene adaptability of object detection of the present application.
The memory may include random access memory (RAM) or non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the invention also provides a computer readable storage medium, wherein the storage medium stores a computer program, and the computer program realizes the steps of the scene adaptability improvement method of target detection when being executed by a processor.
Since the apparatus, network-side device and storage medium embodiments are substantially similar to the method embodiment, their description is relatively brief; for relevant details, refer to the description of the method embodiment.
In this document, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely preferred embodiments of the invention and are not intended to limit it; any modification, equivalent replacement, improvement or the like made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (10)

1. A method for improving scene adaptability of target detection, the method comprising:
obtaining a detection result of the current target data,
extracting target features from the current target data,
determining the matching degree between each target feature and each feature in a feature library,
comparing each matching degree with the detection threshold bound to the matched feature in the feature library that was used to determine that matching degree,
determining how to process the detection result according to the comparison result,
wherein,
the feature library is a set of features of target data calibrated according to whether the detection result is a false alarm or a missed alarm,
and the detection threshold bound to each matched feature is adjusted with the matching degree between the matched feature and the target features of target data within a preset range.
2. The method of claim 1, wherein determining the matching degree of each target feature to each feature in the feature library comprises:
calculating the similarity of each target feature with each feature in the feature library,
the comparing each matching degree with the detection threshold bound by the matched feature used for determining the matching degree in the feature library comprises the following steps:
for each of the calculated degrees of similarity,
the similarity is compared to a similarity threshold,
wherein,
the similarity threshold is a similarity threshold to which the features used for the similarity calculation in the feature library are bound.
3. The method of claim 2, wherein the similarity threshold bound to each feature is adjusted according to a maximum similarity corresponding to each feature, wherein the maximum similarity corresponding to each feature is: a maximum value in the similarity between each feature and the target feature of the target data in the preset range;
the processing for determining the detection result according to the comparison result comprises the following steps:
and correcting the detection result according to the comparison result.
4. The scene adaptation method according to claim 3, wherein the detection result includes a positive report, and the feature library includes: a first feature library for characterizing feature sets of first target data whose detection results are false positives,
the calculating the similarity between each target feature and each feature in the feature library comprises the following steps:
under the condition that the detection result is positive report, carrying out similarity calculation on each target feature and each first feature in the first feature library to obtain each first similarity;
the correcting the detection result according to the comparison result comprises the following steps:
if any one of the calculated first similarity is larger than a first similarity threshold value bound by a first feature used for calculating the first similarity, correcting the detection result as false alarm;
wherein,
the first similarity threshold bound to each first feature is adjusted with the maximum similarity corresponding to that first feature, the maximum similarity corresponding to the first feature being the maximum value among the similarities between the first feature and the target features of the third target data whose detection results within the first preset range are positive reports.
5. The method of claim 4, wherein the first similarity threshold bound to each of the first features is adjusted with a maximum similarity corresponding to the first feature, comprising:
and determining that the first similarity threshold is greater than the sum of the maximum similarity corresponding to the first feature and a first redundancy amount, wherein the first redundancy amount is used for representing a floating value based on the maximum similarity corresponding to the first feature.
6. The method for improving scene adaptability according to claim 3, wherein the detection result comprises false alarm,
the feature library includes: a second feature library for characterizing a feature set of the second target data whose detection result is a missing report,
the calculating the similarity between each target feature and each feature in the feature library comprises the following steps:
under the condition that the detection result is false alarm, carrying out similarity calculation on each target feature and each second feature in a second feature library to obtain each second similarity;
the correcting the detection result according to the comparison result comprises the following steps:
if any one of the calculated second similarity is larger than a second similarity threshold value bound by a second feature used for calculating the second similarity, correcting the detection result as a missing report;
wherein,
the second similarity threshold bound to each second feature is adjusted with the maximum similarity corresponding to that second feature, the maximum similarity corresponding to the second feature being the maximum value among the similarities between the second feature and the target features of the fourth target data whose detection results within the second preset range are false alarms.
7. The method of claim 6, wherein the second similarity threshold bound to each of the second features is adjusted with a maximum second similarity corresponding to the second feature, comprising:
and determining that the second similarity threshold is greater than the sum of the maximum similarity corresponding to the second feature and a second redundancy amount, wherein the second redundancy amount is used for representing a floating value based on the maximum similarity corresponding to the second feature.
8. The scene adaptation enhancement method according to claim 1, wherein the feature library is built as follows:
collecting target data within a set first time period and/or a set first quantity, calibrating the collected target data,
based on the calibrated target data, judging whether the detection result of the collected target data has false alarm or missing alarm, recording the target data with false alarm or missing alarm,
extracting features from the recorded target data, taking the extracted false-alarm features as features in a first feature library and the extracted missed-alarm features as features in a second feature library;
the extracting the target characteristics of the current target data comprises the following steps: extracting target characteristics of target data of which the detection result does not accord with a set condition;
the similarity threshold is triggered and updated according to a set updating condition;
the preset range comprises a set second time period and/or a set second number;
the scene is monitored by the security system.
9. An apparatus for improving scene adaptation for object detection, characterized in that,
a target detection module for obtaining the detection result of the current target data and extracting the target characteristics of the current target data,
the feature matching module is used for matching each target feature with each feature in the feature library, wherein the feature library is a feature set of target data calibrated according to false alarm or missing alarm of a detection result;
a comparison module for comparing the detection result with a detection threshold value bound by the matched feature in the feature library,
a determining module for determining the processing of the detection result according to the comparison result,
and the threshold adjustment module is used for adjusting the detection threshold bound to each matched feature according to the matching degree between the matched feature and the target feature of the target data in the preset range.
10. An object detection system comprising the apparatus for improving scene adaptation for object detection as claimed in claim 9.
CN202210889349.XA 2022-07-27 2022-07-27 Scene adaptability improving method and device for target detection and target detection system Pending CN117523466A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210889349.XA CN117523466A (en) 2022-07-27 2022-07-27 Scene adaptability improving method and device for target detection and target detection system
PCT/CN2023/109613 WO2024022450A1 (en) 2022-07-27 2023-07-27 Scene adaptability improvement method and apparatus for object detection, and object detection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210889349.XA CN117523466A (en) 2022-07-27 2022-07-27 Scene adaptability improving method and device for target detection and target detection system

Publications (1)

Publication Number Publication Date
CN117523466A true CN117523466A (en) 2024-02-06

Family

ID=89705465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210889349.XA Pending CN117523466A (en) 2022-07-27 2022-07-27 Scene adaptability improving method and device for target detection and target detection system

Country Status (2)

Country Link
CN (1) CN117523466A (en)
WO (1) WO2024022450A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8947600B2 (en) * 2011-11-03 2015-02-03 Infosys Technologies, Ltd. Methods, systems, and computer-readable media for detecting scene changes in a video
CN112927258A (en) * 2019-11-21 2021-06-08 株式会社日立制作所 Target tracking method and device
CN111667501A (en) * 2020-06-10 2020-09-15 杭州海康威视数字技术股份有限公司 Target tracking method and device, computing equipment and storage medium
CN112637194A (en) * 2020-12-18 2021-04-09 北京天融信网络安全技术有限公司 Security event detection method and device, electronic equipment and storage medium
CN112560787A (en) * 2020-12-28 2021-03-26 深研人工智能技术(深圳)有限公司 Pedestrian re-identification matching boundary threshold setting method and device and related components
CN112861673A (en) * 2021-01-27 2021-05-28 长扬科技(北京)有限公司 False alarm removal early warning method and system for multi-target detection of surveillance video

Also Published As

Publication number Publication date
WO2024022450A1 (en) 2024-02-01

Similar Documents

Publication Publication Date Title
CN109035299B (en) Target tracking method and device, computer equipment and storage medium
CN111950329A (en) Target detection and model training method and device, computer equipment and storage medium
CN112188531B (en) Abnormality detection method, abnormality detection device, electronic apparatus, and computer storage medium
CN107644194B (en) System and method for providing monitoring data
CN112529942A (en) Multi-target tracking method and device, computer equipment and storage medium
CN111667501A (en) Target tracking method and device, computing equipment and storage medium
JP2007179542A (en) System and method for detecting network intrusion
CN109840413B (en) Phishing website detection method and device
CN106611151B (en) A kind of face identification method and device
CN112948612B (en) Human body cover generation method and device, electronic equipment and storage medium
CN112560957B (en) Neural network training and detecting method, device and equipment
CN112184688B (en) Network model training method, target detection method and related device
US9865158B2 (en) Method for detecting false alarm
CN114743067A (en) Training data enhancement method and device, computer equipment and storage medium
CN109410198B (en) Time sequence action detection method, device and equipment
CN109360167B (en) Infrared image correction method and device and storage medium
CN117523466A (en) Scene adaptability improving method and device for target detection and target detection system
CN116383814B (en) Neural network model back door detection method and system
CN116740586A (en) Hail identification method, hail identification device, electronic equipment and computer readable storage medium
CN111583159A (en) Image completion method and device and electronic equipment
CN116261149A (en) Deployment method and system of sensor nodes in wireless sensor network
CN112344979B (en) Method and device for adjusting detection stability of sensor
CN111640076B (en) Image complement method and device and electronic equipment
CN112685416A (en) Data missing item filling method and device, computer equipment and storage medium
CN114549193A (en) List screening method, apparatus, device, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination