CN112784738B - Moving object detection alarm method, moving object detection alarm device and computer readable storage medium


Info

Publication number: CN112784738B
Authority: CN (China)
Prior art keywords: detection, detection frame, frame, target, image
Legal status: Active
Application number: CN202110083706.9A
Other languages: Chinese (zh)
Other versions: CN112784738A
Inventor: 朱蕾
Current Assignee: Shanghai Yunconghuilin Artificial Intelligence Technology Co ltd
Original Assignee: Shanghai Yunconghuilin Artificial Intelligence Technology Co ltd
Application filed by Shanghai Yunconghuilin Artificial Intelligence Technology Co ltd
Priority to CN202110083706.9A
Publication of CN112784738A
Application granted
Publication of CN112784738B


Classifications

    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06F 18/23213: Non-hierarchical clustering techniques using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/30196: Human being; Person


Abstract

The invention relates to the technical field of image processing, and in particular to a moving object detection and alarm method, a moving object detection and alarm device, and a computer readable storage medium. It aims to solve the technical problem of accurately identifying a moving object from images captured in environments with poor illumination conditions. To this end, according to the method of an embodiment of the invention, for each frame of detection image, a first to-be-processed detection frame that has a track association relationship with the target detection frame of that frame and a second to-be-processed detection frame that has no track association relationship are obtained from the target detection frames of the preceding frame or frames of detection images. The first to-be-processed detection frames are connected into a motion track, while the detection frame positions of the second to-be-processed detection frames are statistically analyzed to obtain the motion area of the moving target. This overcomes the tendency to miss moving targets under poor illumination conditions and significantly improves the accuracy of moving object detection in images.

Description

Moving object detection alarm method, moving object detection alarm device and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a moving object detection and alarm method, a moving object detection and alarm device, and a computer readable storage medium.
Background
In areas with high security requirements, such as railways and military bases, a perimeter intrusion alarm system is usually deployed to detect and alarm on intruding foreign objects such as pedestrians, so as to prevent illegal intrusion and damage. The conventional detection method mainly performs pedestrian detection on images acquired by an image acquisition device in a target area, and judges whether a pedestrian has entered the target area according to the pedestrian detection result. However, in environments such as nighttime or occlusion by trees and mountain forests, the quality of the acquired images is reduced (e.g., the image brightness decreases and the image blur increases), which affects the accuracy of pedestrian detection on the images and leads to false alarms or missed alarms.
Disclosure of Invention
The present invention has been made to overcome the above-mentioned drawbacks, and provides a moving object detection alarm method, a moving object detection alarm device, and a computer readable storage medium that solve, or at least partially solve, the technical problem of how to accurately identify a moving object using images in an environment with poor illumination conditions.
In a first aspect, a moving object detection alarm method is provided, the method including: respectively detecting a moving target of each frame of detection image to obtain a target detection frame of each frame of detection image; for each frame of detection image, respectively acquiring a first to-be-processed detection frame with the same track association relationship with the target detection frame of each frame of detection image and a second to-be-processed detection frame without the track association relationship from the target detection frame of the previous frame or frames of detection image of each frame of detection image; generating a motion track of one or more motion targets according to the image arrangement sequence of each frame of detection image, the target detection frame of each frame of detection image and a first to-be-processed detection frame which has the same track association relation with each target detection frame; carrying out statistical analysis on the detection frame position of the second detection frame to be processed so as to obtain a motion area of the moving target; and alarming according to the motion trail and/or the motion area of the moving object.
In the technical solution of the above moving object detection alarm method, the step of acquiring the target detection frame of each frame of detection image specifically includes: acquiring the current brightness of the current detection image and the historical brightness of the consecutive multi-frame detection images before the current detection image, and judging whether the brightness variation between the current brightness and the historical brightness is greater than or equal to a preset variation threshold; if yes, not performing moving target detection on the current detection image; if not, acquiring the foreground pixels in the current detection image by means of a foreground detection algorithm; performing region communication on the foreground pixels to form one or more pixel groups; obtaining the circumscribed rectangular frame of each pixel group and the size of each circumscribed rectangular frame; and acquiring, according to the sizes, the circumscribed rectangular frames consistent with a preset target size, and setting them as the target detection frames of the current detection image; wherein the preset target size is determined according to the actual size of the moving target, the actual movement detection range of the moving target, the size of the detection image, and the sizes of the target detection frames obtained from the consecutive multi-frame detection images before the current detection image.
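As a minimal sketch of this acquisition step (assuming OpenCV, with the MOG2 background subtractor standing in for the foreground detector and connected components serving as the region communication step; the brightness threshold and the size gate are illustrative values, not taken from the patent):

```python
import cv2
import numpy as np

def get_target_boxes(frame, history_brightness, bg_subtractor,
                     change_thresh=40.0, size_range=((8, 16), (120, 240))):
    """Return candidate target detection frames for one image, or None if the
    frame is skipped because of a brightness jump."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Skip the frame when its brightness jumps away from the recent history.
    if abs(float(gray.mean()) - float(np.mean(history_brightness))) >= change_thresh:
        return None
    fg_mask = bg_subtractor.apply(frame)                  # foreground detection
    binary = (fg_mask > 0).astype(np.uint8)
    # Region communication: connect foreground pixels into pixel groups.
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    (min_w, min_h), (max_w, max_h) = size_range
    boxes = []
    for i in range(1, n):                                 # label 0 is the background
        x, y, w, h = (stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP],
                      stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT])
        if min_w <= w <= max_w and min_h <= h <= max_h:   # gate by the preset target size
            boxes.append((x, y, w, h))
    return boxes

# The background subtractor would be created once and fed every frame in order, e.g.:
# bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
```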
In the technical solution of the above moving object detection alarm method, after the step of setting the circumscribed rectangular frame as the target detection frame of the current detection image, the method further includes performing a merging process on the target detection frames by: selecting, from the target detection frames, a target detection frame whose area is smaller than or equal to a preset area threshold as a first to-be-combined detection frame; screening according to the distance between the first to-be-combined detection frame and the other target detection frames in the current detection image to obtain a second to-be-combined detection frame; calculating a combined gain value of the first to-be-combined detection frame and the second to-be-combined detection frame; and selectively combining the first to-be-combined detection frame and the second to-be-combined detection frame according to a comparison result of the combined gain value and a preset gain threshold; wherein the combined gain value is the ratio of the sum of the areas of the first to-be-combined detection frame and the second to-be-combined detection frame to the area of the new detection frame formed after the first to-be-combined detection frame and the second to-be-combined detection frame are combined (a sketch of this gain computation is given after the filtering description below);
and/or, after the step of setting the circumscribed rectangle frame as the target detection frame of the current detection image, the method further includes performing a filtering process on the target detection frame by: calculating a state change value of each target detection frame in the current detection image according to the detection frame state information of the target detection frame in the current detection image and the detection frame state information of the target detection frame in one or more previous detection images of the current detection image; if the state change value is greater than or equal to a preset change threshold value, deleting the corresponding target detection frame; the detection frame state information comprises the brightness, the size and the position of a target detection frame, the state change value comprises a brightness change value, a size change value and a moving speed, and the preset change threshold comprises a brightness change threshold, a size change threshold and a moving speed threshold; and/or, obtaining the sum of the areas of all target detection frames in the current detection image, and calculating the ratio of the sum of the areas to the image area of the current detection image; if the ratio is greater than or equal to a preset ratio threshold, deleting all target detection frames in the current detection image; and/or obtaining the length-width ratio of each target detection frame in the current detection image, and judging whether the length-width ratio is consistent with the length-width ratio of a preset moving target; if not, deleting the corresponding target detection frame.
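A sketch of the combined gain value defined at the end of the merging paragraph above, assuming boxes are given as (x, y, w, h) tuples and taking the "new detection frame" to be the smallest rectangle enclosing both boxes:

```python
def box_area(box):
    x, y, w, h = box
    return w * h

def merge_gain(box_a, box_b):
    """Ratio of the two frames' summed area to the area of the new enclosing frame;
    values near 1 mean the merged frame would contain little empty space."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    x0, y0 = min(ax, bx), min(ay, by)
    x1, y1 = max(ax + aw, bx + bw), max(ay + ah, by + bh)
    return (box_area(box_a) + box_area(box_b)) / float((x1 - x0) * (y1 - y0))
```

And a sketch of the three optional filters; the box records, their matching to the previous image, and all threshold values are assumptions made for illustration:

```python
def filter_boxes(matched_boxes, image_area, brightness_thresh=60.0,
                 size_thresh=0.5, speed_thresh=30.0,
                 area_ratio_thresh=0.5, aspect_range=(0.25, 0.6)):
    """matched_boxes: list of (current, previous) state dicts with keys
    'brightness', 'area', 'speed', 'w', 'h'; previous may be None."""
    kept = []
    for cur, prev in matched_boxes:
        # 1) state-change filter: drop frames whose state jumps too far between images
        if prev is not None and (
                abs(cur["brightness"] - prev["brightness"]) >= brightness_thresh
                or abs(cur["area"] - prev["area"]) / prev["area"] >= size_thresh
                or cur["speed"] >= speed_thresh):
            continue
        kept.append(cur)
    # 2) total-area filter: drop all frames if they cover too much of the image
    if sum(b["area"] for b in kept) / float(image_area) >= area_ratio_thresh:
        return []
    # 3) length-width ratio filter: keep frames matching the preset moving target
    lo, hi = aspect_range
    return [b for b in kept if lo <= b["w"] / b["h"] <= hi]
```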
In the technical scheme of the moving object detection alarm method, the step of screening according to the distance between the first detection frame to be combined and other target detection frames in the current detection image to obtain a second detection frame to be combined specifically includes: step S11: setting the other target detection frames as detection frames to be screened; step S12: obtaining K nearest neighbor detection frames of the first detection frames to be combined from the detection frames to be screened by adopting a nearest neighbor algorithm, wherein K is more than or equal to 1; step S13: judging whether the distance between each nearest neighbor detection frame and the first detection frame to be combined is greater than or equal to a preset distance threshold value, and acquiring an initial second detection frame to be combined and an updated detection frame to be screened according to a judging result; if the distance corresponding to the current nearest neighbor detection frame is greater than or equal to the preset distance threshold, setting the current nearest neighbor detection frame as an initial second detection frame to be combined and deleting the current nearest neighbor detection frame from the detection frames to be screened so as to update the detection frames to be screened; step S14: judging whether the number of the initial second detection frames to be combined is K or not; if yes, setting the initial second detection frame to be combined as a final second detection frame to be combined; if not, go to step S15; step S15: judging whether the number of the updated detection frames to be screened is zero or not; if so, selecting K target detection frames from the other target detection frames according to the sequence that the distances between the first detection frame to be combined and each other target detection frame are from big to small, and setting the K target detection frames as final second detection frames to be combined; if not, turning to the step S12 and executing the step S12 according to the updated detection frame to be screened;
And/or, the step of selectively combining the first to-be-combined detection frame and the second to-be-combined detection frame according to the comparison result of the combined gain value and the preset gain threshold specifically includes: Step S21: forming an initial detection frame set from the first to-be-combined detection frame and all of its corresponding second to-be-combined detection frames; Step S22: obtaining the combined gain value B_1 produced by combining the detection frames in the initial detection frame set; if B_1 ≥ B_1th, combining the detection frames in the initial detection frame set; if B_1 < B_1th, going to step S23; wherein B_1th is a gain threshold, B_1th = A·B^(n-1), A and B are preset threshold coefficients, and n is the number of detection frames in the initial detection frame set; Step S23: for each sub-detection frame set of the initial detection frame set, respectively obtaining the combined gain value produced by combining the detection frames in that sub-detection frame set, and obtaining the maximum combined gain value B_2 among the combined gain values corresponding to the sub-detection frame sets, wherein each sub-detection frame set is formed by deleting a different single detection frame from the initial detection frame set; if B_2 ≥ B_2th, combining the detection frames in the sub-detection frame set corresponding to B_2; if B_2 < B_2th, going to step S24; wherein B_2th is a gain threshold, B_2th = A·B^(n'-1), and n' is the number of detection frames in the sub-detection frame set corresponding to B_2; Step S24: judging whether the sub-detection frame set corresponding to B_2 contains the first to-be-combined detection frame; if yes, going to step S25; if not, performing no combining of the first to-be-combined detection frame and the second to-be-combined detection frame; Step S25: after resetting the sub-detection frame set corresponding to B_2 as the initial detection frame set, going to step S23 and executing step S23 according to the reset initial detection frame set.
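Steps S11 to S15 can be sketched as follows; the centre-distance metric and the comparison directions follow the text as written, while dropping every examined neighbour from the pool (not only the accepted ones) is an added assumption so that the loop always terminates:

```python
import numpy as np

def box_center(box):
    x, y, w, h = box
    return np.array([x + w / 2.0, y + h / 2.0])

def screen_candidates(first_box, other_boxes, k=3, dist_thresh=50.0):
    """Steps S11-S15: pick K second to-be-combined frames for first_box."""
    pool = list(other_boxes)                                        # S11
    candidates = []
    while True:
        if not pool:                                                # S15: pool exhausted
            dists = [np.linalg.norm(box_center(first_box) - box_center(b))
                     for b in other_boxes]
            order = np.argsort(dists)[::-1]                         # big to small, per the text
            return [other_boxes[i] for i in order[:k]]
        dists = [np.linalg.norm(box_center(first_box) - box_center(b)) for b in pool]
        nearest = list(np.argsort(dists))[:k]                       # S12: K nearest neighbours
        for i in sorted(nearest, reverse=True):
            if dists[i] >= dist_thresh:                             # S13, as written
                candidates.append(pool[i])
            del pool[i]     # assumption: drop examined frames so S12 makes progress
        if len(candidates) >= k:                                    # S14
            return candidates[:k]
```

Steps S21 to S25 then amount to a greedy search that shrinks the candidate set one detection frame at a time; the values used below for the preset threshold coefficients A and B are placeholders:

```python
import itertools

def enclosing(boxes):
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[0] + b[2] for b in boxes)
    y1 = max(b[1] + b[3] for b in boxes)
    return (x0, y0, x1 - x0, y1 - y0)

def set_gain(boxes):
    """Combined gain of a set: summed frame area over the enclosing frame's area."""
    _, _, w, h = enclosing(boxes)
    return sum(b[2] * b[3] for b in boxes) / float(w * h)

def selective_merge(first_box, candidates, A=0.9, B=0.95):
    """Steps S21-S25; boxes are (x, y, w, h) tuples. Returns the merged frame,
    or None when no merging is performed. Stops once only two frames remain."""
    current = [first_box] + list(candidates)                # S21: initial set
    if set_gain(current) >= A * B ** (len(current) - 1):    # S22: B_1 >= B_1th
        return enclosing(current)
    while len(current) > 2:
        subsets = list(itertools.combinations(current, len(current) - 1))
        best = max(subsets, key=set_gain)                   # S23: subset with gain B_2
        if set_gain(best) >= A * B ** (len(best) - 1):      # B_2 >= B_2th
            return enclosing(best)
        if first_box not in best:                           # S24: first frame dropped out
            return None
        current = list(best)                                # S25: recurse on the subset
    return None
```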
In the technical solution of the above moving object detection alarm method, the step of acquiring a first to-be-processed detection frame having the same track association relationship with the target detection frame of each frame of detection image and a second to-be-processed detection frame having no track association relationship specifically includes: setting the target detection frames of one or more frames of detection images before the current detection image as history detection frames, and calculating the assignment gain corresponding to each history detection frame when that history detection frame and each target detection frame in the current detection image are assigned to the same motion track; setting the history detection frame corresponding to the maximum assignment gain as the first to-be-processed detection frame, and setting the other history detection frames as second to-be-processed detection frames; wherein the assignment gain represents the credibility that the history detection frame and the target detection frame in the current detection image belong to the same motion track, and the numerical value of the assignment gain is positively correlated with the credibility.
In the technical scheme of the moving object detection alarm method, the step of calculating the assignment gain corresponding to each history detection frame when each history detection frame and each object detection frame in the current detection image are assigned to belong to the same motion track specifically includes:
the assigned gain for each history detection box is calculated according to the method shown in the following formula:
gain_final_(i,j) = α_0·gain_org_(i,j) + α_1·S_(i,j) + α_2·(1 - Δ_1_(i,j)) + α_3·(1 - Δ_2_(i,j)) + α_4·(1 - Δ_3_(i,j))

wherein gain_final_(i,j) represents the assignment gain obtained when the i-th history detection frame and the j-th target detection frame in the current detection image are assigned to the same motion track; gain_org_(i,j) represents the intersection ratio of the i-th history detection frame and the j-th target detection frame in the current detection image; S_(i,j) represents the direction cosine between the direction vector of the i-th history detection frame and the direction vector of the j-th target detection frame; Δ_1_(i,j) represents the degree of change of the detection frame area, Δ_1_(i,j) = Δ_area_(i,j)/area_j, where Δ_area_(i,j) represents the area difference between the i-th history detection frame and the j-th target detection frame and area_j represents the area of the j-th target detection frame; Δ_2_(i,j) represents the degree of change of the detection frame brightness, Δ_2_(i,j) = Δ_bright_(i,j)/bright_j, where Δ_bright_(i,j) represents the brightness difference between the i-th history detection frame and the j-th target detection frame and bright_j represents the brightness of the j-th target detection frame; Δ_3_(i,j) represents the degree of change of the detection frame hue, Δ_3_(i,j) = Δ_hue_(i,j)/hue_j, where Δ_hue_(i,j) represents the hue difference between the i-th history detection frame and the j-th target detection frame and hue_j represents the hue value of the j-th target detection frame.
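A direct transcription of this formula, assuming each detection frame record carries its box, motion direction vector, mean brightness and hue; the α weights are placeholder values, since the patent does not give them here:

```python
import numpy as np

def box_iou(a, b):
    """gain_org_(i,j): intersection ratio of two (x, y, w, h) frames."""
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter + 1e-9)

def assignment_gain(hist, targ, alphas=(0.4, 0.15, 0.15, 0.15, 0.15)):
    """hist/targ: dicts with keys 'box', 'direction', 'brightness', 'hue'.
    The alpha weights are placeholder values, not taken from the patent."""
    a0, a1, a2, a3, a4 = alphas
    u = np.asarray(hist["direction"], dtype=float)
    v = np.asarray(targ["direction"], dtype=float)
    s = float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)   # S_(i,j)
    area_j = targ["box"][2] * targ["box"][3]
    d1 = abs(hist["box"][2] * hist["box"][3] - area_j) / area_j         # area change degree
    d2 = abs(hist["brightness"] - targ["brightness"]) / targ["brightness"]
    d3 = abs(hist["hue"] - targ["hue"]) / targ["hue"]
    return (a0 * box_iou(hist["box"], targ["box"]) + a1 * s
            + a2 * (1 - d1) + a3 * (1 - d2) + a4 * (1 - d3))
```

For each target detection frame, the history detection frame with the maximum assignment gain would then be taken as the first to-be-processed detection frame and the remaining history detection frames as second to-be-processed detection frames, as described above.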
In the technical scheme of the moving object detection alarm method, the specific steps of alarming according to the moving track and/or the moving area of the moving object include: analyzing whether the motion trail belongs to the motion trail of the preset motion target or not according to the comparison result of the motion trail and the preset motion target; when the motion trail belongs to the motion trail of the preset moving target, continuously analyzing whether the preset moving target is abnormal in action according to the comparison result, and alarming according to the analysis result and/or the motion area of the moving target; and if the motion trail does not belong to the preset motion trail of the moving target, alarming according to the motion area of the moving target.
In the technical solution of the above moving object detection alarm method, the step of performing statistical analysis on the detection frame position of the second to-be-processed detection frame to obtain the motion area of the moving object specifically includes: acquiring the second to-be-processed detection frame corresponding to each frame of detection image in the consecutive multi-frame detection images; performing clustering according to the detection frame position of each second to-be-processed detection frame so as to obtain one or more clusters; acquiring the density of second to-be-processed detection frames in each cluster; and acquiring the clusters whose density is greater than or equal to a preset density threshold, and setting the regions corresponding to these clusters as the motion area of the moving object;
And/or, the step of performing statistical analysis on the detection frame position of the second to-be-processed detection frame to obtain the motion area of the moving object specifically includes: acquiring the second to-be-processed detection frame corresponding to each frame of detection image; taking the length and the width of each second to-be-processed detection frame as two-dimensional variables, and obtaining a two-dimensional Gaussian distribution function corresponding to each second to-be-processed detection frame; respectively acquiring, by means of the two-dimensional Gaussian distribution function, a probability value for each coordinate position in each second to-be-processed detection frame, and constructing a global probability map according to the probability values, wherein the probability value stored at each pixel point in the global probability map represents the probability that the pixel point belongs to a moving object; acquiring the pixel point positions whose probability values in the global probability map are greater than or equal to a preset probability threshold, and setting the region corresponding to these pixel point positions as the motion area of the moving object; wherein the global probability map has the same size as the detection image.
In the technical solution of the above moving object detection alarm method, the step of constructing the global probability map specifically includes: Step S31: after moving object detection is performed on the current detection image, respectively subtracting a preset attenuation value from the probability value stored at each pixel point position in the global probability map to be updated, so as to obtain a once-updated global probability map; Step S32: judging whether a second to-be-processed detection frame having no track association relationship with the target detection frame of the current detection image has been acquired; if yes, going to step S33; if not, resetting the next frame of detection image as the current detection image, then going to step S31, and executing step S31 according to the reset current detection image; Step S33: acquiring the probability value of each coordinate position in the second to-be-processed detection frame by means of the two-dimensional Gaussian distribution function corresponding to the second to-be-processed detection frame; Step S34: according to the correspondence between each coordinate position in the second to-be-processed detection frame and each pixel point position in the global probability map to be updated, respectively accumulating the probability value of each coordinate position onto the probability value stored at the corresponding pixel point position, so as to update the probability values stored at the pixel point positions and obtain a re-updated global probability map; Step S35: resetting the re-updated global probability map as the global probability map to be updated, resetting the next frame of detection image as the current detection image, then going to step S31, and executing step S31 according to the reset global probability map to be updated and the reset current detection image.
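For the clustering variant described above, a sketch using K-means over the detection frame centres (the claim only says "clustering"; K-means matches the classification data of this patent, and the cluster count and density threshold are placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans

def motion_regions_by_clustering(pending_boxes, n_clusters=3, density_thresh=0.02):
    """Cluster second to-be-processed frames by centre position and keep the dense
    clusters; assumes at least one box. Thresholds are placeholders."""
    centers = np.array([[x + w / 2.0, y + h / 2.0] for x, y, w, h in pending_boxes])
    k = min(n_clusters, len(centers))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(centers)
    regions = []
    for c in np.unique(labels):
        pts = centers[labels == c]
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        area = max((x1 - x0) * (y1 - y0), 1.0)
        if len(pts) / area >= density_thresh:   # frames per unit area as the density
            regions.append((x0, y0, x1, y1))
    return regions
```

For the probability-map variant, the sketch below builds the per-frame two-dimensional Gaussian patch and performs one S31-S35 update; tying the Gaussian sigma to the box size, clipping at zero, and assuming boxes lie inside the image are illustrative choices:

```python
import numpy as np

def box_probability_patch(w, h):
    """2-D Gaussian over a w-by-h frame, peaked at the frame centre
    (sigma = size / 4 is an assumed choice)."""
    xs = np.arange(w) - (w - 1) / 2.0
    ys = np.arange(h) - (h - 1) / 2.0
    gx = np.exp(-xs ** 2 / (2 * (w / 4.0) ** 2))
    gy = np.exp(-ys ** 2 / (2 * (h / 4.0) ** 2))
    return np.outer(gy, gx)                      # shape (h, w)

def update_probability_map(prob_map, pending_boxes, decay=0.02):
    """One S31-S35 update: attenuate the whole map, then accumulate the Gaussian
    patch of every second to-be-processed frame of the current image."""
    prob_map = np.maximum(prob_map - decay, 0.0)          # S31: subtract attenuation
    for x, y, w, h in pending_boxes:                      # S32-S34: accumulate patches
        prob_map[y:y + h, x:x + w] += box_probability_patch(w, h)
    return prob_map                                       # S35: map for the next frame

# Motion region: pixel positions whose accumulated probability clears the threshold,
# e.g. region_mask = prob_map >= prob_thresh
```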
In a second aspect, a moving object detection alarm device is provided, the device including: a target detection frame acquisition module configured to perform moving target detection on each frame of detection image respectively, so as to acquire the target detection frame of each frame of detection image; a to-be-processed detection frame acquisition module configured to acquire, for each frame of detection image, a first to-be-processed detection frame having the same track association relationship with the target detection frame of that frame of detection image and a second to-be-processed detection frame having no track association relationship from the target detection frames of the previous frame or frames of detection images; a motion track generation module configured to generate the motion tracks of one or more moving targets according to the image arrangement order of the frames of detection images, the target detection frame of each frame of detection image and the first to-be-processed detection frames having the same track association relationship with each target detection frame; a motion area acquisition module configured to perform statistical analysis on the detection frame positions of the second to-be-processed detection frames so as to acquire the motion area of a moving object; and an alarm module configured to alarm according to the motion track and/or the motion area of the moving object.
In the above technical solution of the moving object detection alarm device, the object detection frame acquisition module is further configured to perform the following operations: acquiring the current brightness of a current detection image and the historical brightness of a continuous multi-frame detection image before the current detection image, and judging whether the brightness variation of the current brightness and the historical brightness is larger than or equal to a preset variation threshold; if yes, not detecting a moving target of the current detection image; if not, acquiring foreground pixels in the current detection image by adopting a foreground detection algorithm; performing area communication on the foreground pixels to form one or more pixel groups; obtaining an external rectangular frame of each pixel group and the size of each external rectangular frame; acquiring an external rectangular frame consistent with a preset target size according to the size, and setting the external rectangular frame as a target detection frame of the current detection image; the preset target size is determined according to the actual size of the moving target, the actual moving detection range of the moving target, the size of the detection image and the size of a target detection frame obtained by adopting continuous multi-frame detection images before the current detection image.
In the technical solution of the above moving object detection alarm device, the device further includes a target detection frame merging module and/or a target detection frame filtering module; the target detection frame merging module includes a first to-be-combined detection frame acquisition sub-module, a second to-be-combined detection frame acquisition sub-module, a combined gain value calculation sub-module and a merging processing sub-module; the first to-be-combined detection frame acquisition sub-module is configured to select, from the target detection frames, a target detection frame whose area is smaller than or equal to a preset area threshold as a first to-be-combined detection frame; the second to-be-combined detection frame acquisition sub-module is configured to screen according to the distance between the first to-be-combined detection frame and the other target detection frames in the current detection image so as to obtain a second to-be-combined detection frame; the combined gain value calculation sub-module is configured to calculate the combined gain value of the first to-be-combined detection frame and the second to-be-combined detection frame; the merging processing sub-module is configured to selectively combine the first to-be-combined detection frame and the second to-be-combined detection frame according to a comparison result of the combined gain value and a preset gain threshold; wherein the combined gain value is the ratio of the sum of the areas of the first to-be-combined detection frame and the second to-be-combined detection frame to the area of the new detection frame formed after the first to-be-combined detection frame and the second to-be-combined detection frame are combined; the target detection frame filtering module includes a first filtering sub-module and/or a second filtering sub-module and/or a third filtering sub-module; the first filtering sub-module is configured to calculate a state change value of each target detection frame in the current detection image according to the detection frame state information of the target detection frame in the current detection image and the detection frame state information of the target detection frame in one or more detection images before the current detection image, and, if the state change value is greater than or equal to a preset change threshold, delete the corresponding target detection frame; the detection frame state information includes the brightness, size and position of a target detection frame, the state change value includes a brightness change value, a size change value and a moving speed, and the preset change threshold includes a brightness change threshold, a size change threshold and a moving speed threshold; the second filtering sub-module is configured to acquire the sum of the areas of all target detection frames in the current detection image, calculate the ratio of the sum of the areas to the image area of the current detection image, and, if the ratio is greater than or equal to a preset ratio threshold, delete all target detection frames in the current detection image; the third filtering sub-module is configured to acquire the length-width ratio of each target detection frame in the current detection image, judge whether the length-width ratio is consistent with the length-width ratio of a preset moving target, and, if not, delete the corresponding target detection frame.
In the above technical solution of the moving object detection alarm device, the second detection frame to be combined acquisition submodule is further configured to perform the following operations: step S11: setting the other target detection frames as detection frames to be screened; step S12: obtaining K nearest neighbor detection frames of the first detection frames to be combined from the detection frames to be screened by adopting a nearest neighbor algorithm, wherein K is more than or equal to 1; step S13: judging whether the distance between each nearest neighbor detection frame and the first detection frame to be combined is greater than or equal to a preset distance threshold value, and acquiring an initial second detection frame to be combined and an updated detection frame to be screened according to a judging result; if the distance corresponding to the current nearest neighbor detection frame is greater than or equal to the preset distance threshold, setting the current nearest neighbor detection frame as an initial second detection frame to be combined and deleting the current nearest neighbor detection frame from the detection frames to be screened so as to update the detection frames to be screened; step S14: judging whether the number of the initial second detection frames to be combined is K or not; if yes, setting the initial second detection frame to be combined as a final second detection frame to be combined; if not, go to step S15; step S15: judging whether the number of the updated detection frames to be screened is zero or not; if so, selecting K target detection frames from the other target detection frames according to the sequence that the distances between the first detection frame to be combined and each other target detection frame are from big to small, and setting the K target detection frames as final second detection frames to be combined; if not, turning to the step S12 and executing the step S12 according to the updated detection frame to be screened;
The merging processing sub-module is further configured to perform the following operations: Step S21: forming an initial detection frame set from the first to-be-combined detection frame and all of its corresponding second to-be-combined detection frames; Step S22: obtaining the combined gain value B_1 produced by combining the detection frames in the initial detection frame set; if B_1 ≥ B_1th, combining the detection frames in the initial detection frame set; if B_1 < B_1th, going to step S23; wherein B_1th is a gain threshold, B_1th = A·B^(n-1), A and B are preset threshold coefficients, and n is the number of detection frames in the initial detection frame set; Step S23: for each sub-detection frame set of the initial detection frame set, respectively obtaining the combined gain value produced by combining the detection frames in that sub-detection frame set, and obtaining the maximum combined gain value B_2 among the combined gain values corresponding to the sub-detection frame sets, wherein each sub-detection frame set is formed by deleting a different single detection frame from the initial detection frame set; if B_2 ≥ B_2th, combining the detection frames in the sub-detection frame set corresponding to B_2; if B_2 < B_2th, going to step S24; wherein B_2th is a gain threshold, B_2th = A·B^(n'-1), and n' is the number of detection frames in the sub-detection frame set corresponding to B_2; Step S24: judging whether the sub-detection frame set corresponding to B_2 contains the first to-be-combined detection frame; if yes, going to step S25; if not, performing no combining of the first to-be-combined detection frame and the second to-be-combined detection frame; Step S25: after resetting the sub-detection frame set corresponding to B_2 as the initial detection frame set, going to step S23 and executing step S23 according to the reset initial detection frame set.
In the technical solution of the above moving object detection alarm device, the to-be-processed detection frame acquisition module includes an assignment gain calculation sub-module and a to-be-processed detection frame acquisition sub-module; the assignment gain calculation sub-module is configured to set the target detection frames of one or more frames of detection images before the current detection image as history detection frames, and to calculate the assignment gain corresponding to each history detection frame when that history detection frame and each target detection frame in the current detection image are assigned to the same motion track; the to-be-processed detection frame acquisition sub-module is configured to set the history detection frame corresponding to the maximum assignment gain as the first to-be-processed detection frame, and to set the other history detection frames as second to-be-processed detection frames; wherein the assignment gain represents the credibility that the history detection frame and the target detection frame in the current detection image belong to the same motion track, and the numerical value of the assignment gain is positively correlated with the credibility.
In the above technical solution of the moving object detection alarm device, the assignment gain calculation sub-module is further configured to calculate an assignment gain corresponding to each history detection frame according to a method shown in the following formula:
gain_final_(i,j) = α_0·gain_org_(i,j) + α_1·S_(i,j) + α_2·(1 - Δ_1_(i,j)) + α_3·(1 - Δ_2_(i,j)) + α_4·(1 - Δ_3_(i,j))

wherein gain_final_(i,j) represents the assignment gain obtained when the i-th history detection frame and the j-th target detection frame in the current detection image are assigned to the same motion track; gain_org_(i,j) represents the intersection ratio of the i-th history detection frame and the j-th target detection frame in the current detection image; S_(i,j) represents the direction cosine between the direction vector of the i-th history detection frame and the direction vector of the j-th target detection frame; Δ_1_(i,j) represents the degree of change of the detection frame area, Δ_1_(i,j) = Δ_area_(i,j)/area_j, where Δ_area_(i,j) represents the area difference between the i-th history detection frame and the j-th target detection frame and area_j represents the area of the j-th target detection frame; Δ_2_(i,j) represents the degree of change of the detection frame brightness, Δ_2_(i,j) = Δ_bright_(i,j)/bright_j, where Δ_bright_(i,j) represents the brightness difference between the i-th history detection frame and the j-th target detection frame and bright_j represents the brightness of the j-th target detection frame; Δ_3_(i,j) represents the degree of change of the detection frame hue, Δ_3_(i,j) = Δ_hue_(i,j)/hue_j, where Δ_hue_(i,j) represents the hue difference between the i-th history detection frame and the j-th target detection frame and hue_j represents the hue value of the j-th target detection frame.
In the technical solution of the above moving object detection alarm device, the alarm module is further configured to perform the following operations: analyzing whether the motion trail belongs to the motion trail of the preset motion target or not according to the comparison result of the motion trail and the preset motion target; when the motion trail belongs to the motion trail of the preset moving target, continuously analyzing whether the preset moving target is abnormal in action according to the comparison result, and alarming according to the analysis result and/or the motion area of the moving target; and if the motion trail does not belong to the preset motion trail of the moving target, alarming according to the motion area of the moving target.
In the technical solution of the above moving object detection alarm device, the motion area acquisition module includes a first motion area acquisition sub-module and/or a second motion area acquisition sub-module; the first motion area acquisition sub-module is configured to: acquire the second to-be-processed detection frame corresponding to each frame of detection image in the consecutive multi-frame detection images; perform clustering according to the detection frame position of each second to-be-processed detection frame to obtain one or more clusters; acquire the density of second to-be-processed detection frames in each cluster; and acquire the clusters whose density is greater than or equal to a preset density threshold, and set the regions corresponding to these clusters as the motion area of the moving object; the second motion area acquisition sub-module is configured to: acquire the second to-be-processed detection frame corresponding to each frame of detection image; take the length and the width of each second to-be-processed detection frame as two-dimensional variables, and obtain a two-dimensional Gaussian distribution function corresponding to each second to-be-processed detection frame; respectively acquire, by means of the two-dimensional Gaussian distribution function, a probability value for each coordinate position in each second to-be-processed detection frame, and construct a global probability map according to the probability values, wherein the probability value stored at each pixel point in the global probability map represents the probability that the pixel point belongs to a moving object; and acquire the pixel point positions whose probability values in the global probability map are greater than or equal to a preset probability threshold, and set the region corresponding to these pixel point positions as the motion area of the moving object; wherein the global probability map has the same size as the detection image.
In the above technical solution of the moving object detection alarm device, the second motion area acquisition sub-module is further configured to construct the global probability map by performing the following operations: Step S31: after moving object detection is performed on the current detection image, respectively subtracting a preset attenuation value from the probability value stored at each pixel point position in the global probability map to be updated, so as to obtain a once-updated global probability map; Step S32: judging whether a second to-be-processed detection frame having no track association relationship with the target detection frame of the current detection image has been acquired; if yes, going to step S33; if not, resetting the next frame of detection image as the current detection image, then going to step S31, and executing step S31 according to the reset current detection image; Step S33: acquiring the probability value of each coordinate position in the second to-be-processed detection frame by means of the two-dimensional Gaussian distribution function corresponding to the second to-be-processed detection frame; Step S34: according to the correspondence between each coordinate position in the second to-be-processed detection frame and each pixel point position in the global probability map to be updated, respectively accumulating the probability value of each coordinate position onto the probability value stored at the corresponding pixel point position, so as to update the probability values stored at the pixel point positions and obtain a re-updated global probability map; Step S35: resetting the re-updated global probability map as the global probability map to be updated, resetting the next frame of detection image as the current detection image, then going to step S31, and executing step S31 according to the reset global probability map to be updated and the reset current detection image.
In a third aspect, a control device is provided, including a processor and a storage device, the storage device being adapted to store a plurality of program codes, the program codes being adapted to be loaded and executed by the processor to perform the moving object detection alarm method according to any one of the above technical solutions.
In a fourth aspect, a computer readable storage medium is provided, in which a plurality of program codes are stored, the program codes being adapted to be loaded and executed by a processor to perform the moving object detection alarm method according to any one of the above technical solutions.
The technical scheme provided by the invention has at least one or more of the following beneficial effects:
Conventional image target detection methods often cannot accurately detect an image target when processing low-quality images (e.g., with reduced brightness or increased blur) captured in environments with poor illumination conditions, such as at night or under occlusion by trees and mountain forests. In the technical solution of the invention, image target detection can be performed accurately regardless of image quality through the following steps: first, moving target detection is performed on each frame of detection image to obtain the target detection frame of each frame; then, for each frame of detection image, a first to-be-processed detection frame having the same track association relationship with the target detection frame of that frame and a second to-be-processed detection frame having no track association relationship are obtained from the target detection frames of the preceding one or more frames of detection images. Finally, the first to-be-processed detection frames having the same track association relationship are connected into a motion track of the moving object, while statistical analysis of the detection frame positions is performed on the second to-be-processed detection frames having no track association relationship to obtain the motion area of the moving object, and an alarm is issued according to the obtained motion track and/or motion area. In an environment with poor illumination conditions, the detected motion track may be intermittent because one or more complete motion tracks cannot be formed, so a moving target is easily missed. By statistically analyzing the detection frame positions of the second to-be-processed detection frames that lack a track association relationship, the embodiment of the invention can obtain the motion area of the moving object (the region where the moving object is present with high probability), overcoming the tendency to miss moving targets; even in environments with poor illumination conditions, the motion track and the activity area of the moving object can be obtained accurately, thereby significantly improving the accuracy of moving object detection in images.
Drawings
Embodiments of the invention are described below with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart illustrating the main steps of a moving object detection alert method according to one embodiment of the present invention;
FIG. 2 is a flow chart illustrating the main steps of a method for acquiring a target detection frame according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating the main steps of a method for merging target detection frames according to an embodiment of the present invention;
FIG. 4 is a flow chart illustrating main steps of a method for obtaining a second detection frame to be combined in the method shown in FIG. 3;
FIG. 5 is a flow chart illustrating main steps of a method for performing a merging process of a detection frame to be merged according to a benefit value comparison result in the method shown in FIG. 3;
FIG. 6 is a schematic diagram of each position coordinate of a second to-be-processed detection frame in a detection image according to one embodiment of the invention;
FIG. 7 is a schematic diagram of probability values for each coordinate position in a second pending detection frame according to one embodiment of the invention;
fig. 8 is a main structural block diagram of a moving object detection alarm device according to an embodiment of the present invention.
List of reference numerals:
11: a target detection frame acquisition module; 12: the detection frame acquisition module to be processed; 13: a motion trail generation module; 14: a movement region acquisition module; 15: and an alarm module.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "module," "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuitry, various suitable sensors, communication ports, memory, or software components, such as program code, or a combination of software and hardware. The processor may be a central processor, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions. The processor may be implemented in software, hardware, or a combination of both. Non-transitory computer readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random access memory, and the like. The term "a and/or B" means all possible combinations of a and B, such as a alone, B alone or a and B. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include A alone, B alone or A and B. The singular forms "a", "an" and "the" include plural referents.
The conventional perimeter intrusion detection method mainly performs pedestrian detection on images acquired by an image acquisition device in a target area, and judges whether a pedestrian has entered the target area according to the pedestrian detection result. However, in environments such as nighttime or occlusion by trees and mountain forests, the quality of the acquired images is reduced (e.g., the image brightness decreases and the image blur increases), which affects the accuracy of pedestrian detection on the images. In the embodiment of the invention, moving target detection can be performed on each frame of detection image to obtain the target detection frame of each frame, and then, for each frame of detection image, a first to-be-processed detection frame having the same track association relationship with the target detection frame of that frame and a second to-be-processed detection frame having no track association relationship are obtained from the target detection frames of the preceding one or more frames of detection images. Finally, the first to-be-processed detection frames having the same track association relationship are connected into a motion track of the moving object, while statistical analysis of the detection frame positions is performed on the second to-be-processed detection frames having no track association relationship to obtain the motion area of the moving object, and an alarm is issued according to the obtained motion track and/or motion area. In an environment with poor illumination conditions, the detected motion track may be intermittent because one or more complete motion tracks cannot be formed, so a moving target is easily missed. By statistically analyzing the detection frame positions of the second to-be-processed detection frames that lack a track association relationship, the embodiment of the invention can obtain the motion area of the moving object (the region where the moving object is present with high probability), overcoming this tendency to miss detections; even in environments with poor illumination conditions, the motion track and the activity area of the moving object can be obtained accurately, thereby significantly improving the accuracy of moving object detection in images.
In an example of an application scenario of the present invention, a railway monitoring system (the system does not have a railway perimeter pedestrian intrusion monitoring function) is provided in a background server of a certain section of railway, and in order to meet the monitoring requirement of the system, a large number of infrared image acquisition devices are deployed near the line of the section of railway. When the function of the railway monitoring system needs to be upgraded so that the railway monitoring system can perform railway perimeter pedestrian intrusion monitoring on the railway, a device capable of executing the moving object detection alarm method according to one embodiment of the invention can be arranged in the railway monitoring system, and the device can acquire images acquired by image acquisition devices which are already deployed near the railway along the line and take the images as detection images to perform moving object detection, and acquire the moving track of the pedestrian and/or the moving area of the pedestrian according to the detection result. If the motion trail of the pedestrian and/or the motion area of the pedestrian is detected, corresponding alarm information is output, so that railway monitoring personnel can timely take effective safety protection measures, and the safety of the pedestrian and the normal operation of a railway are ensured.
Referring to fig. 1, fig. 1 is a schematic flow chart of main steps of a moving object detection alarm method according to an embodiment of the present invention. As shown in fig. 1, the moving object detection alarm method in the embodiment of the present invention mainly includes the following steps:
Step S101: performing moving target detection on each frame of detection image respectively to obtain the target detection frame of each frame of detection image. A detection image is an image of the image detection area, i.e. the target area, such as a railway perimeter or a station, in which the image acquisition device can acquire images. A target detection frame is information capable of identifying a moving object in the detection image; for example, the target detection frame may be the boundary information of the moving object. If a conventional detection frame acquisition method from the image processing field is used to detect the target detection frames in an image, some of the target detection frames often overlap, so the target detection frames must be further merged after acquisition. During this merging, each target detection frame is compared with the other target detection frames one by one to determine which detection frames should be merged with it, and each newly merged detection frame then continues to participate, as one of the other target detection frames, in the merging of the next target detection frame. As this analysis shows, the conventional detection frame acquisition method has high complexity and needs a long time to accurately acquire the target detection frames in a detection image. In contrast, in the embodiment of the invention, a foreground detection algorithm is used to acquire the foreground pixels in each detection image, the foreground pixels are then region-connected to form one or more pixel groups, and the circumscribed rectangular frame of each pixel group is set as a target detection frame of the moving target in the image. Since there is almost no overlap between pixel groups, there is essentially no overlap between the target detection frames acquired from the pixel groups, and no detection frame merging is needed for this problem, which greatly reduces the acquisition time of the detection frames and improves the acquisition efficiency on the basis of improving the accuracy of the target detection frames. Specifically, referring to fig. 2, in one embodiment, the target detection frame in each frame of detection image may be acquired through the following steps S201 to S207:
Step S201: and acquiring the current brightness of the current detection image and the historical brightness of a plurality of continuous frames of detection images before the current detection image. The historical brightness of successive multi-frame detected images preceding the current detected image includes, but is not limited to: average brightness of the continuous multi-frame detected image, maximum brightness in the continuous multi-frame detected image, minimum brightness in the continuous multi-frame detected image, and so on. The number of consecutive multi-frame detection images before the current detection image can be flexibly set by a person skilled in the art according to practical requirements, for example, the number is 5, and if the current detection image is the 16 th frame detection image, the consecutive multi-frame detection image before the current detection image can be the 11 th to 15 th frame detection images.
Step S202: and judging whether the brightness variation of the current brightness and the historical brightness is larger than or equal to a preset variation threshold value. If the brightness variation is greater than or equal to the preset variation threshold, go to step S207; if the brightness variation is smaller than the preset variation threshold, the process goes to step S203. When the brightness variation is greater than or equal to the preset variation threshold, it indicates that there is a large difference between the brightness of the current detected image and the brightness of the continuous multi-frame detected image before the current detected image, which may be caused by the obvious change of the ambient light during the acquisition of the current detected image, if the current detected image is continuously used for detecting the moving object, the detecting time may not only be wasted, but also the analysis of the moving track and/or the moving area of the moving object by using the detected object detecting frame in the subsequent step may be affected, so when the brightness variation is determined to be greater than or equal to the preset variation threshold, the current detected image may be skipped, and the moving object detection on the next frame of detected image may be continued. The preset variation threshold value can be flexibly set by a person skilled in the art according to actual requirements, so long as the comparison with the variation threshold value is performed, whether the current brightness of the current detection image is larger than the historical brightness of the continuous multi-frame detection image before the current detection image can be judged.
Step S203: and acquiring foreground pixels in the current detection image by adopting a foreground detection algorithm. It should be noted that, in this embodiment, a foreground detection algorithm that is conventional in the image processing technology field may be used to obtain foreground pixels in the detected image, where the foreground detection algorithm includes, but is not limited to: the Vibe algorithm. For brevity of description, the operation principle and specific operation procedure of the foreground detection algorithm are not described in detail herein.
Step S204: the foreground pixels are area-connected to form one or more pixel groups. In this embodiment, first, a foreground detection algorithm is used to obtain a foreground pixel and a background pixel in a current detection image, and a 0-1 binary image is constructed according to each foreground pixel and each background pixel, where the pixel value of the foreground pixel is 1 and the pixel value of the background pixel is 0 in the constructed 0-1 binary image. Then, performing image morphology processing on the 0-1 binary image, for example, performing open processing and then performing close processing on the 0-1 binary image, and acquiring a pixel group according to the result of the image morphology processing. It should be noted that, although the embodiment of the present invention only provides a specific embodiment of performing the open processing and then the close processing to obtain the pixel group, those skilled in the art can understand that the scope of the present invention is not limited to this specific embodiment, and the scheme of modifying or replacing the foregoing image morphological processing manner falls within the scope of the present invention as long as the foreground pixels can be subjected to the region communication to form the pixel group.
Step S205: and obtaining the circumscribed rectangular frame of each pixel group and the size of each circumscribed rectangular frame. In this embodiment, the minimum circumscribed rectangular frame of the pixel group may be acquired, or circumscribed rectangular frames of other sizes may be acquired.
Step S206: and acquiring an external rectangular frame consistent with the preset target size, and setting the external rectangular frame as a target detection frame of the current detection image. If the preset target size is a specific value, the fact that the size difference between the size of the circumscribed rectangular frame and the preset target size is smaller than or equal to a preset difference threshold value can be flexibly set by a person skilled in the art according to actual requirements, and the fact that the size of the circumscribed rectangular frame is relatively close to the preset target size can be judged through the difference threshold value. If the preset target size is a numerical value interval, the fact that the size of the circumscribed rectangular frame is consistent with the preset target size means that the size of the circumscribed rectangular frame falls into the numerical value interval. The preset target size refers to a size determined according to an actual size of a moving target, an actual moving detection range of the moving target, a size of a detection image, and a size of a target detection frame obtained by adopting a plurality of continuous frames of detection images before the current detection image. The actual activity detection range of the moving object may be flexibly set according to the monitoring requirement, for example, the monitoring requirement is to monitor personnel in the convenience store, and then the actual activity detection range of the moving object may be set according to the area of the convenience store. If the area of the convenience store is 10 square meters, the actual activity detection range of the moving object may be set to 10 square meters. In the present embodiment, the minimum target size and/or the maximum target size among the preset target sizes may be set according to the actual size of the moving target, the actual movement detection range of the moving target, and the size of the detection image. One example is: if the actual movement detection range of the moving object is 10 square meters and the actual size of the moving object is 1-2 meters (e.g., the moving object is a person), the minimum target size among the preset target sizes is set to be 24 pixels according to the size of the detected image. When the size of the circumscribed rectangle frame is smaller than 24 pixels, the pixel group represented by the circumscribed rectangle frame is irrelevant to a moving target, so that the circumscribed rectangle frame can be directly deleted. Further, after setting the minimum target size and/or the maximum target size according to the actual size of the moving target, the actual moving detection range of the moving target, and the size of the detection image, the minimum target size and/or the maximum target size may be adjusted according to the size of the target detection frame obtained by using the continuous multi-frame detection image before the current detection image. 
Continuing the above example: if the target detection frames obtained from the consecutive frames preceding the current detection image measure between 50 and 200 pixels, the minimum target size may be raised to 35 pixels, so that frames smaller than 35 pixels no longer need to be analyzed, saving computing power on the device executing the moving object detection alarm method of this embodiment.
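Purely as an illustration of step S206, a size gate over the boxes returned above might look like this; the lower pixel threshold is the example value from the text, while the upper bound and the size measure are assumptions that would in practice be derived from the object size, activity range, image size, and recent frame history:

```python
MIN_TARGET_SIZE = 35   # pixels; adjusted upward from 24 per the example above
MAX_TARGET_SIZE = 400  # pixels; assumed upper bound, not given in the text

def size_consistent(box):
    """Keep a box only if its size is consistent with the preset target size."""
    x, y, w, h = box
    size = max(w, h)   # assumed size measure; the embodiment does not fix one
    return MIN_TARGET_SIZE <= size <= MAX_TARGET_SIZE

def target_boxes(boxes):
    return [b for b in boxes if size_consistent(b)]
```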
Step S207: the moving object detection is not performed on the current detection image. Through the steps S201-S207, the target detection frames of the moving target can be rapidly and accurately acquired from the foreground pixels of the detection image, and the target detection frames are basically detection frames without overlapping, so that the combination processing analysis of each target detection frame is not needed one by one, and the acquisition efficiency of the target detection frames is greatly improved.
In practical applications, when moving object detection is performed on an image acquired under poor light conditions or during an abrupt light change, the brightness of different regions of the moving object may differ considerably, so detection may yield several closely spaced target detection frames that in fact refer to the same moving object. In addition, some target detection frames may turn out far larger than a normal target detection frame; such frames have lost their value for motion trajectory analysis and motion area determination and need to be filtered out. Therefore, after the target detection frames of a detection image have been acquired through steps S201-S207, it must still be determined whether to merge them, so as to reduce the influence of the acquisition environment on moving object detection. To keep the merging efficient and avoid analyzing every target detection frame for merging, only the smaller-area target detection frames are merged with other nearby detection frames. Likewise, after the target detection frames have been acquired through steps S201-S207, frames whose size and/or brightness fluctuates strongly are filtered out, improving the accuracy of motion trajectory analysis and motion area determination.
1. Merging process of target detection frames
In one embodiment, referring to fig. 3, for each frame of detection image, the following steps S301-S304 may be performed to merge the target detection frames of that frame:
step S301: and selecting a target detection frame with the area smaller than or equal to a preset area threshold from the target detection frames as a first detection frame to be combined. It should be noted that, a person skilled in the art can flexibly set a specific value of the preset area threshold according to an actual requirement, so long as the target detection frame with a smaller area can be determined by comparing the specific value with the preset area threshold. Step S302: and screening according to the distance between the first detection frame to be combined and other target detection frames in the current detection image so as to obtain a second detection frame to be combined. The second detection frame to be combined refers to a target detection frame which is closer to the first detection frame to be combined. Step S303: and calculating the combined gain value of the first detection frame to be combined and the second detection frame to be combined. The merging gain value refers to the sum of the areas of the first to-be-merged detecting frame and the second to-be-merged detecting frame and the ratio of the areas of the new detecting frames formed after the first to-be-merged detecting frame and the second to-be-merged detecting frame are merged. Step S304: and selectively combining the first detection frame to be combined with the second detection frame to be combined according to a comparison result of the combined gain value and a preset gain threshold value. If the combined gain value is larger than the preset gain threshold value, the sum of the areas of the first to-be-combined detection frame and the second to-be-combined detection frame is indicated, the occupied area of a new detection frame formed after the first to-be-combined detection frame and the second to-be-combined detection frame are larger, and the first to-be-combined detection frame and the second to-be-combined detection frame can be combined to form the new detection frame; if the combined gain value is smaller than the gain threshold value, the sum of the areas of the first to-be-combined detection frame and the second to-be-combined detection frame is indicated, the occupied area of a new detection frame formed after the first to-be-combined detection frame and the second to-be-combined detection frame are smaller, and the first to-be-combined detection frame and the second to-be-combined detection frame do not need to be combined.
The following describes the above-described step S302 and step S304 of the present embodiment in detail.
In step S302 of this embodiment, referring to fig. 4, the second to-be-combined detection frames may be obtained through the following steps S3021 to S3028.

Step S3021: set the target detection frames of the current detection image other than the current first to-be-combined detection frame (the "other target detection frames" of step S302) as the detection frames to be screened.

Step S3022: use a nearest neighbor algorithm to obtain the K nearest neighbor detection frames of the first to-be-combined detection frame from the detection frames to be screened, where K ≥ 1. Any nearest neighbor algorithm conventional in the data classification field may be used here, including but not limited to the K-Nearest Neighbor algorithm (KNN); for brevity, its working principle is not repeated.

Step S3023: judge whether the distance between each nearest neighbor detection frame and the first to-be-combined detection frame is within a preset distance threshold, and obtain the initial second to-be-combined detection frames and the updated detection frames to be screened according to the result: if the distance corresponding to the current nearest neighbor detection frame does not exceed the preset distance threshold, set that frame as an initial second to-be-combined detection frame; in either case, delete the examined nearest neighbor detection frame from the detection frames to be screened so as to update them. After this has been done for every nearest neighbor detection frame, the finally updated detection frames to be screened and all initial second to-be-combined detection frames are obtained. The preset distance threshold used for each first to-be-combined detection frame may be the same fixed value, or a different value set according to each frame's size. In this embodiment, the distance threshold for each first to-be-combined detection frame may be set from its diagonal distance, for example as d_th_i = d_d_i · y, where d_th_i is the distance threshold corresponding to the i-th first to-be-combined detection frame, d_d_i is the diagonal distance of the i-th first to-be-combined detection frame, and y is a preset proportionality coefficient with y > 1. The specific value of y can be set flexibly by a person skilled in the art according to actual requirements; in one embodiment, y = 3.

Step S3024: judge whether the number of initial second to-be-combined detection frames acquired in step S3023 equals K; if it equals K, go to step S3028; if it is smaller than K, go to step S3025.
Step S3025: judging whether the number of the updated detection frames to be screened (obtained in step S3023) is zero; if zero, go to step S3026; if not, the process proceeds to step S3027. Step S3026: if the number of the updated detection frames to be screened is zero, which indicates that after the nearest neighbor algorithm selection and the distance threshold judgment are performed on each target detection frame in the current detection image, enough second detection frames to be combined are not obtained, then the nearest K target detection frames can be directly selected and set as final second detection frames to be combined according to the distance between the first detection frame to be combined and each other target detection frame (the other target detection frames in step S302), and then K target detection frames can be selected from the other target detection frames according to the sequence that the distance between the first detection frame to be combined and each other target detection frame is from large to small. Step S3027: if the number of the updated frames to be screened is not zero, the process goes to step S3022, and the frames to be screened used in step S3022 are set as the updated frames to be screened obtained in step S3023, that is, after going to step S3022, step S3022 is executed according to the updated frames to be screened obtained in step S3023. Step S3028: and setting the initial second detection frame to be combined as a final second detection frame to be combined. Thus, through the steps S3021 to S3028, the potential second to-be-combined detection frame to be combined is obtained, which is closer to the first to-be-combined detection frame.
In step S304 of this embodiment, referring to fig. 5, the first and second to-be-combined detection frames may be merged through the following steps S3041 to S3047.

Step S3041: form an initial detection frame set from the first to-be-combined detection frame and all of its corresponding second to-be-combined detection frames, the latter having been obtained through step S302.

Step S3042: obtain the merging benefit value B_1 that results from merging all detection frames in the initial detection frame set. B_1 is calculated as shown in formula (1):

B_1 = [area(box_i) + area(box_j,1) + area(box_j,2) + ... + area(box_j,n-1)] / area(box_i + box_j,1 + box_j,2 + ... + box_j,n-1)    (1)

The parameters in formula (1) have the following meanings: area(·) is the function computing a detection frame's area; box_i is the i-th first to-be-combined detection frame; box_j,1, box_j,2, ..., box_j,n-1 are its corresponding 1st, 2nd, ..., (n-1)-th second to-be-combined detection frames; and box_i, box_j,1, box_j,2, ..., box_j,n-1 together form the initial detection frame set corresponding to the i-th first to-be-combined detection frame. Accordingly, area(box_i), area(box_j,1), area(box_j,2), ..., area(box_j,n-1) are the areas of the respective frames, and area(box_i + box_j,1 + box_j,2 + ... + box_j,n-1) is the area of the new detection frame formed by merging box_i with all of its corresponding second to-be-combined detection frames.
Step S3043: judgment B 1 Whether or not to be greater than or equal to the profit threshold B 1th The method comprises the steps of carrying out a first treatment on the surface of the If B 1 ≥B 1th Combining the detection frames in the initial detection frame set (combining all the detection frames in the initial detection frame set); if B 1 <B 1th Then go to step S3044; wherein B is 1th =A·B n-1 A and B are both preset threshold coefficients, n is the initial detection frame setThe number of detection frames, i.e. the total number of detection frames in the initial detection frame set. It should be noted that, the preset threshold coefficients a and B may be empirical values obtained by performing a merging test on the target detection frame in the image sample. In one embodiment, a=0.6 and b=0.83. Step S3044: and respectively acquiring a combined gain value corresponding to each sub-detection frame set obtained after the detection frames in each sub-detection frame set are combined under the initial detection frame set, wherein the difference between each sub-detection frame set is different from one detection frame to be deleted. One example is: if the initial set of detection frames includes detection frame 1, detection frame 2, and detection frame 3, then three sub-sets of detection frames shown in table 1 below can be obtained:
TABLE 1

Sub-detection frame set No. | Detection frames in the sub-detection frame set
1                           | Detection frame 1 and detection frame 2
2                           | Detection frame 1 and detection frame 3
3                           | Detection frame 2 and detection frame 3
The merging benefit value of each sub-detection frame set is calculated in the same way as the merging benefit value B_1 of the initial detection frame set and is not described again here for brevity.
Step S3045: obtaining the maximum merging from the merging gain value corresponding to each sub-detection frame setBenefit value B 2 . Step S3046: judgment B 2 Whether or not to be greater than or equal to the profit threshold B 2th The method comprises the steps of carrying out a first treatment on the surface of the If B 2 ≥B 2th Will B 2 Combining the detection frames in the corresponding sub detection frame sets; if it is judged that B 2 <B 2th Then go to step S3047; wherein B is 1th =A·B n′-1 N' is B 2 The number of detection frames of the corresponding sub-detection frame set, i.e. B 2 The total number of detection frames in the corresponding sub-detection frame set. Step S3047: judgment B 2 Whether the corresponding sub-detection frame set contains a first detection frame to be combined or not; if the detection frame is included, the first detection frame to be combined and the second detection frame to be combined are not combined (namely, the first detection frame to be combined is stopped from being combined by any detection frame, and the first detection frame to be combined is not combined by any detection frame); if not, B is 2 After the corresponding sub-detection frame set is reset to the initial detection frame set, the process goes to step S3044, i.e. after going to step S3044, step S3044 is performed according to the reset initial detection frame set. Thus, the above steps S3041-S3047 complete the merging process of the first to-be-merged detecting frame and all the corresponding second to-be-merged detecting frames.
The merging method of steps S3041-S3047 is further illustrated below with a first to-be-combined detection frame A and its corresponding second to-be-combined detection frames B, C, and D, processed through the following steps 11-18.

Step 11: form the initial detection frame set from the first to-be-combined detection frame A and the second to-be-combined detection frames B, C, and D.

Step 12: obtain the merging benefit value B_1 of the whole initial detection frame set; here B_1 < B_1th with B_1th = 0.6·0.83³.

Step 13: obtain sub-detection frame sets 1-4 of the initial detection frame set, where set 1 contains detection frames B, C, D; set 2 contains A, C, D; set 3 contains A, B, D; and set 4 contains A, B, C.

Step 14: obtain the merging benefit value of each of sub-detection frame sets 1-4; the maximum merging benefit value B_2 is that of sub-detection frame set 2, and B_2 < B_2th with B_2th = 0.6·0.83².

Step 15: since sub-detection frame set 2 contains the first to-be-combined detection frame A, reset sub-detection frame set 2 (containing detection frames A, C, D) as the initial detection frame set.

Step 16: obtain sub-detection frame sets 21, 22, and 23 of the reset initial detection frame set (A, C, D), where set 21 contains detection frames C and D, set 22 contains A and D, and set 23 contains A and C.

Step 17: obtain the merging benefit value of each of sub-detection frame sets 21-23; the maximum merging benefit value B_3 is that of sub-detection frame set 22, and B_3 ≥ B_3th with B_3th = 0.6·0.83.

Step 18: since B_3 ≥ B_3th, merge the detection frames A and D within sub-detection frame set 22.

Thus, steps 11-18 complete the merging of the first to-be-combined detection frame A with its corresponding second to-be-combined detection frames.
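A compact sketch of the search in steps S3041-S3047, reusing `merging_benefit` and `union_rect` from the earlier sketch; the coefficients A = 0.6 and B = 0.83 are the embodiment's example values:

```python
A_COEF, B_COEF = 0.6, 0.83

def threshold(n):
    """Benefit threshold A * B^(n-1) for a set of n detection frames."""
    return A_COEF * B_COEF ** (n - 1)

def try_merge(first, candidates):
    """Return the merged frame (x, y, w, h) or None, per steps S3041-S3047."""
    current = [first] + list(candidates)           # initial detection frame set
    # Step S3043: merge the whole set if its benefit clears the threshold.
    if merging_benefit(current) >= threshold(len(current)):
        return union_rect(current)
    while len(current) > 2:
        # Steps S3044-S3045: best subset obtained by deleting exactly one frame.
        subsets = [current[:k] + current[k + 1:] for k in range(len(current))]
        best = max(subsets, key=merging_benefit)
        # Step S3046: merge the best subset if it clears its threshold.
        if merging_benefit(best) >= threshold(len(best)):
            return union_rect(best)
        # Step S3047: recurse only while `first` is still in play.
        if first not in best:
            return None
        current = best
    return None
```

The `len(current) > 2` guard is an added assumption to handle the degenerate two-frame case, which the text does not discuss.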
2. Filtering of target detection frames
In the embodiment of the invention, the target detection frames can be filter-judged along several dimensions such as size, position, and brightness; each detection frame filtering implementation is described below.
1. Detection frame filtering implementation mode based on detection frame state information
Specifically, in this embodiment, detection frame filtering may proceed as follows. First, calculate a state change value for each target detection frame in the current detection image, based on its detection frame state information in the current detection image and in one or more preceding detection images. Then compare the state change value with a preset change threshold: if the state change value is greater than or equal to the preset change threshold, delete the corresponding target detection frame; otherwise keep it. The detection frame state information includes, but is not limited to, the brightness, size, and position of the target detection frame; the state change values include, but are not limited to, a brightness change value, a size change value, and a movement speed; and the preset change thresholds include, but are not limited to, a brightness change threshold, a size change threshold, and a movement speed threshold. The movement speed is obtained by computing the displacement change from the detection frame positions and deriving the speed of the target detection frame from it. One example: if the brightness change value of a target detection frame in the current detection image is at or above the brightness change threshold, a light mutation has probably occurred during acquisition, changing the frame's brightness strongly, so the frame must be filtered out. Another example: if the movement speed of a target detection frame is at or above the movement speed threshold, the moving object it represents has moved abnormally, so the frame must be filtered out (deleted).
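A sketch of this first filter, under the assumption that frames have already been associated across images (the embodiment's track association appears in step S102) and that per-frame brightness is the mean gray value inside the box; all three thresholds are assumed values:

```python
import math

BRIGHT_CHANGE_TH = 40.0   # assumed, gray levels
SIZE_CHANGE_TH = 0.5      # assumed, relative area change
SPEED_TH = 50.0           # assumed, pixels per frame

def state_filter(prev, curr):
    """Keep `curr` only if its state change w.r.t. `prev` stays below thresholds.

    `prev` and `curr` are dicts: {"box": (x, y, w, h), "brightness": float}.
    """
    (px, py, pw, ph), (cx, cy, cw, ch) = prev["box"], curr["box"]
    if abs(curr["brightness"] - prev["brightness"]) >= BRIGHT_CHANGE_TH:
        return False                       # probable light mutation
    if abs(cw * ch - pw * ph) / float(pw * ph) >= SIZE_CHANGE_TH:
        return False                       # size fluctuates too strongly
    speed = math.hypot((cx + cw / 2.0) - (px + pw / 2.0),
                       (cy + ch / 2.0) - (py + ph / 2.0))
    return speed < SPEED_TH                # abnormal movement otherwise
```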
2. Detection frame filtering implementation mode based on detection frame area and image area
Specifically, in this embodiment, the sum of the areas of all target detection frames in the current detection image is obtained, and the ratio of that sum to the image area of the current detection image is calculated. If the ratio is greater than or equal to a preset ratio threshold, a light mutation has probably occurred during acquisition, changing the brightness of target detection frames across the whole image, so all target detection frames in the current detection image are filtered out (deleted).
3. Detection frame filtering implementation mode based on aspect ratio of detection frame
Specifically, in this embodiment, the aspect ratio of each target detection frame in the current detection image is obtained and compared with the aspect ratio of the preset moving object. If they are inconsistent, the moving object represented by the frame is not the preset moving object, so the frame is filtered out (deleted); if they are consistent, the frame represents the preset moving object and is kept.
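For illustration, the second and third filters might be sketched together as follows; the ratio threshold and the accepted aspect-ratio interval are assumptions (a standing person is roughly taller than wide):

```python
AREA_RATIO_TH = 0.5                # assumed: half the image covered => light mutation
ASPECT_MIN, ASPECT_MAX = 1.5, 5.0  # assumed height/width interval for a pedestrian

def area_ratio_filter(boxes, image_w, image_h):
    """Second filter: drop every box if their summed area dominates the image."""
    total = sum(w * h for x, y, w, h in boxes)
    return [] if total / float(image_w * image_h) >= AREA_RATIO_TH else boxes

def aspect_filter(boxes):
    """Third filter: keep boxes whose aspect ratio matches the preset target."""
    return [(x, y, w, h) for x, y, w, h in boxes
            if w > 0 and ASPECT_MIN <= h / float(w) <= ASPECT_MAX]
```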
With the above description, the specific implementation of step S101 of the embodiment of the present invention has been fully described. With continued reference to fig. 1, after the target detection frames of each frame of detection image have been obtained in step S101, the embodiment of the present invention further includes the following steps S102 to S105.
Step S102: and aiming at each frame of detection image, respectively acquiring a first to-be-processed detection frame with the same track association relation with the target detection frame of each frame of detection image and a second to-be-processed detection frame without the track association relation from the target detection frame of the previous frame or frames of detection image of each frame of detection image.
In this embodiment, for each frame of detection image, the following steps S1021-S1022 may be performed to obtain the first to-be-processed detection frames sharing a track association relation with the target detection frames of that image and the second to-be-processed detection frames without such a relation. Step S1021: set the target detection frames of one or more detection images preceding the current detection image as history detection frames, and for each history detection frame compute the assignment gain obtained when it and a target detection frame of the current detection image are assumed to belong to the same motion trajectory. The assignment gain represents the credibility that the history detection frame and the target detection frame lie on the same motion trajectory, its value being positively correlated with that credibility. In one embodiment, the assignment gain of each history detection frame may be calculated as shown in formula (2):
gain_final_(i,j) = α_0·gain_org_(i,j) + α_1·S_(i,j) + α_2·(1 - Δ_1(i,j)) + α_3·(1 - Δ_2(i,j)) + α_4·(1 - Δ_3(i,j))    (2)

The parameters in formula (2) have the following meanings. gain_final_(i,j) is the assignment gain obtained when the i-th history detection frame box_i and the j-th target detection frame box_j in the current detection image are assumed to belong to the same motion trajectory. gain_org_(i,j) is the intersection-over-union of box_i and box_j: box_i ∩ box_j denotes their overlapping part and area(box_i ∩ box_j) its area, box_i ∪ box_j denotes the new detection frame formed by combining them and area(box_i ∪ box_j) its area, and gain_org_(i,j) = area(box_i ∩ box_j) / area(box_i ∪ box_j). S_(i,j) is the direction cosine between the direction vector of the i-th history detection frame and that of the j-th target detection frame. Δ_1(i,j) is the degree of change of the detection frame area, Δ_1(i,j) = Δ_area_(i,j) / area_j, where Δ_area_(i,j) is the area difference between the i-th history detection frame and the j-th target detection frame and area_j is the area of the j-th target detection frame. Δ_2(i,j) is the degree of change of the detection frame brightness, Δ_2(i,j) = Δ_bright_(i,j) / bright_j, where Δ_bright_(i,j) is the brightness difference between the two frames and bright_j is the brightness of the j-th target detection frame. Δ_3(i,j) is the degree of change of the detection frame hue, Δ_3(i,j) = Δ_hue_(i,j) / hue_j, where Δ_hue_(i,j) is the hue difference between the two frames and hue_j is the hue value of the j-th target detection frame. α_0 to α_4 are preset weighting coefficients.
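A sketch of formula (2); the weights α_0-α_4 are assumptions (the embodiment does not give their values), inputs carry precomputed brightness, hue, and direction vectors, and small epsilon guards are added against zero denominators:

```python
import math

ALPHAS = (0.4, 0.15, 0.15, 0.15, 0.15)  # assumed weights alpha_0..alpha_4
EPS = 1e-6

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes: gain_org in formula (2)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / float(aw * ah + bw * bh - inter)

def assignment_gain(hist, targ):
    """Formula (2): hist/targ are dicts with box, brightness, hue, direction."""
    a0, a1, a2, a3, a4 = ALPHAS
    hx, hy = hist["direction"]
    tx, ty = targ["direction"]
    s = ((hx * tx + hy * ty) /
         max(math.hypot(hx, hy) * math.hypot(tx, ty), EPS))  # direction cosine S
    area_t = targ["box"][2] * targ["box"][3]
    d1 = abs(hist["box"][2] * hist["box"][3] - area_t) / max(area_t, EPS)
    d2 = abs(hist["brightness"] - targ["brightness"]) / max(targ["brightness"], EPS)
    d3 = abs(hist["hue"] - targ["hue"]) / max(targ["hue"], EPS)
    return (a0 * iou(hist["box"], targ["box"]) + a1 * s +
            a2 * (1 - d1) + a3 * (1 - d2) + a4 * (1 - d3))
```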
Step S1022: and setting a history detection frame corresponding to the maximum assignment gain as a first detection frame to be processed, and setting other history detection frames as a second detection frame to be processed. Step S103: generating a motion track of one or more moving targets according to the image arrangement sequence of each frame of detection image and according to the target detection frame of each frame of detection image and the first to-be-processed detection frame with the same track association relation with each target detection frame, namely connecting the to-be-processed detection frames belonging to the same track association relation in series according to the image arrangement sequence of each frame of detection image to form the motion track. One example is: the detected images include 1 st, 2 nd, 3 rd, 4 th and 5 th frame detected images, and the target detection frames in each frame detected image are shown in the following table 2:
TABLE 2

Detection image No. | Target detection frames in the detection image
Frame 1             | Detection frame 11 and detection frame 12
Frame 2             | Detection frame 21 and detection frame 22
Frame 3             | Detection frame 31 and detection frame 32
Frame 4             | Detection frame 41 and detection frame 42
Frame 5             | Detection frame 51 and detection frame 52
Through step S102, the first to-be-processed detection frame corresponding to each target detection frame in each detection image can be determined as shown in table 3 below:
TABLE 3

Target detection frame | Corresponding first to-be-processed detection frame
Detection frame 11     | None
Detection frame 12     | None
Detection frame 21     | Detection frame 11 in the 1st frame detection image
Detection frame 22     | Detection frame 12 in the 1st frame detection image
Detection frame 31     | Detection frame 21 in the 2nd frame detection image
Detection frame 32     | Detection frame 22 in the 2nd frame detection image
Detection frame 41     | Detection frame 31 in the 3rd frame detection image
Detection frame 42     | Detection frame 32 in the 3rd frame detection image
Detection frame 51     | Detection frame 41 in the 4th frame detection image
Detection frame 52     | Detection frame 42 in the 4th frame detection image
From each target detection frame and its corresponding first to-be-processed detection frame shown in table 3, and following the image arrangement order of the detection images, the motion trajectories of two moving objects can be generated: one formed by connecting detection frames 11, 21, 31, 41, and 51 in series, the other by connecting detection frames 12, 22, 32, 42, and 52 in series.
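A sketch of the chaining in steps S1021-S1022 and S103, greedily linking each new detection to the history frame with the highest assignment gain (reusing `assignment_gain` above); the minimum-gain cutoff is an assumption so that unmatched detections start new trajectories:

```python
MIN_GAIN = 0.3  # assumed cutoff below which no association is made

def update_tracks(tracks, detections):
    """Append each detection to the best-matching track or start a new one.

    `tracks` is a list of lists of detection dicts; the last element of a
    track is its history frame for the current association round.
    """
    for det in detections:
        scored = [(assignment_gain(t[-1], det), t) for t in tracks]
        gain, best = max(scored, default=(0.0, None), key=lambda p: p[0])
        if best is not None and gain >= MIN_GAIN:
            best.append(det)        # first to-be-processed frame found
        else:
            tracks.append([det])    # no association: a new trajectory begins
    return tracks
```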
Step S104: and carrying out statistical analysis on the detection frame position of the second detection frame to be processed so as to obtain a motion area of the moving target. In the embodiment of the invention, the detection frame position of the second detection frame to be processed can be clustered, and the motion area of the motion target is obtained according to the result of the clustering; the probability distribution of the detection frame position of the second detection frame to be processed can also be obtained, and the motion area of the moving object is determined according to the probability distribution. The following describes the above-described embodiments of determining the motion area of the moving object by using the clustering process and the probability distribution, respectively.
1. Embodiments for determining a motion region of a moving object using clustering
Specifically, in this embodiment, the motion area of the moving object can be determined through the following steps 21-24. Step 21: acquire the second to-be-processed detection frames corresponding to each frame in a sequence of consecutive detection images. Step 22: cluster the detection frame positions of the second to-be-processed detection frames to obtain one or more clusters. Any conventional data clustering algorithm may be used here, including but not limited to the k-means clustering algorithm; for brevity, its specific working principle is not repeated. Step 23: obtain the density of second to-be-processed detection frames within each cluster. Step 24: take each cluster whose density is greater than or equal to a preset density threshold and set its corresponding region (the region determined by the detection frame positions of the second to-be-processed detection frames in the cluster) as a motion area of the moving object. The specific value of the preset density threshold may be set flexibly by a person skilled in the art according to actual needs; for example, it may be an empirical value obtained from clustering tests on target detection frames in image samples.
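A sketch of steps 21-24 with scikit-learn's k-means; the cluster count, the density measure (frames per unit of cluster bounding-box area), and the density threshold are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

N_CLUSTERS = 3     # assumed number of clusters
DENSITY_TH = 1e-3  # assumed frames-per-pixel-area threshold

def motion_regions(centers):
    """Steps 21-24: `centers` is an (N, 2) array of detection frame positions."""
    labels = KMeans(n_clusters=N_CLUSTERS, n_init=10).fit_predict(centers)
    regions = []
    for k in range(N_CLUSTERS):
        pts = centers[labels == k]
        if len(pts) == 0:
            continue
        (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
        area = max((x1 - x0) * (y1 - y0), 1.0)
        if len(pts) / area >= DENSITY_TH:        # dense enough cluster
            regions.append((x0, y0, x1, y1))     # region of the cluster
    return regions
```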
2. Embodiments for determining a motion region of a moving object using probability distribution
Specifically, in this embodiment, the motion area of the moving object can be determined through the following steps 31-34. Step 31: acquire the second to-be-processed detection frames corresponding to each frame of detection image. Step 32: taking the length and width of each second to-be-processed detection frame as a two-dimensional variable, obtain the two-dimensional Gaussian distribution function corresponding to each second to-be-processed detection frame; the Gaussian distribution theory used here is standard and, for brevity, not repeated. Step 33: use the two-dimensional Gaussian distribution function to obtain the probability value of each coordinate position within each second to-be-processed detection frame, and construct a global probability map from these values, the probability value stored at each pixel position of the map representing the probability that the pixel belongs to the moving object. The global probability map has the same size as the detection image. Step 34: take the pixel positions of the global probability map whose probability value is greater than or equal to a preset probability threshold, and set the region they form as the motion area of the moving object.

Further, in step 33 of this embodiment, the global probability map may be constructed through the following steps 331-335. Step 331: after moving object detection has been performed on the current detection image (step S101), subtract a preset attenuation value from the probability value stored at each pixel position of the global probability map to be updated, obtaining the initially updated global probability map. The specific attenuation value can be set flexibly by a person skilled in the art according to actual requirements; in this embodiment it may be set to 1. Step 332: judge whether a second to-be-processed detection frame without a track association relation to the target detection frames of the current detection image has been acquired; if so, go to step 333; if not, skip the current detection image, reset the next frame as the current detection image, and go back to step 331, which is then executed on the reset current detection image. Step 333: use the two-dimensional Gaussian distribution function corresponding to the second to-be-processed detection frame to obtain the probability value of each coordinate position within it.
Step 334: according to the corresponding relation between each coordinate position in the second to-be-processed detection frame and each pixel position in the global probability map to be updated, the probability value of each coordinate position is respectively accumulated to the probability value stored in each corresponding pixel position so as to update the probability value stored in each pixel position, and the global probability map after being updated again is obtained. Because the size of the global probability map is the same as that of the detection image, the corresponding relation between each coordinate position in the second detection frame to be processed and each pixel position in the global probability map to be updated can be obtained according to the corresponding relation between each coordinate position in the second detection frame to be processed and each pixel position in the detection image and the corresponding relation between each pixel position in the detection image and each pixel position in the global probability map to be updated. Step 335: resetting the global probability map after being updated again as the global probability map to be updated, resetting the next frame of detection image as the current detection image, and then turning to the step 331, namely executing the step 331 according to the reset global probability map to be updated and the reset current detection image after turning to the step 331. Through the steps 331 to 335, after each frame of detected image is detected, the probability value of the global probability map to be updated is updated according to the detected result, so that the global probability map maintains the most accurate state in real time, thereby being beneficial to obtaining the more accurate motion region of the moving object according to the global probability map.
Steps 331-335 are illustrated below with a global probability map and detection image both of size 10×10, a current global probability map to be updated whose stored probability values are all zero, and a second to-be-processed detection frame of size 5×3. First, referring to fig. 6, fig. 6 shows the position coordinates of the second to-be-processed detection frame within the detection image; for simplicity, these coordinates are given in a coordinate system whose origin is the center point of the second to-be-processed detection frame. Then, the two-dimensional Gaussian distribution function corresponding to the second to-be-processed detection frame is used to obtain the probability value of each coordinate position shown in fig. 6, and these values are normalized to yield the final probability values shown in fig. 7. Finally, since the global probability map and the detection image are both 10×10, the probability value of each coordinate position in fig. 7 can be directly added to the probability value stored at the corresponding pixel position of the global probability map. The two-dimensional Gaussian distribution function of the second to-be-processed detection frame is centered on the frame, and V_(i,j) denotes the probability value it assigns to the coordinate position (i, j).
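A sketch of the map update in steps 331-335, assuming a centered Gaussian whose spread follows the frame's width and height (the embodiment's exact analytical form is given via its figure and is not reproduced here):

```python
import numpy as np

DECAY = 1.0  # preset attenuation value from the embodiment

def box_gaussian(w, h):
    """Normalized 2-D Gaussian over a w x h box, centered on the box."""
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    sx, sy = max(w / 4.0, 1e-6), max(h / 4.0, 1e-6)  # assumed spread
    g = np.exp(-(((xs - cx) / sx) ** 2 + ((ys - cy) / sy) ** 2) / 2.0)
    return g / g.sum()                               # normalized probabilities

def update_probability_map(prob_map, boxes):
    """Steps 331 and 333-334: decay the map, then accumulate each box."""
    prob_map = np.maximum(prob_map - DECAY, 0.0)     # step 331, floored at zero
    for x, y, w, h in boxes:                         # second to-be-processed frames
        prob_map[y:y + h, x:x + w] += box_gaussian(w, h)
    return prob_map
```

The floor at zero after the decay is an added assumption; the text only says the attenuation value is subtracted.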
Step S105: and alarming according to the motion trail and/or the motion area of the moving object. If the motion trail and/or the motion area of the moving object are detected, outputting alarm information, wherein the alarm information comprises but is not limited to: voice information, image information, text information, and the like. Further, before the alarm information is output, the detected motion trail can be compared and analyzed according to the detected motion trail and the preset motion trail of the moving object, so as to judge whether the detected motion trail is the motion trail of the moving object. Specifically, in one embodiment, the alert may be made by the following steps 41-43: step 41: analyzing whether the motion trail is the motion trail of the preset motion target or not according to the change trend of the motion trail and the comparison result of the change trend of the preset motion target; if yes, go to step 42; if not, go to step 43. One example is: if the preset moving object is an automobile, the change trend of the preset moving object is that the size of the object detection frame continuously increases at a certain speed (the automobile runs towards the image acquisition device), and the size of the object detection frame in the detected change trend of the moving track is negligent or negligent, then it can be determined that the moving track does not belong to the moving track of the preset moving object. Step 42: and analyzing whether the corresponding moving target has abnormal actions according to the comparison result, and alarming according to the analysis result. If no action abnormality occurs, the motion trail is directly output to alarm, and if the action abnormality occurs, the motion trail and the analyzed abnormal behavior are simultaneously output to alarm. In addition, if the movement region of the moving object is determined, an alarm is also given according to the movement region, for example, the position information of the movement region is output. One example is: when the motion trajectory is that of a pedestrian, if it is detected that the target detection frame in the motion trajectory suddenly changes from a vertical bar to a horizontal bar, it can be determined that the pedestrian falls, that is, that a mobility abnormality has occurred. The motion trail of the pedestrian and prompt information of falling of the pedestrian can be simultaneously output when the alarm is given. Step 43: and alarming according to the currently determined moving area of the moving object. When the motion trail is judged not to belong to the motion trail of the moving object, the warning is not required to be carried out on the motion trail, so that the warning can be carried out only according to the determined motion area of the moving object. It should be noted that, although the foregoing embodiments describe the steps in a specific order, it will be understood by those skilled in the art that, in order to achieve the effects of the present invention, the steps are not necessarily performed in such an order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of the present invention.
Further, the invention also provides a moving object detection alarm device. Referring to fig. 8, fig. 8 is a main block diagram of a moving object detection alarm device according to an embodiment of the present invention. As shown in fig. 8, the device mainly comprises a target detection frame acquisition module 11, a to-be-processed detection frame acquisition module 12, a motion trajectory generation module 13, a motion region acquisition module 14, and an alarm module 15; in some embodiments, one or more of these modules may be combined into a single module. In some embodiments, the target detection frame acquisition module 11 may be configured to perform moving object detection on each frame of detection image to acquire its target detection frames; the to-be-processed detection frame acquisition module 12 may be configured to acquire, for each frame of detection image, the first to-be-processed detection frames sharing a track association relation with its target detection frames and the second to-be-processed detection frames without such a relation, from the target detection frames of the one or more preceding frames; the motion trajectory generation module 13 may be configured to generate the motion trajectory of one or more moving objects according to the image arrangement order of the detection images and the first to-be-processed detection frames sharing a track association relation with each target detection frame; the motion region acquisition module 14 may be configured to statistically analyze the detection frame positions of the second to-be-processed detection frames to acquire the motion area of the moving object; and the alarm module 15 may be configured to alarm according to the motion trajectory and/or motion area of the moving object. In one embodiment, the specific implemented functions may be described with reference to steps S101-S105.
In one embodiment, the target detection frame acquisition module 11 may be further configured to: acquire the current brightness of the current detection image and the historical brightness of the consecutive multiple frames of detection images preceding it, and judge whether the brightness variation between the current brightness and the historical brightness is greater than or equal to a preset variation threshold; if yes, perform no moving object detection on the current detection image; if not, acquire the foreground pixels of the current detection image with a foreground detection algorithm, region-connect the foreground pixels to form one or more pixel groups, obtain the circumscribed rectangular frame of each pixel group and its size, and set the circumscribed rectangular frames consistent with a preset target size as the target detection frames of the current detection image; the preset target size is determined according to the actual size of the moving object, its actual activity detection range, the size of the detection image, and the sizes of the target detection frames obtained from the consecutive frames preceding the current detection image. In one embodiment, the specific implemented functions may be described with reference to steps S201-S207.
In one embodiment, the moving object detection alarm device shown in fig. 8 may further include a target detection frame merging module and/or a target detection frame filtering module. In this embodiment, the target detection frame merging module may include a first to-be-combined detection frame acquisition sub-module, a second to-be-combined detection frame acquisition sub-module, a merging benefit value calculation sub-module, and a merging processing sub-module. Specifically, the first to-be-combined detection frame acquisition sub-module may be configured to select, from the target detection frames, those whose area is less than or equal to a preset area threshold as first to-be-combined detection frames; the second to-be-combined detection frame acquisition sub-module may be configured to screen according to the distance between the first to-be-combined detection frame and the other target detection frames of the current detection image, so as to obtain the second to-be-combined detection frames; the merging benefit value calculation sub-module may be configured to calculate the merging benefit value of the first and second to-be-combined detection frames; and the merging processing sub-module may be configured to selectively merge the first and second to-be-combined detection frames according to the comparison of the merging benefit value with a preset benefit threshold, the merging benefit value being the ratio of the sum of the areas of the first and second to-be-combined detection frames to the area of the new detection frame formed by merging them. In one embodiment, the specific implemented functions may be described with reference to steps S301-S304.
In this embodiment, the target detection frame filtering module may include a first filtering sub-module and/or a second filtering sub-module and/or a third filtering sub-module. Specifically, the first filtering sub-module may be configured to calculate the state change value of each target detection frame in the current detection image from its detection frame state information in the current detection image and in one or more preceding detection images, and to delete the corresponding target detection frame if the state change value is greater than or equal to a preset change threshold; the detection frame state information comprises the brightness, size, and position of the target detection frame, the state change values comprise a brightness change value, a size change value, and a movement speed, and the preset change thresholds comprise a brightness change threshold, a size change threshold, and a movement speed threshold. The second filtering sub-module may be configured to obtain the sum of the areas of all target detection frames in the current detection image, calculate the ratio of that sum to the image area of the current detection image, and delete all target detection frames in the current detection image if the ratio is greater than or equal to a preset ratio threshold. The third filtering sub-module may be configured to obtain the aspect ratio of each target detection frame in the current detection image, judge whether it is consistent with the aspect ratio of the preset moving object, and delete the corresponding target detection frame if not. In one embodiment, the specific implemented functions may be described with reference to step S101.
In one embodiment, the second to-be-combined detection frame acquisition sub-module may be further configured to perform the following steps (a code sketch follows this paragraph). Step S11: setting the other target detection frames as detection frames to be screened. Step S12: obtaining K nearest neighbor detection frames of the first to-be-combined detection frame from the detection frames to be screened by adopting a nearest neighbor algorithm, where K is greater than or equal to 1. Step S13: judging whether the distance between each nearest neighbor detection frame and the first to-be-combined detection frame is greater than or equal to a preset distance threshold, and acquiring initial second to-be-combined detection frames and updated detection frames to be screened according to the judgment result; if the distance corresponding to the current nearest neighbor detection frame is greater than or equal to the preset distance threshold, setting the current nearest neighbor detection frame as an initial second to-be-combined detection frame and deleting it from the detection frames to be screened, so as to update the detection frames to be screened. Step S14: judging whether the number of initial second to-be-combined detection frames is K; if so, setting the initial second to-be-combined detection frames as the final second to-be-combined detection frames; if not, going to step S15. Step S15: judging whether the number of the updated detection frames to be screened is zero; if so, selecting K target detection frames from the other target detection frames in descending order of their distance to the first to-be-combined detection frame and setting them as the final second to-be-combined detection frames; if not, going to step S12 and executing step S12 on the updated detection frames to be screened. In one embodiment, the specific implementation may be understood with reference to step S302.
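A minimal sketch of steps S11 to S15 for one first to-be-combined box. The Euclidean distance between box centres, and the removal of every examined neighbour from the pool (which the patent leaves implicit but which guarantees termination), are assumptions of the sketch; the greater-or-equal threshold test follows the text as written.

```python
import numpy as np

def center(box):
    """Geometric centre of an (x1, y1, x2, y2) box."""
    return np.array([(box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0])

def screen_second_boxes(first_box, others, k, dist_threshold):
    pool = list(others)                                   # S11
    selected = []
    while True:
        d = np.array([np.linalg.norm(center(first_box) - center(b))
                      for b in pool])
        nearest = np.argsort(d)[:k]                       # S12
        for i in sorted(nearest, reverse=True):           # S13
            if d[i] >= dist_threshold:
                selected.append(pool[i])
            del pool[i]          # assumed: drop every examined neighbour
        if len(selected) >= k:                            # S14
            return selected[:k]
        if not pool:                                      # S15: K farthest fallback
            d_all = [np.linalg.norm(center(first_box) - center(b))
                     for b in others]
            return [others[i] for i in np.argsort(d_all)[::-1][:k]]
```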
In one embodiment, the merge processing sub-module may be further configured to perform the following steps (a code sketch follows). Step S21: forming an initial detection frame set from the first to-be-combined detection frame and all of its corresponding second to-be-combined detection frames. Step S22: obtaining the combined benefit value B1 after combining the detection frames in the initial detection frame set; if B1 ≥ B1th, combining the detection frames in the initial detection frame set; if B1 < B1th, going to step S23; where B1th is a benefit threshold, B1th = A·B^(n-1), A and B are preset threshold coefficients, and n is the number of detection frames in the initial detection frame set. Step S23: respectively obtaining the combined benefit value corresponding to each sub-detection frame set of the initial detection frame set after the detection frames in that sub-detection frame set are combined, and obtaining the maximum combined benefit value B2 from the combined benefit values corresponding to the sub-detection frame sets, where the sub-detection frame sets differ from one another by which single detection frame is deleted; if B2 ≥ B2th, combining the detection frames in the sub-detection frame set corresponding to B2; if B2 < B2th, going to step S24; where B2th is a benefit threshold, B2th = A·B^(n′-1), and n′ is the number of detection frames in the sub-detection frame set corresponding to B2. Step S24: judging whether the sub-detection frame set corresponding to B2 contains the first to-be-combined detection frame; if so, going to step S25; if not, not combining the first to-be-combined detection frame and the second to-be-combined detection frame. Step S25: after resetting the sub-detection frame set corresponding to B2 as the initial detection frame set, going to step S23 and executing step S23 according to the reset initial detection frame set. In one embodiment, the specific implementation may be understood with reference to step S304.
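A sketch of this shrink-and-retest recursion, with thresholds of the form A·B^(n-1) computed from the preset coefficients and the current set size. Stopping once only a pair remains is an assumption the patent does not spell out.

```python
from itertools import combinations

def benefit(boxes):
    """Sum of the individual box areas over the area of the enclosing box."""
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    enclosing = (min(b[0] for b in boxes), min(b[1] for b in boxes),
                 max(b[2] for b in boxes), max(b[3] for b in boxes))
    return sum(area(b) for b in boxes) / area(enclosing)

def merge_decision(first_box, second_boxes, A, B):
    """Steps S21-S25: return the list of boxes to merge, or None."""
    current = [first_box] + list(second_boxes)            # S21
    while True:
        n = len(current)
        if benefit(current) >= A * B ** (n - 1):          # S22: B1 >= B1th
            return current
        if n <= 2:                                        # assumed stopping point
            return None
        best = max(combinations(current, n - 1), key=benefit)   # S23
        if benefit(best) >= A * B ** (n - 2):             # B2 >= B2th, n' = n - 1
            return list(best)
        if first_box not in best:                         # S24
            return None
        current = list(best)                              # S25: recurse on the sub-set
```

Each pass deletes the single box whose removal raises the benefit the most, so the recursion keeps only fragments that pack tightly inside one enclosing box.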
In one embodiment, the to-be-processed detection frame acquisition module 12 may include an assigned gain calculation sub-module and a to-be-processed detection frame acquisition sub-module. In this embodiment, the assigned gain calculation sub-module may be configured to set the target detection frames of one or more frames of detection images preceding the current detection image as history detection frames, and to calculate the assigned gain corresponding to each history detection frame when that history detection frame and each target detection frame in the current detection image are assigned to belong to the same motion track; the to-be-processed detection frame acquisition sub-module may be configured to set the history detection frame corresponding to the maximum assigned gain as the first to-be-processed detection frame, and to set the other history detection frames as second to-be-processed detection frames. The assigned gain represents the credibility that the history detection frame and the target detection frame in the current detection image belong to the same motion track, and its value is positively correlated with that credibility. In one embodiment, the specific implementation may be understood with reference to step S102.
In one embodiment, the assigned gain calculation sub-module may be further configured to calculate the assigned gain corresponding to each history detection frame according to the method shown in equation (2); a sketch of this computation is given below. In one embodiment, the specific implementation may be understood with reference to step S102.
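Equation (2), reproduced in claim 1 below, weights the intersection-over-union of the two boxes, the direction cosine of their motion vectors, and the normalized area, brightness, and hue differences. In this sketch, the dictionary keys, the ε guards against division by zero, and the absolute-value form of the differences are assumptions.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def assigned_gain(hist, targ, alphas):
    """gain_final(i,j) as in equation (2); hist and targ are dicts with
    'box', 'dir' (direction vector), 'area', 'bright' and 'hue' entries."""
    a0, a1, a2, a3, a4 = alphas
    cos_dir = float(np.dot(hist['dir'], targ['dir'])) / (
        np.linalg.norm(hist['dir']) * np.linalg.norm(targ['dir']) + 1e-9)
    d_area = abs(hist['area'] - targ['area']) / (targ['area'] + 1e-9)
    d_bright = abs(hist['bright'] - targ['bright']) / (targ['bright'] + 1e-9)
    d_hue = abs(hist['hue'] - targ['hue']) / (targ['hue'] + 1e-9)
    return (a0 * iou(hist['box'], targ['box']) + a1 * cos_dir
            + a2 * (1 - d_area) + a3 * (1 - d_bright) + a4 * (1 - d_hue))
```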
In one embodiment, the alert module 15 may be further configured to: analyze whether the motion trail belongs to the motion trail of the preset moving target according to a comparison between the change trend of the motion trail and the change trend of the preset moving target; when the motion trail belongs to the motion trail of the preset moving target, further analyze whether the preset moving target behaves abnormally according to the comparison result, and alarm according to the analysis result and/or the motion region of the moving target; and when the motion trail does not belong to the motion trail of the preset moving target, alarm according to the motion region of the moving target. In one embodiment, the specific implementation may be understood with reference to step S102.
In one embodiment, the motion region acquisition module 14 may include a first motion region acquisition sub-module and/or a second motion region acquisition sub-module. In this embodiment, the first motion region acquisition sub-module may be configured to perform the following operations: acquire the second to-be-processed detection frames corresponding to each frame of detection image in a sequence of consecutive detection images; cluster according to the detection frame position of each second to-be-processed detection frame to obtain one or more clusters; acquire the density of second to-be-processed detection frames in each cluster; and, for each cluster whose density is greater than or equal to a preset density threshold, set the region corresponding to that cluster as a motion region of the moving target (a clustering sketch is given below). In one embodiment, the specific implementation may be understood with reference to step S104. In this embodiment, the second motion region acquisition sub-module may be configured to perform the following operations: acquire the second to-be-processed detection frame corresponding to each frame of detection image; take the length and width of each second to-be-processed detection frame as two-dimensional variables, and acquire the two-dimensional Gaussian distribution function corresponding to each second to-be-processed detection frame; acquire the probability value of each coordinate position in each second to-be-processed detection frame with that two-dimensional Gaussian distribution function, and construct a panoramic probability map from these probability values, where the probability value stored at each pixel of the panoramic probability map represents the probability that the pixel belongs to a moving target; and acquire the pixel positions whose probability values are greater than or equal to a preset probability threshold in the panoramic probability map, and set the region corresponding to these pixel positions as a motion region of the moving target; the panoramic probability map has the same size as the detection image. In one embodiment, the specific implementation may be understood with reference to step S104.
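The patent does not name a particular clustering algorithm. The sketch below uses scikit-learn's DBSCAN on the box centres as one concrete choice, and defines cluster density as boxes per unit area of the cluster's bounding box; both choices are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def motion_regions_by_clustering(boxes, eps, min_samples, density_threshold):
    """Cluster second to-be-processed box centres and keep dense clusters."""
    centers = np.array([[(x1 + x2) / 2.0, (y1 + y2) / 2.0]
                        for x1, y1, x2, y2 in boxes])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(centers).labels_
    regions = []
    for lab in set(labels) - {-1}:            # label -1 marks noise points
        pts = centers[labels == lab]
        x_min, y_min = pts.min(axis=0)
        x_max, y_max = pts.max(axis=0)
        extent = max((x_max - x_min) * (y_max - y_min), 1.0)
        if len(pts) / extent >= density_threshold:
            regions.append((x_min, y_min, x_max, y_max))
    return regions
```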
In one embodiment, the second motion region acquisition sub-module may be further configured to construct the panoramic probability map by performing the following operations (a sketch of one update round follows this paragraph). Step S31: after moving target detection is performed on the current detection image, subtract a preset attenuation value from the probability value stored at each pixel position of the panoramic probability map to be updated, so as to obtain the panoramic probability map after a first update. Step S32: judge whether a second to-be-processed detection frame having no track association relationship with the target detection frames of the current detection image has been acquired; if so, go to step S33; if not, reset the next frame of detection image as the current detection image, then go to step S31 and execute step S31 on the reset current detection image. Step S33: acquire the probability value of each coordinate position in the second to-be-processed detection frame by using the two-dimensional Gaussian distribution function corresponding to that detection frame. Step S34: according to the correspondence between each coordinate position in the second to-be-processed detection frame and each pixel position in the panoramic probability map to be updated, accumulate the probability value of each coordinate position onto the probability value stored at the corresponding pixel position, so as to update the stored probability values and obtain the panoramic probability map after a further update. Step S35: reset the further-updated panoramic probability map as the panoramic probability map to be updated, reset the next frame of detection image as the current detection image, then go to step S31 and execute step S31 on the reset panoramic probability map to be updated and the reset current detection image. In one embodiment, the specific implementation may be understood with reference to step S104.
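One round of steps S31 to S34, sketched below. The Gaussian's standard deviations (a quarter of the box width and height) and the use of integer pixel coordinates are assumptions of the sketch; the decay-then-accumulate structure follows the text.

```python
import numpy as np

def gaussian_patch(w, h):
    """2-D Gaussian over a w x h box, peaked at the box centre; the
    standard deviations w/4 and h/4 are assumed, not fixed by the patent."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-(((xs - w / 2.0) ** 2) / (2 * (w / 4.0) ** 2)
                    + ((ys - h / 2.0) ** 2) / (2 * (h / 4.0) ** 2)))

def update_probability_map(prob_map, unassociated_boxes, decay=0.01):
    """S31: decay the whole map; S32-S34: accumulate the Gaussian of every
    second to-be-processed box with no track association in this frame.
    Boxes are (x1, y1, x2, y2) with integer pixel coordinates."""
    prob_map = np.clip(prob_map - decay, 0.0, None)
    for x1, y1, x2, y2 in unassociated_boxes:
        prob_map[y1:y2, x1:x2] += gaussian_patch(x2 - x1, y2 - y1)
    return prob_map

def motion_region_pixels(prob_map, prob_threshold):
    """Pixel positions whose accumulated probability passes the threshold."""
    return np.argwhere(prob_map >= prob_threshold)
```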
The moving object detection alarm device described above is used to execute the embodiment of the moving object detection alarm method shown in fig. 8; the two are similar in technical principle, in the technical problems solved, and in the technical effects produced. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process of the moving object detection alarm device and the related description may refer to the description of the method embodiment, and are not repeated here.
It will be appreciated by those skilled in the art that the present invention may implement all or part of the methods of the above embodiments by means of a computer program instructing relevant hardware. The computer program may be stored in a computer readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include any entity or device capable of carrying the computer program code: a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
Further, the invention also provides a computer readable storage medium. In one embodiment of the computer readable storage medium according to the present invention, the computer readable storage medium may be configured to store a program for executing the moving object detection alarm method of the above method embodiment, and the program may be loaded and executed by a processor to implement the moving object detection alarm method described above. For convenience of explanation, only the portions relevant to this embodiment of the present invention are shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiments of the present invention. The computer readable storage medium may be a storage device formed of various electronic devices; optionally, in the embodiments of the present invention, the computer readable storage medium is a non-transitory computer readable storage medium.
Further, the invention also provides a control device. In one control device embodiment according to the present invention, the control device includes a processor and a storage device; the storage device may be configured to store a program for executing the moving object detection alarm method of the above method embodiment, and the processor may be configured to execute the program in the storage device, including but not limited to the program for executing the moving object detection alarm method of the above method embodiment. For convenience of explanation, only the portions relevant to this embodiment of the present invention are shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiments of the present invention. The control device may be a control device formed of various electronic devices.
Further, it should be understood that, since the modules are merely set up to illustrate the functional units of the system of the present invention, the physical devices corresponding to these modules may be the processor itself, or part of the software, part of the hardware, or part of a combination of software and hardware in the processor. Accordingly, the number of individual modules in the figures is merely illustrative.
Those skilled in the art will appreciate that the various modules in the system may be adaptively split or combined. Such splitting or combining of specific modules does not cause the technical solution to deviate from the principle of the present invention, and therefore, the technical solution after splitting or combining falls within the protection scope of the present invention.
Thus far, the technical solution of the present invention has been described in connection with one embodiment shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will fall within the scope of the present invention.

Claims (16)

1. A moving object detection alert method, the method comprising:
respectively detecting a moving target of each frame of detection image to obtain a target detection frame of each frame of detection image;
for each frame of detection image, respectively acquiring, from the target detection frames of the previous frame or frames of detection images of that frame of detection image, a first to-be-processed detection frame having a track association relationship with the target detection frame of that frame of detection image and a second to-be-processed detection frame having no track association relationship;
generating motion tracks of one or more moving targets according to the image arrangement sequence of each frame of detection image, the target detection frame of each frame of detection image, and the first to-be-processed detection frame having a track association relationship with each target detection frame;
carrying out statistical analysis on the detection frame position of the second detection frame to be processed so as to obtain a motion area of the moving target;
alarming according to the motion trail and/or the motion area of the moving object;
the method for obtaining the first to-be-processed detection frame and the second to-be-processed detection frame comprises the following steps:
setting a target detection frame of one or more frames of detection images before a current detection image as a history detection frame, and calculating an assignment gain corresponding to each history detection frame when each history detection frame and each target detection frame in the current detection image are assigned to belong to the same motion track;
Setting a history detection frame corresponding to the maximum assignment gain as a first detection frame to be processed, and setting other history detection frames as a second detection frame to be processed;
the assignment gain represents the credibility of the same motion track of the historical detection frame and the target detection frame in the current detection image, and the numerical value of the assignment gain and the credibility form a positive correlation;
the assignment gain corresponding to each history detection frame is calculated according to the method shown in the following formula:
gain_final_(i,j) = α0·gain_org_(i,j) + α1·S_(i,j) + α2·(1 − Δ1_(i,j)) + α3·(1 − Δ2_(i,j)) + α4·(1 − Δ3_(i,j))
wherein gain_final_(i,j) indicates the assignment gain corresponding to the i-th history detection frame when the i-th history detection frame and the j-th target detection frame in the current detection image are assigned to belong to the same motion track; gain_org_(i,j) represents the intersection-over-union of the i-th history detection frame and the j-th target detection frame in the current detection image; S_(i,j) represents the direction cosine of the direction vector of the i-th history detection frame and the direction vector of the j-th target detection frame; Δ1_(i,j) indicates the degree of change of the detection frame area, Δ1_(i,j) = Δarea_(i,j)/area_j, where Δarea_(i,j) represents the area difference between the i-th history detection frame and the j-th target detection frame and area_j represents the area of the j-th target detection frame; Δ2_(i,j) indicates the degree of change of the detection frame brightness, Δ2_(i,j) = Δbright_(i,j)/bright_j, where Δbright_(i,j) represents the brightness difference between the i-th history detection frame and the j-th target detection frame and bright_j represents the brightness of the j-th target detection frame; and Δ3_(i,j) indicates the degree of change of the detection frame hue, Δ3_(i,j) = Δhue_(i,j)/hue_j, where Δhue_(i,j) represents the hue difference between the i-th history detection frame and the j-th target detection frame and hue_j represents the hue value of the j-th target detection frame.
2. The moving object detection alert method according to claim 1, wherein the step of acquiring an object detection frame of each frame of the detected image specifically includes:
acquiring the current brightness of a current detection image and the historical brightness of a continuous multi-frame detection image before the current detection image, and judging whether the brightness variation of the current brightness and the historical brightness is larger than or equal to a preset variation threshold;
if yes, not detecting a moving target of the current detection image;
if not, acquiring foreground pixels in the current detection image by adopting a foreground detection algorithm;
performing area communication on the foreground pixels to form one or more pixel groups;
obtaining an external rectangular frame of each pixel group and the size of each external rectangular frame;
Acquiring an external rectangular frame consistent with a preset target size according to the size, and setting the external rectangular frame as a target detection frame of the current detection image;
the preset target size is determined according to the actual size of the moving target, the actual moving detection range of the moving target, the size of the detection image and the size of a target detection frame obtained by adopting continuous multi-frame detection images before the current detection image.
3. The moving object detection alert method according to claim 2, wherein after the step of setting the circumscribed rectangular frame as the object detection frame of the current detection image, the method further comprises performing a merging process on the object detection frame by:
selecting a target detection frame with the area smaller than or equal to a preset area threshold value from the target detection frames as a first detection frame to be combined;
screening according to the distance between the first detection frame to be combined and other target detection frames in the current detection image to obtain a second detection frame to be combined;
calculating a combined benefit value of the first to-be-combined detection frame and the second to-be-combined detection frame;
selectively combining the first to-be-combined detection frame and the second to-be-combined detection frame according to a comparison result of the combined benefit value and a preset benefit threshold;
wherein the combined benefit value is the ratio of the sum of the areas of the first to-be-combined detection frame and the second to-be-combined detection frame to the area of the new detection frame formed after the first to-be-combined detection frame and the second to-be-combined detection frame are combined;
and/or,
after the step of "setting the circumscribed rectangular frame as the target detection frame of the current detection image", the method further includes performing a filtering process on the target detection frame by:
calculating a state change value of each target detection frame in the current detection image according to the detection frame state information of the target detection frame in the current detection image and the detection frame state information of the target detection frame in one or more previous detection images of the current detection image; if the state change value is greater than or equal to a preset change threshold value, deleting the corresponding target detection frame; the detection frame state information comprises the brightness, the size and the position of a target detection frame, the state change value comprises a brightness change value, a size change value and a moving speed, and the preset change threshold comprises a brightness change threshold, a size change threshold and a moving speed threshold;
And/or, obtaining the sum of the areas of all target detection frames in the current detection image, and calculating the ratio of the sum of the areas to the image area of the current detection image; if the ratio is greater than or equal to a preset ratio threshold, deleting all target detection frames in the current detection image;
and/or obtaining the length-width ratio of each target detection frame in the current detection image, and judging whether the length-width ratio is consistent with the length-width ratio of a preset moving target; if not, deleting the corresponding target detection frame.
4. The method of claim 3, wherein the step of screening according to the distance between the first to-be-combined detection frame and other target detection frames in the current detection image to obtain a second to-be-combined detection frame specifically includes:
step S11: setting the other target detection frames as detection frames to be screened;
step S12: obtaining K nearest neighbor detection frames of the first detection frames to be combined from the detection frames to be screened by adopting a nearest neighbor algorithm, wherein K is more than or equal to 1;
step S13: judging whether the distance between each nearest neighbor detection frame and the first detection frame to be combined is greater than or equal to a preset distance threshold value, and acquiring an initial second detection frame to be combined and an updated detection frame to be screened according to a judging result;
If the distance corresponding to the current nearest neighbor detection frame is greater than or equal to the preset distance threshold, setting the current nearest neighbor detection frame as an initial second detection frame to be combined and deleting the current nearest neighbor detection frame from the detection frames to be screened so as to update the detection frames to be screened;
step S14: judging whether the number of the initial second detection frames to be combined is K or not; if yes, setting the initial second detection frame to be combined as a final second detection frame to be combined; if not, go to step S15;
step S15: judging whether the number of the updated detection frames to be screened is zero or not;
if so, selecting K target detection frames from the other target detection frames in descending order of the distance between the first detection frame to be combined and each other target detection frame, and setting the K target detection frames as the final second detection frames to be combined; if not, turning to step S12 and executing step S12 according to the updated detection frames to be screened;
and/or,
the step of "selectively combining the first to-be-combined detection frame and the second to-be-combined detection frame according to the comparison result of the combined benefit value and the preset benefit threshold" specifically includes:
Step S21: forming an initial detection frame set by the first detection frames to be combined and all the corresponding second detection frames to be combined;
step S22: obtaining a combined benefit value B1 after combining the detection frames in the initial detection frame set;
if B1 ≥ B1th, combining the detection frames in the initial detection frame set; if B1 < B1th, going to step S23; wherein B1th is a benefit threshold, B1th = A·B^(n-1), A and B are preset threshold coefficients, and n is the number of detection frames in the initial detection frame set;
step S23: respectively obtaining the combined benefit value corresponding to each sub-detection frame set of the initial detection frame set after the detection frames in that sub-detection frame set are combined, and obtaining the maximum combined benefit value B2 from the combined benefit values corresponding to the sub-detection frame sets, wherein the sub-detection frame sets differ from one another by which single detection frame is deleted;
if B2 ≥ B2th, combining the detection frames in the sub-detection frame set corresponding to B2; if B2 < B2th, going to step S24; wherein B2th is a benefit threshold, B2th = A·B^(n′-1), and n′ is the number of detection frames in the sub-detection frame set corresponding to B2;
step S24: judging whether the sub-detection frame set corresponding to B2 contains the first to-be-combined detection frame; if so, going to step S25; if not, not combining the first to-be-combined detection frame and the second to-be-combined detection frame;
step S25: after resetting the sub-detection frame set corresponding to B2 as the initial detection frame set, going to step S23 and executing step S23 according to the reset initial detection frame set.
5. The moving object detection alarm method according to claim 1, wherein the specific steps of "alarm according to the motion trajectory and/or the motion area of the moving object" include:
analyzing whether the motion trail belongs to the motion trail of the preset motion target or not according to the comparison result of the motion trail and the preset motion target;
when the motion trail belongs to the motion trail of the preset moving target, continuously analyzing whether the preset moving target is abnormal in action according to the comparison result, and alarming according to the analysis result and/or the motion area of the moving target;
and if the motion trail does not belong to the preset motion trail of the moving target, alarming according to the motion area of the moving target.
6. The moving object detection alarm method according to claim 1, wherein the step of performing statistical analysis on the detection frame position of the second to-be-processed detection frame to obtain the moving area of the moving object specifically includes:
Acquiring a second detection frame to be processed corresponding to each frame of detection image in the continuous multi-frame detection images;
clustering is carried out according to the detection frame position of each second detection frame to be processed, so as to obtain one or more clustering clusters;
acquiring the density of a second detection frame to be processed in each cluster;
acquiring a cluster with density larger than or equal to a preset density threshold value, and setting a region corresponding to the cluster as a motion region of the moving object;
and/or,
the step of performing statistical analysis on the detection frame position of the second detection frame to be processed to obtain the motion area of the moving object specifically includes:
acquiring a second detection frame to be processed corresponding to each frame of detection image;
taking the length and the width of each second detection frame to be processed as two-dimensional variables, and obtaining a two-dimensional Gaussian distribution function corresponding to each second detection frame to be processed;
respectively acquiring a probability value of each coordinate position in each second to-be-processed detection frame by adopting the two-dimensional Gaussian distribution function, and constructing a panoramic probability map according to the probability values, wherein the probability value stored at each pixel point in the panoramic probability map represents the probability that the pixel point belongs to a moving target;
Acquiring pixel point positions with probability values larger than or equal to a preset probability threshold in the panoramic probability map, and setting a region corresponding to the pixel point positions as a motion region of the moving object;
wherein the panoramic probability map is the same size as the detection image.
7. The method of claim 6, wherein the step of constructing a panoramic probability map comprises:
step S31: after moving object detection is carried out on the current detection image, respectively subtracting a preset attenuation value from a probability value stored in each pixel point position in the panoramic probability map to be updated so as to obtain the panoramic probability map after primary updating;
step S32: judging whether a second to-be-processed detection frame which has no track association relation with the target detection frame of the current detection image is acquired or not; if yes, go to step S33; if not, resetting the next frame of detection image as the current detection image, and then turning to step S31, and executing step S31 according to the reset current detection image;
step S33: acquiring a probability value of each coordinate position in the second to-be-processed detection frame by adopting a two-dimensional Gaussian distribution function corresponding to the second to-be-processed detection frame;
step S34: according to the corresponding relation between each coordinate position in the second to-be-processed detection frame and each pixel position in the panoramic probability map to be updated, respectively accumulating the probability value of each coordinate position onto the probability value stored at each corresponding pixel position, so as to update the probability values stored at the pixel positions and obtain the panoramic probability map updated again;
step S35: resetting the re-updated panoramic probability map as the panoramic probability map to be updated, resetting the next frame of detection image as the current detection image, then going to step S31, and executing step S31 according to the reset panoramic probability map to be updated and the reset current detection image.
8. A moving object detection alert device, the device comprising:
a target detection frame acquisition module configured to perform moving target detection on each frame of detection image, respectively, to acquire a target detection frame of each frame of detection image;
a to-be-processed detection frame acquisition module configured to acquire, for each frame of detection image, from the target detection frames of the previous frame or frames of detection images of that frame of detection image, a first to-be-processed detection frame having a track association relationship with the target detection frame of that frame of detection image and a second to-be-processed detection frame having no track association relationship;
a motion track generation module configured to generate motion tracks of one or more moving targets according to the image arrangement sequence of each frame of detection image, the target detection frame of each frame of detection image, and the first to-be-processed detection frame having a track association relationship with each target detection frame;
the motion area acquisition module is configured to perform statistical analysis on the detection frame position of the second detection frame to be processed so as to acquire a motion area of a moving object;
an alarm module configured to alarm according to the motion trail and/or the motion area of the moving object;
the to-be-processed detection frame acquisition module comprises an assigned gain calculation sub-module and a to-be-processed detection frame acquisition sub-module;
the assignment gain calculation sub-module is configured to set a target detection frame of one or more frames of detection images before a current detection image as a history detection frame, and calculate an assignment gain corresponding to each history detection frame when each history detection frame and each target detection frame in the current detection image are assigned to belong to the same motion track;
the to-be-processed detection frame acquisition submodule is configured to set a history detection frame corresponding to the maximum assignment gain as a first to-be-processed detection frame, and set other history detection frames as second to-be-processed detection frames;
The assignment gain represents the credibility of the same motion track of the historical detection frame and the target detection frame in the current detection image, and the numerical value of the assignment gain and the credibility form a positive correlation;
the assignment gain calculation sub-module is further configured to calculate the assignment gain corresponding to each history detection frame according to the method shown in the following formula:
gain_final_(i,j) = α0·gain_org_(i,j) + α1·S_(i,j) + α2·(1 − Δ1_(i,j)) + α3·(1 − Δ2_(i,j)) + α4·(1 − Δ3_(i,j))
wherein gain_final_(i,j) indicates the assignment gain corresponding to the i-th history detection frame when the i-th history detection frame and the j-th target detection frame in the current detection image are assigned to belong to the same motion track; gain_org_(i,j) represents the intersection-over-union of the i-th history detection frame and the j-th target detection frame in the current detection image; S_(i,j) represents the direction cosine of the direction vector of the i-th history detection frame and the direction vector of the j-th target detection frame; Δ1_(i,j) indicates the degree of change of the detection frame area, Δ1_(i,j) = Δarea_(i,j)/area_j, where Δarea_(i,j) represents the area difference between the i-th history detection frame and the j-th target detection frame and area_j represents the area of the j-th target detection frame; Δ2_(i,j) indicates the degree of change of the detection frame brightness, Δ2_(i,j) = Δbright_(i,j)/bright_j, where Δbright_(i,j) represents the brightness difference between the i-th history detection frame and the j-th target detection frame and bright_j represents the brightness of the j-th target detection frame; and Δ3_(i,j) indicates the degree of change of the detection frame hue, Δ3_(i,j) = Δhue_(i,j)/hue_j, where Δhue_(i,j) represents the hue difference between the i-th history detection frame and the j-th target detection frame and hue_j represents the hue value of the j-th target detection frame.
9. The moving object detection alert device according to claim 8, wherein the object detection frame acquisition module is further configured to:
acquiring the current brightness of a current detection image and the historical brightness of a continuous multi-frame detection image before the current detection image, and judging whether the brightness variation of the current brightness and the historical brightness is larger than or equal to a preset variation threshold;
if yes, not detecting a moving target of the current detection image;
if not, acquiring foreground pixels in the current detection image by adopting a foreground detection algorithm;
performing area communication on the foreground pixels to form one or more pixel groups;
obtaining an external rectangular frame of each pixel group and the size of each external rectangular frame;
acquiring an external rectangular frame consistent with a preset target size according to the size, and setting the external rectangular frame as a target detection frame of the current detection image;
The preset target size is determined according to the actual size of the moving target, the actual moving detection range of the moving target, the size of the detection image and the size of a target detection frame obtained by adopting continuous multi-frame detection images before the current detection image.
10. The moving object detection alarm device according to claim 9, further comprising an object detection frame merging module and/or an object detection frame filtering module;
the target detection frame merging module comprises a first to-be-combined detection frame acquisition sub-module, a second to-be-combined detection frame acquisition sub-module, a combined benefit value calculation sub-module, and a merging processing sub-module; the first to-be-combined detection frame acquisition sub-module is configured to select, from the target detection frames, a target detection frame whose area is less than or equal to a preset area threshold as the first to-be-combined detection frame; the second to-be-combined detection frame acquisition sub-module is configured to screen according to the distance between the first to-be-combined detection frame and other target detection frames in the current detection image, so as to obtain a second to-be-combined detection frame; the combined benefit value calculation sub-module is configured to calculate the combined benefit value of the first to-be-combined detection frame and the second to-be-combined detection frame; the merging processing sub-module is configured to selectively combine the first to-be-combined detection frame and the second to-be-combined detection frame according to a comparison result of the combined benefit value and a preset benefit threshold; and the combined benefit value is the ratio of the sum of the areas of the first to-be-combined detection frame and the second to-be-combined detection frame to the area of the new detection frame formed after the first to-be-combined detection frame and the second to-be-combined detection frame are combined;
The target detection frame filtering module comprises a first filtering sub-module and/or a second filtering sub-module and/or a third filtering sub-module; the first filtering sub-module is configured to calculate a state change value of each target detection frame in the current detection image according to detection frame state information of the target detection frame in the current detection image and detection frame state information of the target detection frame in one or more detection images before the current detection image; if the state change value is greater than or equal to a preset change threshold value, deleting the corresponding target detection frame; the detection frame state information comprises the brightness, the size and the position of a target detection frame, the state change value comprises a brightness change value, a size change value and a moving speed, and the preset change threshold comprises a brightness change threshold, a size change threshold and a moving speed threshold; the second filtering sub-module is configured to acquire the sum of the areas of all target detection frames in the current detection image, and calculate the ratio of the sum of the areas to the image area of the current detection image; if the ratio is greater than or equal to a preset ratio threshold, deleting all target detection frames in the current detection image; the third filtering sub-module is configured to acquire the length-width ratio of each target detection frame in the current detection image, and judge whether the length-width ratio is consistent with the length-width ratio of a preset moving target; if not, deleting the corresponding target detection frame.
11. The moving object detection alert device according to claim 10, wherein the second to-be-combined detection frame acquisition sub-module is further configured to:
step S11: setting the other target detection frames as detection frames to be screened;
step S12: obtaining K nearest neighbor detection frames of the first detection frame to be combined from the detection frames to be screened by adopting a nearest neighbor algorithm, wherein K is greater than or equal to 1;
step S13: judging whether the distance between each nearest neighbor detection frame and the first detection frame to be combined is greater than or equal to a preset distance threshold value, and acquiring an initial second detection frame to be combined and an updated detection frame to be screened according to a judging result;
if the distance corresponding to the current nearest neighbor detection frame is greater than or equal to the preset distance threshold, setting the current nearest neighbor detection frame as an initial second detection frame to be combined and deleting the current nearest neighbor detection frame from the detection frames to be screened so as to update the detection frames to be screened;
step S14: judging whether the number of the initial second detection frames to be combined is K or not; if yes, setting the initial second detection frame to be combined as a final second detection frame to be combined; if not, go to step S15;
Step S15: judging whether the number of the updated detection frames to be screened is zero or not;
if so, selecting K target detection frames from the other target detection frames in descending order of the distance between the first detection frame to be combined and each other target detection frame, and setting the K target detection frames as the final second detection frames to be combined; if not, turning to step S12 and executing step S12 according to the updated detection frames to be screened;
the merge processing sub-module is further configured to:
step S21: forming an initial detection frame set by the first detection frames to be combined and all the corresponding second detection frames to be combined;
step S22: obtaining a combined benefit value B1 after combining the detection frames in the initial detection frame set;
if B1 ≥ B1th, combining the detection frames in the initial detection frame set; if B1 < B1th, going to step S23; wherein B1th is a benefit threshold, B1th = A·B^(n-1), A and B are preset threshold coefficients, and n is the number of detection frames in the initial detection frame set;
step S23: respectively obtaining the combined benefit value corresponding to each sub-detection frame set of the initial detection frame set after the detection frames in that sub-detection frame set are combined, and obtaining the maximum combined benefit value B2 from the combined benefit values corresponding to the sub-detection frame sets, wherein the sub-detection frame sets differ from one another by which single detection frame is deleted;
if B2 ≥ B2th, combining the detection frames in the sub-detection frame set corresponding to B2; if B2 < B2th, going to step S24; wherein B2th is a benefit threshold, B2th = A·B^(n′-1), and n′ is the number of detection frames in the sub-detection frame set corresponding to B2;
step S24: judging whether the sub-detection frame set corresponding to B2 contains the first to-be-combined detection frame; if so, going to step S25; if not, not combining the first to-be-combined detection frame and the second to-be-combined detection frame;
step S25: after resetting the sub-detection frame set corresponding to B2 as the initial detection frame set, going to step S23 and executing step S23 according to the reset initial detection frame set.
12. The moving object detection alert device of claim 8, wherein the alert module is further configured to:
analyzing whether the motion trail belongs to the motion trail of the preset motion target or not according to the comparison result of the motion trail and the preset motion target;
When the motion trail belongs to the motion trail of the preset moving target, continuously analyzing whether the preset moving target is abnormal in action according to the comparison result, and alarming according to the analysis result and/or the motion area of the moving target;
and if the motion trail does not belong to the preset motion trail of the moving target, alarming according to the motion area of the moving target.
13. The moving object detection alert device according to claim 8, wherein the moving region acquisition module includes a first moving region acquisition sub-module and/or a second moving region acquisition sub-module;
the first motion region acquisition sub-module is configured to:
acquiring a second detection frame to be processed corresponding to each frame of detection image in the continuous multi-frame detection images;
clustering is carried out according to the detection frame position of each second detection frame to be processed, so as to obtain one or more clustering clusters;
acquiring the density of a second detection frame to be processed in each cluster;
acquiring a cluster with density larger than or equal to a preset density threshold value, and setting a region corresponding to the cluster as a motion region of the moving object;
The second motion region acquisition sub-module is configured to:
acquiring a second detection frame to be processed corresponding to each frame of detection image;
taking the length and the width of each second detection frame to be processed as two-dimensional variables, and obtaining a two-dimensional Gaussian distribution function corresponding to each second detection frame to be processed;
respectively acquiring a probability value of each coordinate position in each second to-be-processed detection frame by adopting the two-dimensional Gaussian distribution function, and constructing a panoramic probability map according to the probability values, wherein the probability value stored at each pixel point in the panoramic probability map represents the probability that the pixel point belongs to a moving target;
acquiring pixel point positions with probability values larger than or equal to a preset probability threshold in the panoramic probability map, and setting a region corresponding to the pixel point positions as a motion region of the moving object;
wherein the panoramic probability map is the same size as the detection image.
14. The moving object detection alert device according to claim 13, wherein the second motion region acquisition sub-module is further configured to construct a panoramic probability map by:
step S31: after moving object detection is carried out on the current detection image, respectively subtracting a preset attenuation value from a probability value stored in each pixel point position in the panoramic probability map to be updated so as to obtain the panoramic probability map after primary updating;
Step S32: judging whether a second to-be-processed detection frame which has no track association relation with the target detection frame of the current detection image is acquired or not; if yes, go to step S33; if not, resetting the next frame of detection image as the current detection image, and then turning to step S31, and executing step S31 according to the reset current detection image;
step S33: acquiring a probability value of each coordinate position in the second to-be-processed detection frame by adopting a two-dimensional Gaussian distribution function corresponding to the second to-be-processed detection frame;
step S34: according to the corresponding relation between each coordinate position in the second to-be-processed detection frame and each pixel position in the panoramic probability map to be updated, respectively accumulating the probability value of each coordinate position onto the probability value stored at each corresponding pixel position, so as to update the probability values stored at the pixel positions and obtain the panoramic probability map updated again;
step S35: resetting the re-updated panoramic probability map as the panoramic probability map to be updated, resetting the next frame of detection image as the current detection image, then going to step S31, and executing step S31 according to the reset panoramic probability map to be updated and the reset current detection image.
15. A control device comprising a processor and a storage device, the storage device being adapted to store a plurality of program codes, characterized in that the program codes are adapted to be loaded and executed by the processor to perform the moving object detection alert method of any one of claims 1 to 7.
16. A computer readable storage medium having stored therein a plurality of program codes, wherein the program codes are adapted to be loaded and executed by a processor to perform the moving object detection alert method of any one of claims 1 to 7.
CN202110083706.9A 2021-01-21 2021-01-21 Moving object detection alarm method, moving object detection alarm device and computer readable storage medium Active CN112784738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110083706.9A CN112784738B (en) 2021-01-21 2021-01-21 Moving object detection alarm method, moving object detection alarm device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110083706.9A CN112784738B (en) 2021-01-21 2021-01-21 Moving object detection alarm method, moving object detection alarm device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112784738A CN112784738A (en) 2021-05-11
CN112784738B true CN112784738B (en) 2023-09-19

Family

ID=75758387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110083706.9A Active CN112784738B (en) 2021-01-21 2021-01-21 Moving object detection alarm method, moving object detection alarm device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112784738B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113343856B (en) * 2021-06-09 2022-03-29 北京容联易通信息技术有限公司 Image recognition method and system
CN113255606A (en) * 2021-06-30 2021-08-13 深圳市商汤科技有限公司 Behavior recognition method and device, computer equipment and storage medium
CN114943936B (en) * 2022-06-17 2023-06-20 北京百度网讯科技有限公司 Target behavior recognition method and device, electronic equipment and storage medium
CN117671801B (en) * 2024-02-02 2024-04-23 中科方寸知微(南京)科技有限公司 Real-time target detection method and system based on binary reduction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9582895B2 (en) * 2015-05-22 2017-02-28 International Business Machines Corporation Real-time object analysis with occlusion handling

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004297697A (en) * 2003-03-28 2004-10-21 Fuji Photo Film Co Ltd Animation processor and method
WO2018095082A1 (en) * 2016-11-28 2018-05-31 江苏东大金智信息系统有限公司 Rapid detection method for moving target in video monitoring
WO2020151084A1 (en) * 2019-01-24 2020-07-30 北京明略软件系统有限公司 Target object monitoring method, apparatus, and system
CN110008867A (en) * 2019-03-25 2019-07-12 五邑大学 A kind of method for early warning based on personage's abnormal behaviour, device and storage medium
CN112015170A (en) * 2019-05-29 2020-12-01 北京市商汤科技开发有限公司 Moving object detection and intelligent driving control method, device, medium and equipment
CN110555868A (en) * 2019-05-31 2019-12-10 南京航空航天大学 method for detecting small moving target under complex ground background
CN110349181A (en) * 2019-06-12 2019-10-18 华中科技大学 One kind being based on improved figure partition model single camera multi-object tracking method
CN111179311A (en) * 2019-12-23 2020-05-19 全球能源互联网研究院有限公司 Multi-target tracking method and device and electronic equipment
CN111784739A (en) * 2020-06-24 2020-10-16 普联技术有限公司 Target identification method, device, equipment and storage medium
CN111738240A (en) * 2020-08-20 2020-10-02 江苏神彩科技股份有限公司 Region monitoring method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Adaptive moving target detection algorithm based on regional similarity information; Liu Yande et al.; Computer Engineering; Vol. 46, No. 3; pp. 273-279 *
Infrared target detection and tracking method based on target motion features; Lou Kang et al.; Journal of Nanjing University of Science and Technology; Vol. 43, No. 4; pp. 455-461 *

Also Published As

Publication number Publication date
CN112784738A (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN112784738B (en) Moving object detection alarm method, moving object detection alarm device and computer readable storage medium
CN102348128B (en) Surveillance camera system having camera malfunction detection function
CN110084165B (en) Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation
CN104966304B (en) Multi-target detection tracking based on Kalman filtering and nonparametric background model
CN103106766A (en) Forest fire identification method and forest fire identification system
JP7121353B2 (en) VIDEO ANALYSIS DEVICE, VIDEO ANALYSIS METHOD, AND VIDEO ANALYSIS PROGRAM
CN112927262B (en) Camera lens shielding detection method and system based on video
CN112800846A (en) High-altitude parabolic monitoring method and device, electronic equipment and storage medium
CN115081957A (en) Useless management platform of danger of keeping in and monitoring useless
CN115311623A (en) Equipment oil leakage detection method and system based on infrared thermal imaging
CN117409341B (en) Unmanned aerial vehicle illumination-based image analysis method and system
CN114216434A (en) Target confirmation method, system, equipment and storage medium for mobile measurement and control station
CN111460917B (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
CN113947744A (en) Fire image detection method, system, equipment and storage medium based on video
CN115187884A (en) High-altitude parabolic identification method and device, electronic equipment and storage medium
CN115984780B (en) Industrial solid waste warehouse-in and warehouse-out judging method and device, electronic equipment and medium
KR101581162B1 (en) Automatic detection method, apparatus and system of flame, smoke and object movement based on real time images
US20210150218A1 (en) Method of acquiring detection zone in image and method of determining zone usage
CN116543333A (en) Target recognition method, training method, device, equipment and medium of power system
CN113793365B (en) Target tracking method and device, computer equipment and readable storage medium
Babaryka et al. Technologies for building intelligent video surveillance systems and methods for background subtraction in video sequences
KR100615672B1 (en) Fire observation system using xy axis projection graph and its fire observation method and computer-readable recording medium to realize fire observation method
CN115393782A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114663750B (en) Submarine cable filling strip fracture identification method based on machine vision and deep learning
CN117830032B (en) Method and system for monitoring snapshot and risk assessment of power transmission line network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant