CN109427073B - Moving target determination method and device and electronic equipment

Moving target determination method and device and electronic equipment

Info

Publication number
CN109427073B
Authority
CN
China
Prior art keywords
foreground
target
foreground target
determining
preset
Prior art date
Legal status
Active
Application number
CN201710769938.3A
Other languages
Chinese (zh)
Other versions
CN109427073A (en)
Inventor
曾钦清
张睿轩
车军
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201710769938.3A
Publication of CN109427073A
Application granted
Publication of CN109427073B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 - Analysis of motion using feature-based methods involving reference images or patches
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30241 - Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a moving target determining method, a moving target determining device and electronic equipment. The moving target is determined from the foreground targets by means of multi-frame confidence scores and historical motion track judgment, instead of simply taking a foreground target as the moving target, so the accuracy of correct alarms is greatly improved.

Description

Moving target determination method and device and electronic equipment
Technical Field
The invention relates to the technical field of intelligent monitoring, in particular to a moving target determining method and device and electronic equipment.
Background
With the continuous development of video monitoring technology, video monitoring equipment has been widely applied to the field of security protection.
By analyzing the image data collected by the video monitoring device, a moving target in the image data can be obtained, and the behavior of the moving target is then detected. For example: if the moving target is person A, it is detected whether person A performs the target behavior of intruding into the alarm area, and an alarm is given when the target behavior is detected, so as to achieve the purpose of alarming and prevention.
At present, due to various causes such as leaf disturbance, light and shadow, the moving target obtained from the image data is often not the actual moving target that needs to be detected but an interfering moving target. For example: the actual moving target that needs to be detected is a person, but when the wind blows, the leaves shake and are taken as a moving target; or, when the wind blows at night, the shadow of the leaves on the ground moves and the shadow is taken as a moving target. In this case, although the acquired moving targets are all interfering moving targets, the alarm terminal still alarms because a moving target is detected, which leads to false alarms and reduces the accuracy of correct alarms. Therefore, how to improve the accuracy of correct alarms for moving targets is an urgent problem to be solved.
Disclosure of Invention
The embodiment of the invention aims to provide a moving target determining method, a moving target determining device and electronic equipment so as to improve the accuracy of correct alarm. The specific technical scheme is as follows:
a moving object determination method, the method comprising:
extracting a foreground target in a current video frame;
determining a first foreground target from the foreground targets;
for each first foreground target, obtaining a first video frame containing the first foreground target;
calculating a confidence score of the first foreground object in each first video frame;
judging whether the first foreground target meets a preset confidence condition or not according to the confidence score of the first foreground target in each first video frame;
determining the first foreground target meeting the preset confidence coefficient condition as a second foreground target;
acquiring a historical motion track of each second foreground target;
judging whether the historical motion track of each second foreground target meets a preset motion track condition or not;
and determining a second foreground target with the historical motion track meeting the preset motion track condition as a motion target.
Optionally, the step of determining a first foreground object from the foreground objects includes:
and determining, from the foreground targets, a first foreground target whose distance to the alarm area is smaller than a preset distance threshold.
Optionally, the step of calculating a confidence score of the first foreground object in each first video frame includes:
in each first video frame, performing feature extraction on the first foreground target to obtain the feature of the first foreground target in each first video frame;
and obtaining the confidence score of the first foreground object in each first video frame according to the characteristics of the first foreground object in each first video frame.
Optionally, the feature of the first foreground object in each first video frame includes:
local binary pattern features and at least one of gradient features, contour features.
Optionally, the step of obtaining a confidence score of the first foreground target in each first video frame according to the feature of the first foreground target in each first video frame includes:
and respectively inputting the features of the first foreground target in each first video frame into a preset feature model to obtain a confidence score of the first foreground target in each first video frame, wherein the preset feature model is a model trained on positive and negative samples and used for judging the confidence of the target.
Optionally, the step of determining whether the first foreground target meets a preset confidence level condition according to the confidence level score of the first foreground target in each first video frame includes:
judging whether the confidence score of the first foreground target in each first video frame is larger than a preset confidence score or not;
counting the number of first video frames with the confidence score of the first foreground target being larger than a preset confidence score;
and if the counted number of the first video frames is greater than the preset number, determining the first foreground target as the first foreground target meeting the preset confidence level condition.
Optionally, the step of determining, for each second foreground target, whether the historical motion trajectory of the second foreground target meets a preset motion trajectory condition includes:
for each second foreground target, determining the motion range of the second foreground target according to the historical motion track of the second foreground target;
judging whether the determined movement range exceeds a preset movement range or not;
if so, judging whether the historical motion track of the second foreground target is intersected with the alarm area;
and if the intersection exists, determining the second foreground target as the second foreground target meeting the preset motion track condition.
Optionally, the step of determining, for each second foreground target, whether the historical motion trajectory of the second foreground target meets a preset motion trajectory condition includes:
for each second foreground target, determining the starting position and the ending position of the second foreground target according to the historical motion track of the second foreground target;
determining the movement distance of the second foreground target according to the starting position and the ending position;
judging whether the determined movement distance is larger than a preset movement distance threshold value or not;
if so, judging whether the historical motion track of the second foreground target is intersected with the alarm area;
and if the intersection exists, determining the second foreground target as the second foreground target meeting the preset motion track condition.
Optionally, after the step of determining a second foreground target of which the historical motion trajectory satisfies the preset motion trajectory condition as a moving target, the method further includes:
and aiming at suspected second foreground targets except the moving target in the second foreground targets, identifying whether the suspected second foreground targets have the moving target through a deep learning algorithm.
A moving object determining apparatus, the apparatus comprising:
the extraction module is used for extracting a foreground target in a current video frame;
the first foreground target determining module is used for determining a first foreground target from the foreground targets;
a first video frame obtaining module, configured to obtain, for each first foreground object, a first video frame including the first foreground object;
a confidence score calculation module for calculating a confidence score for the first foreground target in each first video frame;
the confidence coefficient judging module is used for judging whether the first foreground target meets a preset confidence coefficient condition according to the confidence coefficient score of the first foreground target in each first video frame;
the second foreground target determining module is used for determining the first foreground target meeting the preset confidence coefficient condition as a second foreground target;
the historical motion track acquisition module is used for acquiring the historical motion track of each second foreground target;
the historical motion track judging module is used for judging whether the historical motion track of each second foreground target meets a preset motion track condition or not;
and the moving target determining module is used for determining a second foreground target of which the historical moving track meets the preset moving track condition as a moving target.
Optionally, the first foreground object determining module is specifically configured to:
and determining, from the foreground targets, a first foreground target whose distance to the alarm area is smaller than a preset distance threshold.
Optionally, the confidence score calculating module includes:
the feature extraction unit is used for performing feature extraction on the first foreground target in each first video frame to obtain the feature of the first foreground target in each first video frame;
and the confidence score calculating unit is used for obtaining the confidence score of the first foreground object in each first video frame according to the characteristics of the first foreground object in each first video frame.
Optionally, the feature of the first foreground object in each first video frame includes:
local binary pattern features and at least one of gradient features, contour features.
Optionally, the confidence score calculating unit is specifically configured to:
and respectively inputting the features of the first foreground target in each first video frame into a preset feature model to obtain a confidence score of the first foreground target in each first video frame, wherein the preset feature model is a model trained on positive and negative samples and used for judging the confidence of the target.
Optionally, the confidence level determining module includes:
the judging unit is used for judging whether the confidence score of the first foreground target in each first video frame is larger than a preset confidence score or not;
the statistic unit is used for counting the number of first video frames of which the confidence coefficient scores of the first foreground targets are larger than the preset confidence coefficient scores;
and the first determining unit is used for determining the first foreground target as the first foreground target meeting the preset confidence level condition if the counted number of the first video frames is greater than the preset number.
Optionally, the historical motion trajectory determining module includes:
the motion range determining unit is used for determining the motion range of each second foreground target according to the historical motion track of the second foreground target;
the motion range judging unit is used for judging whether the determined motion range exceeds a preset motion range or not, and if so, triggering the first intersection judging unit;
the first intersection judging unit is used for judging whether the historical motion track of the second foreground target is intersected with the alarm area or not, and if so, triggering a second determining unit;
and the second determining unit is used for determining the second foreground target as the second foreground target meeting the preset motion track condition.
Optionally, the historical motion trajectory determining module includes:
the position determining unit is used for determining the starting position and the ending position of each second foreground target according to the historical motion track of the second foreground target;
a moving distance determining unit, configured to determine a moving distance of the second foreground object according to the starting position and the ending position;
the movement distance judging unit is used for judging whether the determined movement distance is larger than a preset movement distance threshold value or not, and if so, triggering the second intersection judging unit;
the second intersection judging unit is used for judging whether the historical motion track of the second foreground target is intersected with the alarm area or not, and if so, triggering a third determining unit;
and the third determining unit is used for determining the second foreground target as the second foreground target meeting the preset motion track condition.
Optionally, the apparatus further comprises:
and the identification module is used for identifying whether a moving target exists in suspected second foreground targets except the moving target in the second foreground targets by a deep learning algorithm after the second foreground targets with the historical moving tracks meeting the preset moving track condition are determined as the moving targets.
An electronic device comprising a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor is configured to implement any of the above method steps when executing the computer program stored in the memory.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any of the above.
In the embodiment of the invention, foreground targets in a current video frame are extracted, first foreground targets are determined from the foreground targets, a first video frame containing the first foreground targets is obtained for each first foreground target, the confidence score of the first foreground targets in each first video frame is calculated, whether the first foreground targets meet preset confidence conditions or not is judged according to the confidence score of the first foreground targets in each first video frame, the first foreground targets meeting the preset confidence conditions are determined as second foreground targets, the historical motion track of each second foreground target is obtained, whether the historical motion track of the second foreground targets meets the preset motion track conditions or not is judged for each second foreground target, and the second foreground targets with the historical motion tracks meeting the preset motion track conditions are determined as the motion targets. Therefore, the moving target is determined from the foreground targets by means of multi-frame confidence scores and historical motion track judgment, instead of simply taking a foreground target as the moving target, and the accuracy of correct alarms is greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a first flowchart of a moving object determining method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of calculating a confidence score of the first foreground object in each first video frame according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a process for determining whether the first foreground target meets a preset confidence condition according to an embodiment of the present invention;
fig. 4 is a first flowchart illustrating that, for each second foreground target, whether the historical motion trajectory of the second foreground target meets a preset motion trajectory condition is determined according to the embodiment of the present invention;
fig. 5 is a schematic diagram of determining a motion range of a second foreground object according to an embodiment of the present invention;
fig. 6 is a second flowchart illustrating a process of determining, for each second foreground target, whether a historical motion trajectory of the second foreground target meets a preset motion trajectory condition according to the embodiment of the present invention;
fig. 7 is a schematic diagram of determining a start position and an end position of a second foreground object according to an embodiment of the present invention;
fig. 8 is a second flowchart of a moving object determining method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a moving object determining apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problem of the prior art, embodiments of the present invention provide a moving object determining method, a moving object determining device, and an electronic device.
First, a moving object determining method provided by an embodiment of the present invention is described below.
It should be noted that the moving target determining method provided by the embodiment of the present invention can be applied to, but is not limited to, an image capturing device or a server communicatively connected to the image capturing device.
As shown in fig. 1, a method for determining a moving object according to an embodiment of the present invention may include:
s101: and extracting foreground objects in the current video frame.
And continuously acquiring real-time scene images through an image acquisition device, and extracting a foreground target in the acquired current video frame.
The scene in the real-time scene image may be a scene that needs to be subjected to alarm detection, for example: judging whether a person intrudes into a certain residential community and alarming when a person intrudes, in which case the scene is a scene containing the residential community. For another example, judging whether a vehicle enters a certain parking lot: when the vehicle is detected driving into an area where parking is forbidden, a loudspeaker prompts that parking is forbidden in the current area and an alarm prompt is sent out at the monitoring end, in which case the scene contains the parking lot.
The above-mentioned extracting the foreground object in the current video frame may be: and acquiring a foreground image corresponding to the current video frame, and extracting a foreground target from the foreground image.
In addition, in order to more accurately determine the actual moving target which needs to be detected, the foreground target may be a moving foreground target, where the moving foreground target may be extracted from the current video frame by establishing a Gaussian mixture model.
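Purely as an illustrative sketch (not part of the claimed method), foreground extraction with a Gaussian mixture background model could look as follows; the use of the OpenCV library, the function name extract_foreground_targets and the minimum-area value are assumptions made only for this example:

```python
import cv2

# Gaussian mixture background model for extracting moving foreground targets.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

def extract_foreground_targets(frame, min_area=100):
    """Return bounding boxes of moving foreground targets in the current video frame."""
    mask = subtractor.apply(frame)                               # foreground mask of this frame
    mask = cv2.medianBlur(mask, 5)                               # suppress isolated noise pixels
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels (value 127)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```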
S102: a first foreground object is determined from the foreground objects.
After extracting foreground objects in the current video frame, in order to determine actual moving objects which usually need to be detected, a first foreground object needs to be determined from the foreground objects.
It should be noted that there are various ways to determine the first foreground object from the foreground objects, including but not limited to the following:
the first mode is as follows: and determining the extracted foreground target as a first foreground target.
Since each foreground object in the current video frame is likely to be an actual moving object that needs to be detected, the extracted foreground object may be determined as the first foreground object, and then subsequent calculation is performed.
The second mode is as follows: and determining a first foreground target with the distance to the alarm area smaller than a preset distance threshold from the foreground targets.
An alarm area is set in the current video frame, and a moving target generally needs to be close to the alarm area to meet the alarm condition. Therefore, in order to determine the actual moving target that needs to be detected more accurately, a first foreground target whose distance to the alarm area is smaller than a preset distance threshold can be determined from the foreground targets.
Therefore, the first foreground target with the distance to the alarm area smaller than the preset distance threshold value is screened out from the foreground targets, and then the first foreground target is subjected to subsequent calculation, so that the calculation amount is greatly reduced, and the calculation speed is improved.
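Illustratively, the screening of first foreground targets by their distance to the alarm area might be sketched as below; this assumes the alarm area is given as a polygon, uses OpenCV's point-to-polygon distance, and takes the bounding-box centre as the target position, all of which are illustrative choices rather than requirements of the method:

```python
import cv2
import numpy as np

def select_first_foreground_targets(boxes, alarm_polygon, distance_threshold):
    """Keep foreground targets whose distance to the alarm area is below the preset threshold."""
    polygon = np.asarray(alarm_polygon, dtype=np.float32)
    selected = []
    for (x, y, w, h) in boxes:
        center = (x + w / 2.0, y + h / 2.0)
        # Signed distance from the target centre to the alarm polygon (>= 0 means inside).
        signed_dist = cv2.pointPolygonTest(polygon, center, measureDist=True)
        if signed_dist >= 0 or -signed_dist < distance_threshold:
            selected.append((x, y, w, h))
    return selected
```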
S103: for each first foreground object, a first video frame containing the first foreground object is obtained.
After determining the first foreground object from the foreground objects, for each first foreground object, in order to determine whether the first foreground object is an actual moving object to be detected, a first video frame including the first foreground object needs to be obtained, that is, a video frame in which the first foreground object appears needs to be obtained.
Since the number of video frames in which the first foreground target appears may be one or more, one or more first video frames containing the first foreground target may be obtained.
For example, only the first video frames containing the first foreground target in a certain time period may be obtained according to actual needs. For example: assuming the first foreground target is H and the first foreground target H appears during 8:00-8:30, only the first video frames containing the first foreground target H within the period 8:00-8:10 may be obtained.
All first video frames containing the first foreground object may also be obtained, for example: assuming that the first foreground target is H, the current video frame is the 5 th frame, and the first foreground target H appears in the 2 nd frame to the 5 th frame, all the obtained first video frames containing the first foreground target H are: frame 2, frame 3, frame 4, and frame 5.
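For illustration only, the set of first video frames containing each first foreground target could be kept in a simple per-target history, under the assumption that foreground targets are already associated across frames by a tracking identifier (the data structure and names below are assumptions, not the patented implementation):

```python
from collections import defaultdict

# Maps a tracked target id to the list of (frame_index, cropped_patch) pairs taken from
# the first video frames in which that first foreground target appears.
first_video_frames = defaultdict(list)

def record_first_video_frame(target_id, frame_index, frame, box):
    x, y, w, h = box
    first_video_frames[target_id].append((frame_index, frame[y:y + h, x:x + w].copy()))
```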
S104: a confidence score for the first foreground subject in each first video frame is calculated.
After the first video frames including the first foreground target are obtained, in order to determine whether the first foreground target is an actual moving target that needs to be detected, the determination may be performed by calculating a confidence score, where a higher confidence score indicates a higher confidence that the first foreground target is the actual moving target that usually needs to be detected.
The above calculation may be to determine a confidence score of the first foreground object in each first video frame by scoring the features. Referring to fig. 2, step S104 may include:
s1041: and in each first video frame, performing feature extraction on the first foreground target to obtain the feature of the first foreground target in each first video frame.
Since the actual moving object to be detected generally has certain characteristics, for example: the actual moving object to be detected is usually a person, and the person generally has a person's contour, a face contour, a skin texture of the person, and so on, and therefore, in order to determine whether the first foreground object is the actual moving object to be detected, the features of the first foreground object in each first video frame may be obtained by means of feature extraction in each first video frame.
There are many features that can be extracted, including but not limited to the following:
the first method comprises the following steps: texture characteristics:
Texture features describe the surface properties of the objects corresponding to an image or image region. A commonly used texture feature operator is the LBP (Local Binary Pattern) feature descriptor, an operator for describing the local texture features of an image, which has significant advantages such as rotation invariance and gray-scale invariance.
And the second method comprises the following steps: boundary characteristics:
Boundary features describe the shape parameters of objects in an image. A common boundary feature extraction method is the Hough transform, which uses the global characteristics of the image to connect edge pixels into a closed region boundary; its basic idea is the duality between points and lines.
And the third is that: gradient characteristics:
The gradient feature extracts the gradient of the image by means of edge detection. A commonly used gradient operator such as the Sobel operator obtains the direction and the amplitude of the gradient at each pixel point in the image through the following formulas:
M(x, y) = sqrt(Ix^2 + Iy^2)
θ(x, y) = arctan(Iy / Ix)
wherein M(x, y) is the amplitude of the gradient at the pixel point, θ(x, y) is the direction of the gradient at the pixel point, Ix is the gradient amplitude of the pixel point in the horizontal direction, Iy is the gradient amplitude of the pixel point in the vertical direction, x is the horizontal direction, and y is the vertical direction.
Therefore, the gradient feature vector of each pixel point is formed through the direction and the amplitude of the gradient of each pixel point.
In summary, obtaining the feature of the first foreground object in each first video frame by means of feature extraction may include:
local binary pattern features and at least one of gradient features, contour features.
Since extracting more features increases the possibility of accurately identifying the actual moving target to be detected, as many features as possible can be extracted.
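Illustratively, a minimal sketch combining a local binary pattern feature, a Sobel gradient feature and a contour feature is given below. It is only one possible realization; it assumes the OpenCV and scikit-image libraries, and the histogram sizes and parameters are chosen for illustration:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def extract_features(target_patch):
    """Build a feature vector (LBP + gradient + contour) for a first foreground target patch."""
    gray = cv2.cvtColor(target_patch, cv2.COLOR_BGR2GRAY)

    # Local binary pattern texture feature (uniform LBP, 8 neighbours, radius 1).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # Sobel gradient feature: amplitude M(x, y) and direction theta(x, y) at each pixel.
    ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.sqrt(ix ** 2 + iy ** 2)
    direction = np.arctan2(iy, ix)
    grad_hist, _ = np.histogram(direction, bins=9, range=(-np.pi, np.pi), weights=magnitude)
    grad_hist = grad_hist / (grad_hist.sum() + 1e-6)

    # Contour (boundary) feature: a few shape statistics of the largest contour.
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        contour_feat = [cv2.contourArea(largest), cv2.arcLength(largest, True)]
    else:
        contour_feat = [0.0, 0.0]

    return np.concatenate([lbp_hist, grad_hist, np.asarray(contour_feat, dtype=np.float32)])
```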
S1042: and obtaining the confidence score of the first foreground object in each first video frame according to the characteristics of the first foreground object in each first video frame.
After the features of the first foreground target in each first video frame are obtained, the confidence score of the first foreground target in each first video frame is obtained according to those features, wherein a higher confidence score indicates a higher confidence that the first foreground target is the actual moving target that needs to be detected.
The obtaining the confidence score of the first foreground object in each first video frame according to the feature of the first foreground object in each first video frame may include:
and respectively inputting the characteristics of the first foreground target in each first video frame into a preset characteristic model to obtain the confidence score of the first foreground target in each first video frame, wherein the preset characteristic model is a model trained on positive and negative samples and used for judging the confidence of the target.
The process of establishing the preset feature model for the target confidence judgment may include, but is not limited to:
(1) extracting the features of positive and negative samples, wherein a positive sample is an actual moving target, such as a person or a cat, and a negative sample is an interfering moving target, such as leaves; the features of the positive and negative samples can be extracted respectively according to the feature extraction manner in step S1041;
(2) training on the extracted features of the positive and negative samples by using a Support Vector Machine (SVM) to obtain the feature model.
In machine learning, an SVM (Support Vector Machine) is a supervised learning model with associated learning algorithms that analyze data and recognize patterns, and it is used for classification and regression analysis.
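By way of illustration only, such a preset feature model could be trained and queried with scikit-learn as sketched below; the linear kernel, the 0-100 score scaling and the function names are assumptions rather than the patented implementation:

```python
import numpy as np
from sklearn.svm import SVC

def train_feature_model(positive_features, negative_features):
    """Train an SVM on positive (actual target) and negative (interference) sample features."""
    X = np.vstack([positive_features, negative_features])
    y = np.concatenate([np.ones(len(positive_features)), np.zeros(len(negative_features))])
    model = SVC(kernel="linear", probability=True)
    model.fit(X, y)
    return model

def confidence_score(model, feature_vector):
    """Map the model's probability of the positive class to a 0-100 confidence score."""
    prob_positive = model.predict_proba(feature_vector.reshape(1, -1))[0][1]
    return 100.0 * prob_positive
```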
S105: and judging whether the first foreground target meets a preset confidence level condition or not according to the confidence level score of the first foreground target in each first video frame.
Because the confidence score of the first foreground target in a single first video frame only represents the confidence that the first foreground target is the actual moving target to be detected in that one frame, judging from a single frame has a higher probability of misjudgment.
Therefore, in order to reduce the probability of erroneous judgment, after the confidence score of the first foreground target in each first video frame is calculated, whether the first foreground target meets the preset confidence condition may be judged according to the confidence score of the first foreground target in each first video frame, that is, judgment is performed according to the multi-frame confidence score, instead of only according to the confidence score of one frame, and then the subsequent steps are performed according to the judgment result.
Referring to fig. 3, step S105 may include:
s1051: and judging whether the confidence score of the first foreground target in each first video frame is larger than a preset confidence score.
After the confidence score of the first foreground target in each first video frame is obtained, it is necessary to determine whether the confidence score of the first foreground target in each first video frame is greater than a preset confidence score.
If the confidence score of the first foreground target in a certain first video frame is greater than the preset confidence score, which indicates that the confidence score is higher, the confidence that the first foreground target is an actual moving target to be detected in the certain first video frame is higher; and if the confidence score of the first foreground target in a certain first video frame is smaller than the preset confidence score, which indicates that the confidence score is lower, the confidence that the first foreground target is an actual moving target to be detected in the certain first video frame is lower.
S1052: and counting the number of the first video frames of which the confidence score of the first foreground target is greater than a preset confidence score.
In order to determine whether the first foreground target meets the preset confidence level condition, after determining the magnitude relationship between the confidence level score of the first foreground target in each first video frame and the preset confidence level score, the number of the first video frames of which the confidence level score is greater than the preset confidence level score needs to be counted.
S1053: and if the counted number of the first video frames is greater than the preset number, determining the first foreground target as the first foreground target meeting the preset confidence level condition.
If the counted number of first video frames is larger than the preset number, indicating that the confidence scores of the first foreground target are high in multiple video frames, the confidence that the first foreground target is an actual moving target to be detected is high, and therefore the first foreground target can be determined as a first foreground target meeting the preset confidence condition.
For ease of understanding, the method of FIG. 3 is described in detail below with respect to a specific embodiment:
for example: assuming that the current video frame is the 3rd frame and the first foreground target H appears in the 1st frame for the first time, the first video frames are the 1st frame, the 2nd frame and the 3rd frame; assume that the confidence score of the first foreground target H is 60 in the 1st frame, 50 in the 2nd frame and 70 in the 3rd frame, and that the preset confidence score is 55;
it is judged whether the confidence scores 60, 50 and 70 of the first foreground target H in the first video frames are greater than the preset confidence score 55;
since 60 and 70 are larger than 55, the number of the first video frames with the confidence score of the first foreground object H larger than the preset confidence score 55 is counted as 2;
assuming that the preset number is 1, since the counted number 2 of the first video frames is greater than the preset number 1, the first foreground object H is determined as the first foreground object satisfying the preset confidence condition.
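A minimal sketch of the multi-frame confidence judgment of steps S1051 to S1053 might look as follows; the function name and parameters are illustrative:

```python
def satisfies_confidence_condition(confidence_scores, preset_confidence_score, preset_number):
    """Return True when the confidence score exceeds the preset confidence score
    in more than the preset number of first video frames."""
    high_score_frames = sum(1 for score in confidence_scores if score > preset_confidence_score)
    return high_score_frames > preset_number

# Worked example from the description: scores 60, 50, 70; preset score 55; preset number 1.
assert satisfies_confidence_condition([60, 50, 70], 55, 1)
```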
S106: and determining the first foreground target meeting the preset confidence level condition as a second foreground target.
And after the first foreground target meeting the preset confidence level condition is determined, determining the first foreground target meeting the preset confidence level condition as a second foreground target.
S107: and acquiring the historical motion track of each second foreground target.
The motion trajectory of the actual moving target to be detected generally satisfies certain conditions. For example: if the actual moving target is person B, the movement distance from the appearance of person B to the intrusion into the alarm area is generally long, or the movement range from the appearance of person B to the intrusion into the alarm area is generally large, whereas interfering moving targets such as leaves only tremble or sway in place with small position changes. Therefore, in order to judge whether each second foreground target is an actual moving target to be detected, after the second foreground targets are determined, the historical motion track of each second foreground target is acquired.
The historical motion track of each second foreground target is determined from the positions of the second foreground target in the video frames preceding the current video frame.
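For illustration, the historical motion track could be accumulated per tracked target as a list of centre positions, one per video frame; the data structure below is an assumption, not part of the claims:

```python
from collections import defaultdict

# Maps a tracked second-foreground-target id to its historical motion track,
# i.e. the target's centre position in each video frame preceding the current one.
historical_trajectories = defaultdict(list)

def update_trajectory(target_id, box):
    x, y, w, h = box
    historical_trajectories[target_id].append((x + w / 2.0, y + h / 2.0))
```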
S108: and judging whether the historical motion track of each second foreground target meets a preset motion track condition or not.
Since the motion trajectory of the actual moving target to be detected generally meets a certain condition, after the historical motion trajectory of each second foreground target is obtained, in order to determine the actual moving target to be detected, it is determined whether the historical motion trajectory of the second foreground target meets a preset motion trajectory condition for each second foreground target.
It should be noted that, for each second foreground target, there are various ways of determining whether the historical motion trajectory of the second foreground target meets the preset motion trajectory condition, which are described in detail below:
in one implementation of the present invention, referring to fig. 4, step S108 may include:
s1081: and for each second foreground target, determining the motion range of the second foreground target according to the historical motion track of the second foreground target.
Because the motion range of the actual moving target to be detected is generally large, after the historical motion trajectory of each second foreground target is obtained, the motion range of the second foreground target can be determined for each second foreground target according to the historical motion trajectory of the second foreground target.
The method for determining the motion range of the second foreground object according to the historical motion trajectory of the second foreground object may be: and determining a target area which can completely cover the historical motion trail of the second foreground target, and determining the area as the motion range of the second foreground target.
For example: as shown in fig. 5, assuming that the historical motion trajectory of the second foreground object is W, a target area Y that can completely cover the historical motion trajectory W of the second foreground object is determined, and the target area Y is determined as the motion range of the second foreground object.
S1082: and judging whether the determined movement range exceeds a preset movement range, if so, executing the step S1083, and if not, not performing any processing.
Since the motion range of the actual moving object to be detected is generally large, after the motion range of the second foreground object is determined, it is necessary to determine whether the determined motion range exceeds a preset motion range, and execute subsequent steps according to the determination result.
S1083: and judging whether the historical motion track of the second foreground target is intersected with the alarm area, if so, executing the step S1084, and if not, not performing any processing.
If the determined motion range exceeds the preset motion range, the motion range of the second foreground object is larger, and because the actual motion object to be detected needs to have a behavior of invading the alarm area, whether the historical motion track of the second foreground object intersects with the alarm area needs to be further judged, and the subsequent steps are carried out according to the judgment result.
S1084: and determining the second foreground target as the second foreground target meeting the preset motion track condition.
And if the historical motion track of the second foreground target has intersection with the alarm area, the behavior that the second foreground target invades the alarm area is shown, and at the moment, the second foreground target is determined as the second foreground target meeting the preset motion track condition.
For example, satisfying the preset motion track condition may mean that the motion track of the second foreground target is not confined within a preset range and that the similarity of the motion tracks in two consecutive time periods is low. For example, in a surveillance video with a total length of 10 minutes, the motion track of clothes hung on a clothesline always stays within 1 square meter and shakes back and forth in each minute; if similarity calculation determines that the similarity of the track in each minute reaches more than 70%, the detected moving target (the clothes) is judged to be an interfering moving target.
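Purely as an illustrative sketch of the first judging manner (steps S1081 to S1084), the motion-range check and the track/alarm-area intersection test could be written as below; it assumes the track is a list of (x, y) points, the alarm area is a polygon and the shapely library is available, and treating the preset motion range as a width/height pair is an illustrative choice:

```python
from shapely.geometry import LineString, Polygon

def satisfies_trajectory_condition_by_range(trajectory, alarm_polygon, preset_width, preset_height):
    """First manner: the motion range must exceed the preset range and the
    historical motion track must intersect the alarm area."""
    xs = [p[0] for p in trajectory]
    ys = [p[1] for p in trajectory]
    # Target area fully covering the historical motion track (axis-aligned bounding box).
    range_width, range_height = max(xs) - min(xs), max(ys) - min(ys)
    if range_width <= preset_width and range_height <= preset_height:
        return False   # the motion range does not exceed the preset motion range
    # Does the historical motion track intersect the alarm area?
    return LineString(trajectory).intersects(Polygon(alarm_polygon))
```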
In another implementation manner of the present invention, referring to fig. 6, step S108 may include:
s1085: and for each second foreground target, determining the starting position and the ending position of the second foreground target according to the historical motion track of the second foreground target.
Because the actual moving object to be detected generally has a longer moving distance, after the historical moving trajectory of each second foreground object is obtained, for each second foreground object, the starting position and the ending position of the second foreground object can be determined according to the historical moving trajectory of the second foreground object.
It should be noted that the starting position of the second foreground target is the position of the second foreground target in the target video frame, where the target video frame is the video frame where the second foreground target appears for the first time, and the ending position is the position of the second foreground target in the current video frame.
For example, determining the start position and the end position of the second foreground target according to the historical motion track of the second foreground target may be:
For example: as shown in fig. 7, assuming that the historical motion trajectory of the second foreground object is W, the starting point of the historical motion trajectory of the second foreground object is M, and the end point of the historical motion trajectory of the second foreground object is N, the starting point M is determined as the starting position of the second foreground object, and the end point N is determined as the end position of the second foreground object.
S1086: and determining the movement distance of the second foreground object according to the starting position and the ending position.
After the starting position and the ending position of the second foreground target are determined, the movement distance of the second foreground target is determined according to the starting position and the ending position.
Wherein, according to the starting position and the ending position, determining the moving distance of the second foreground object may be: and determining the straight-line distance between the starting position and the ending position as the moving distance of the second foreground object.
For example: continuing with fig. 7, connecting the starting position M and the ending position N, the length of MN is the moving distance of the second foreground object.
S1087: and judging whether the determined movement distance is larger than a preset movement distance threshold value or not, if so, executing the step S1088, and if not, not carrying out any processing.
Because the actual moving object to be detected generally has a longer moving distance, after the moving distance of the second foreground object is determined, whether the determined moving distance is greater than a preset moving distance threshold value is judged, and the subsequent steps are performed according to the judgment result.
S1088: and judging whether the historical motion track of the second foreground target is intersected with the alarm area, if so, executing the step S1089, and if not, not performing any processing.
If the determined movement distance is larger than the preset movement distance threshold, it is indicated that the movement distance of the second foreground object is longer, and because the actual movement object to be detected needs to have a behavior of invading the alarm area, it is further required to determine whether the historical movement track of the second foreground object intersects with the alarm area, and perform the subsequent steps according to the determination result.
S1089: and determining the second foreground target as the second foreground target meeting the preset motion track condition.
And if the historical motion track of the second foreground target has intersection with the alarm area, the behavior that the second foreground target invades the alarm area is shown, and at the moment, the second foreground target is determined as the second foreground target meeting the preset motion track condition.
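Similarly, the second judging manner (steps S1085 to S1089) might be sketched as follows, again assuming a point-list track, a polygonal alarm area and the shapely library; the straight-line distance between the start position and the end position is used as described above:

```python
import math
from shapely.geometry import LineString, Polygon

def satisfies_trajectory_condition_by_distance(trajectory, alarm_polygon, preset_distance):
    """Second manner: the straight-line distance from the start position to the end position
    must exceed the preset distance threshold and the track must intersect the alarm area."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]   # start position M and end position N
    movement_distance = math.hypot(x1 - x0, y1 - y0)     # length of the segment MN
    if movement_distance <= preset_distance:
        return False
    return LineString(trajectory).intersects(Polygon(alarm_polygon))
```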
S109: and determining the second foreground target with the historical motion track meeting the preset motion track condition as the motion target.
After the second foreground target with the historical motion track meeting the preset motion track condition is determined from the second foreground targets, the second foreground target with the historical motion track meeting the preset motion track condition can be determined as the motion target, namely the actual motion target needing to be detected.
In the embodiment of the invention, foreground targets in a current video frame are extracted, first foreground targets are determined from the foreground targets, a first video frame containing the first foreground targets is obtained for each first foreground target, the confidence score of the first foreground targets in each first video frame is calculated, whether the first foreground targets meet preset confidence conditions or not is judged according to the confidence score of the first foreground targets in each first video frame, the first foreground targets meeting the preset confidence conditions are determined as second foreground targets, the historical motion track of each second foreground target is obtained, whether the historical motion track of the second foreground targets meets the preset motion track conditions or not is judged for each second foreground target, and the second foreground targets with the historical motion tracks meeting the preset motion track conditions are determined as the motion targets. Therefore, the moving target is determined from the foreground targets by means of multi-frame confidence scores and historical motion track judgment, instead of simply taking a foreground target as the moving target, and the accuracy of correct alarms is greatly improved.
On the basis of the method shown in fig. 1, as shown in fig. 8, a method for determining a moving target according to an embodiment of the present invention may further include, after step S109:
S110: for suspected second foreground targets other than the moving target among the second foreground targets, identifying whether a moving target exists among the suspected second foreground targets through a deep learning method.
After the moving object is determined from the foreground object by the method shown in fig. 1, there may be an undetermined moving object in the foreground object, so that further determination is needed for a suspected second foreground object other than the moving object in the second foreground object.
In an implementation manner of the present application, a further determination manner for a suspected second foreground target other than a moving target in the second foreground target is as follows: and identifying whether a moving target exists in the suspected second foreground target through a deep learning algorithm.
Illustratively, identifying whether a moving object exists in the suspected second foreground object through the deep learning algorithm may include the steps of:
First, a shallow CNN (Convolutional Neural Network) structure is constructed, and the judgment of the moving target is then realized. The process is as follows:
1) collecting a moving object sample: a large number of positive and negative samples of actual moving objects can be collected, such as 5000 samples each, wherein the positive samples can be samples of people in different weather and different time periods; the negative sample can be leaves or lamplight and the like;
2) model training: adjusting the relevant parameters of the CNN model for training on actual moving targets, setting a preset number of network layers, such as 6, 7 or 10 layers, and setting preset target categories, such as category 2 (person), category 3 (vehicle), category 7 (dog), category 8 (cat), and the like;
3) completing training to obtain a moving target verification module;
4) verifying the foreground target: inputting a suspected second foreground target into the moving target verification module, and if the output label and confidence meet the preset conditions, for example the label is 0 and the confidence is greater than 0.2 (an empirical threshold), determining the suspected second foreground target as a moving target.
Wherein, the steps 1) to 4) are all processes executed by a machine.
Therefore, the moving target is determined from the suspected second foreground target in a deep learning mode, and the accuracy of moving target identification is improved.
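As a hedged illustration of such a shallow CNN verification module (not the patented model itself), a small network could be defined with PyTorch roughly as follows; the layer sizes, the 64x64 input resolution, the default label 0 and the 0.2 confidence threshold follow the illustrative values above:

```python
import torch
import torch.nn as nn

class ShallowVerifierCNN(nn.Module):
    """Small CNN that classifies a 64x64 RGB patch into preset target categories."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def verify_suspected_target(model, patch_tensor, label_of_interest=0, confidence_threshold=0.2):
    """Return True if the verification module outputs the preset label with enough confidence."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(patch_tensor.unsqueeze(0)), dim=1)[0]
    return int(probs.argmax()) == label_of_interest and float(probs[label_of_interest]) > confidence_threshold
```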
With respect to the above method embodiment, as shown in fig. 9, an embodiment of the present invention further provides a moving object determining apparatus, where the apparatus may include:
an extracting module 201, configured to extract a foreground object in a current video frame;
a first foreground object determining module 202, configured to determine a first foreground object from the foreground objects;
a first video frame obtaining module 203, configured to obtain, for each first foreground object, a first video frame including the first foreground object;
a confidence score calculation module 204, configured to calculate a confidence score of the first foreground object in each first video frame;
a confidence level determining module 205, configured to determine whether the first foreground target meets a preset confidence level condition according to a confidence level score of the first foreground target in each first video frame;
a second foreground target determining module 206, configured to determine the first foreground target meeting the preset confidence condition as a second foreground target;
a historical motion track obtaining module 207, configured to obtain a historical motion track of each second foreground object;
a historical motion trajectory determination module 208, configured to determine, for each second foreground target, whether a historical motion trajectory of the second foreground target meets a preset motion trajectory condition;
and a moving target determining module 209, configured to determine, as a moving target, a second foreground target of which the historical moving trajectory satisfies the preset moving trajectory condition.
In the embodiment of the invention, foreground targets in a current video frame are extracted, first foreground targets are determined from the foreground targets, a first video frame containing the first foreground targets is obtained for each first foreground target, the confidence score of the first foreground targets in each first video frame is calculated, whether the first foreground targets meet preset confidence conditions or not is judged according to the confidence score of the first foreground targets in each first video frame, the first foreground targets meeting the preset confidence conditions are determined as second foreground targets, the historical motion track of each second foreground target is obtained, whether the historical motion track of the second foreground targets meets the preset motion track conditions or not is judged for each second foreground target, and the second foreground targets with the historical motion tracks meeting the preset motion track conditions are determined as the motion targets. Therefore, the moving target is determined from the foreground targets by means of multi-frame confidence scores and historical motion track judgment, instead of simply taking a foreground target as the moving target, and the accuracy of correct alarms is greatly improved.
In an implementation manner, the first foreground object determining module 202 may be specifically configured to:
and determining, from the foreground targets, a first foreground target whose distance to the alarm area is smaller than a preset distance threshold.
In one implementation, the confidence score calculation module 204 may include:
the feature extraction unit is used for performing feature extraction on the first foreground target in each first video frame to obtain the feature of the first foreground target in each first video frame;
and the confidence score calculating unit is used for obtaining the confidence score of the first foreground object in each first video frame according to the characteristics of the first foreground object in each first video frame.
In one implementation, the feature of the first foreground object in each first video frame may include:
local binary pattern features and at least one of gradient features, contour features.
In an implementation manner, the confidence score calculating unit may be specifically configured to:
and respectively inputting the features of the first foreground target in each first video frame into a preset feature model to obtain a confidence score of the first foreground target in each first video frame, wherein the preset feature model is a model trained on positive and negative samples and used for judging the confidence of the target.
In one implementation, the confidence level determination module 205 may include:
the judging unit is used for judging whether the confidence score of the first foreground target in each first video frame is larger than a preset confidence score or not;
the statistic unit is used for counting the number of first video frames of which the confidence coefficient scores of the first foreground targets are larger than the preset confidence coefficient scores;
and the first determining unit is used for determining the first foreground target as the first foreground target meeting the preset confidence level condition if the counted number of the first video frames is greater than the preset number.
In one implementation, the historical motion trajectory determination module 208 may include:
the motion range determining unit is used for determining the motion range of each second foreground target according to the historical motion track of the second foreground target;
the motion range judging unit is used for judging whether the determined motion range exceeds a preset motion range or not, and if so, triggering the first intersection judging unit;
the first intersection judging unit is used for judging whether the historical motion track of the second foreground target is intersected with the alarm area or not, and if so, triggering a second determining unit;
and the second determining unit is used for determining the second foreground target as the second foreground target meeting the preset motion track condition.
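For illustration, a minimal sketch of this motion-range check is given below, assuming the historical motion track is a list of (x, y) points, the alarm area is a polygon, and the preset motion range is given as a width/height pair; Shapely is used only for the intersection test. None of these choices are prescribed by this embodiment.

```python
# Motion-range condition: the bounding box of the historical track must exceed a
# preset range in at least one dimension, and the track must cross the alarm area.
from shapely.geometry import LineString, Polygon

def satisfies_range_condition(track_points, alarm_polygon, preset_w, preset_h):
    xs, ys = zip(*track_points)
    range_w, range_h = max(xs) - min(xs), max(ys) - min(ys)
    if range_w <= preset_w and range_h <= preset_h:
        return False                                   # range does not exceed the preset range
    track = LineString(track_points)                   # historical motion track as a polyline
    return track.intersects(Polygon(alarm_polygon))    # must intersect the alarm area
```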
In one implementation, the historical motion trajectory determination module 208 may include:
the position determining unit is used for determining the starting position and the ending position of each second foreground target according to the historical motion track of the second foreground target;
a moving distance determining unit, configured to determine a moving distance of the second foreground object according to the starting position and the ending position;
the movement distance judging unit is used for judging whether the determined movement distance is larger than a preset movement distance threshold value or not, and if so, triggering the second intersection judging unit;
the second intersection judging unit is used for judging whether the historical motion track of the second foreground target is intersected with the alarm area or not, and if so, triggering a third determining unit;
and the third determining unit is used for determining the second foreground target as the second foreground target meeting the preset motion track condition.
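The alternative trajectory check based on movement distance can be sketched in the same way; taking the straight-line distance between the start and end positions as the movement distance is an assumption of this example.

```python
# Movement-distance condition: the straight-line distance between the start and
# end positions of the track must exceed a preset threshold, and the track must
# still intersect the alarm area.
import math
from shapely.geometry import LineString, Polygon

def satisfies_distance_condition(track_points, alarm_polygon, preset_distance):
    (x0, y0), (x1, y1) = track_points[0], track_points[-1]
    if math.hypot(x1 - x0, y1 - y0) <= preset_distance:
        return False
    return LineString(track_points).intersects(Polygon(alarm_polygon))
```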
In one implementation, the apparatus may further include:
and the identification module is used for, after the second foreground targets whose historical motion tracks meet the preset motion track condition are determined as moving targets, identifying through a deep learning algorithm whether a moving target exists among the suspected second foreground targets, i.e. the second foreground targets other than the determined moving targets.
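The embodiment does not name a particular network; purely to illustrate where this secondary check fits, the sketch below assumes a generic PyTorch classifier that outputs one logit per image patch, with the model and threshold left as assumptions.

```python
# Secondary check on "suspected" second foreground targets (those not already
# confirmed as moving targets): run a deep-learning classifier over their image
# patches and keep the ones scored above a threshold.
import torch

def verify_suspected_targets(patches, model, threshold=0.5):
    """patches: float tensor of shape (N, 3, H, W), one crop per suspected target."""
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(patches)).squeeze(-1)   # probability of "moving target"
    return (probs > threshold).tolist()
```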
An embodiment of the present invention further provides an electronic device, as shown in fig. 10, including a processor 1001 and a memory 1002, wherein:
a memory 1002 for storing a computer program;
the processor 1001 is configured to implement the following steps when executing the computer program stored in the memory 1002:
extracting a foreground target in a current video frame;
determining a first foreground target from the foreground targets;
for each first foreground target, obtaining a first video frame containing the first foreground target;
calculating a confidence score of the first foreground object in each first video frame;
judging whether the first foreground target meets a preset confidence condition or not according to the confidence score of the first foreground target in each first video frame;
determining the first foreground target meeting the preset confidence condition as a second foreground target;
acquiring a historical motion track of each second foreground target;
judging whether the historical motion track of each second foreground target meets a preset motion track condition or not;
and determining a second foreground target whose historical motion track meets the preset motion track condition as a moving target.
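Tying the steps above together, a rough end-to-end sketch is given below; it reuses the helper functions sketched earlier in this section, and the OpenCV background subtractor, the tracker object with its update() method, and the per-target patches/track attributes are hypothetical glue rather than parts of the claimed method. The two trajectory conditions are treated as alternatives, matching the two implementation manners described in this embodiment.

```python
# Rough end-to-end sketch of the processor steps above. Background subtraction,
# the tracker, and its per-target attributes are illustrative placeholders.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()

def process_frame(frame, tracker, feature_model, alarm_polygon):
    # Extract foreground targets in the current video frame.
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    foreground_boxes = [cv2.boundingRect(c) for c in contours]

    moving_targets = []
    for target in tracker.update(frame, foreground_boxes):      # hypothetical tracker API
        # Multi-frame confidence check over the first video frames of this target.
        scores = [feature_model.confidence_score(extract_features(patch))
                  for patch in target.patches]
        if not satisfies_confidence_condition(scores):
            continue                                             # not a second foreground target
        # Historical-motion-trajectory check (either condition may be used).
        if satisfies_range_condition(target.track, alarm_polygon, 20, 20) or \
           satisfies_distance_condition(target.track, alarm_polygon, 30):
            moving_targets.append(target)                        # determined as a moving target
    return moving_targets
```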
In the embodiment of the invention, foreground targets in a current video frame are extracted, and first foreground targets are determined from the foreground targets. For each first foreground target, the first video frames containing the first foreground target are obtained, a confidence score of the first foreground target in each first video frame is calculated, and whether the first foreground target meets a preset confidence condition is judged according to those confidence scores; the first foreground targets meeting the preset confidence condition are determined as second foreground targets. The historical motion track of each second foreground target is then obtained, for each second foreground target it is judged whether its historical motion track meets a preset motion track condition, and the second foreground targets whose historical motion tracks meet the preset motion track condition are determined as moving targets. In this way, the moving target is determined from the foreground targets through multi-frame confidence scoring combined with historical motion track judgment, rather than by simply treating every foreground target as a moving target, which greatly improves the accuracy of correct alarms.
In an implementation manner of the embodiment of the present invention, the step of determining the first foreground object from the foreground objects includes:
and determining, from the foreground targets, a first foreground target whose distance is smaller than a preset distance threshold value.
In an implementation manner of the embodiment of the present invention, the step of calculating the confidence score of the first foreground object in each first video frame includes:
in each first video frame, performing feature extraction on the first foreground target to obtain the feature of the first foreground target in each first video frame;
and obtaining the confidence score of the first foreground object in each first video frame according to the characteristics of the first foreground object in each first video frame.
In an implementation manner of the embodiment of the present invention, a feature of the first foreground object in each first video frame includes:
a local binary pattern feature, and at least one of a gradient feature and a contour feature.
In an implementation manner of the embodiment of the present invention, the step of obtaining, according to a feature of the first foreground target in each first video frame, a confidence score of the first foreground target in each first video frame includes:
and respectively inputting the features of the first foreground target in each first video frame into a preset feature model to obtain a confidence score of the first foreground target in each first video frame, wherein the preset feature model is a model trained on positive and negative samples and used for judging the confidence of the target.
In an implementation manner of the embodiment of the present invention, the step of determining whether the first foreground target meets a preset confidence level condition according to the confidence level score of the first foreground target in each first video frame includes:
judging whether the confidence score of the first foreground target in each first video frame is larger than a preset confidence score or not;
counting the number of first video frames with the confidence score of the first foreground target being larger than a preset confidence score;
and if the counted number of the first video frames is greater than the preset number, determining the first foreground target as the first foreground target meeting the preset confidence level condition.
In an implementation manner of the embodiment of the present invention, the step of determining, for each second foreground target, whether a historical motion trajectory of the second foreground target meets a preset motion trajectory condition includes:
for each second foreground target, determining the motion range of the second foreground target according to the historical motion track of the second foreground target;
judging whether the determined movement range exceeds a preset movement range or not;
if so, judging whether the historical motion track of the second foreground target is intersected with the alarm area;
and if the intersection exists, determining the second foreground target as the second foreground target meeting the preset motion track condition.
In an implementation manner of the embodiment of the present invention, the step of determining, for each second foreground target, whether a historical motion trajectory of the second foreground target meets a preset motion trajectory condition includes:
for each second foreground target, determining the starting position and the ending position of the second foreground target according to the historical motion track of the second foreground target;
determining the movement distance of the second foreground target according to the starting position and the ending position;
judging whether the determined movement distance is larger than a preset movement distance threshold value or not;
if so, judging whether the historical motion track of the second foreground target is intersected with the alarm area;
and if the intersection exists, determining the second foreground target as the second foreground target meeting the preset motion track condition.
In an implementation manner of the embodiment of the present invention, after the step of determining the second foreground object whose historical motion trajectory satisfies the preset motion trajectory condition as the moving object, the method further includes:
and for the suspected second foreground targets, i.e. the second foreground targets other than the determined moving target, identifying through a deep learning algorithm whether a moving target exists among them.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:
extracting a foreground target in a current video frame;
determining a first foreground target from the foreground targets;
for each first foreground target, obtaining a first video frame containing the first foreground target;
calculating a confidence score of the first foreground object in each first video frame;
judging whether the first foreground target meets a preset confidence condition or not according to the confidence score of the first foreground target in each first video frame;
determining the first foreground target meeting the preset confidence condition as a second foreground target;
acquiring a historical motion track of each second foreground target;
judging whether the historical motion track of each second foreground target meets a preset motion track condition or not;
and determining a second foreground target whose historical motion track meets the preset motion track condition as a moving target.
In the embodiment of the invention, foreground targets in a current video frame are extracted, and first foreground targets are determined from the foreground targets. For each first foreground target, the first video frames containing the first foreground target are obtained, a confidence score of the first foreground target in each first video frame is calculated, and whether the first foreground target meets a preset confidence condition is judged according to those confidence scores; the first foreground targets meeting the preset confidence condition are determined as second foreground targets. The historical motion track of each second foreground target is then obtained, for each second foreground target it is judged whether its historical motion track meets a preset motion track condition, and the second foreground targets whose historical motion tracks meet the preset motion track condition are determined as moving targets. In this way, the moving target is determined from the foreground targets through multi-frame confidence scoring combined with historical motion track judgment, rather than by simply treating every foreground target as a moving target, which greatly improves the accuracy of correct alarms.
In an implementation manner of the embodiment of the present invention, the step of determining the first foreground object from the foreground objects includes:
and determining, from the foreground targets, a first foreground target whose distance is smaller than a preset distance threshold value.
In an implementation manner of the embodiment of the present invention, the step of calculating the confidence score of the first foreground object in each first video frame includes:
in each first video frame, performing feature extraction on the first foreground target to obtain the feature of the first foreground target in each first video frame;
and obtaining the confidence score of the first foreground object in each first video frame according to the characteristics of the first foreground object in each first video frame.
In an implementation manner of the embodiment of the present invention, a feature of the first foreground object in each first video frame includes:
a local binary pattern feature, and at least one of a gradient feature and a contour feature.
In an implementation manner of the embodiment of the present invention, the step of obtaining, according to a feature of the first foreground target in each first video frame, a confidence score of the first foreground target in each first video frame includes:
and respectively inputting the features of the first foreground target in each first video frame into a preset feature model to obtain a confidence score of the first foreground target in each first video frame, wherein the preset feature model is a model trained on positive and negative samples and used for judging the confidence of the target.
In an implementation manner of the embodiment of the present invention, the step of determining whether the first foreground target meets a preset confidence level condition according to the confidence level score of the first foreground target in each first video frame includes:
judging whether the confidence score of the first foreground target in each first video frame is larger than a preset confidence score or not;
counting the number of first video frames with the confidence score of the first foreground target being larger than a preset confidence score;
and if the counted number of the first video frames is greater than the preset number, determining the first foreground target as the first foreground target meeting the preset confidence level condition.
In an implementation manner of the embodiment of the present invention, the step of determining, for each second foreground target, whether a historical motion trajectory of the second foreground target meets a preset motion trajectory condition includes:
for each second foreground target, determining the motion range of the second foreground target according to the historical motion track of the second foreground target;
judging whether the determined movement range exceeds a preset movement range or not;
if so, judging whether the historical motion track of the second foreground target is intersected with the alarm area;
and if the intersection exists, determining the second foreground target as the second foreground target meeting the preset motion track condition.
In an implementation manner of the embodiment of the present invention, the step of determining, for each second foreground target, whether a historical motion trajectory of the second foreground target meets a preset motion trajectory condition includes:
for each second foreground target, determining the starting position and the ending position of the second foreground target according to the historical motion track of the second foreground target;
determining the movement distance of the second foreground target according to the starting position and the ending position;
judging whether the determined movement distance is larger than a preset movement distance threshold value or not;
if so, judging whether the historical motion track of the second foreground target is intersected with the alarm area;
and if the intersection exists, determining the second foreground target as the second foreground target meeting the preset motion track condition.
In an implementation manner of the embodiment of the present invention, after the step of determining the second foreground object whose historical motion trajectory satisfies the preset motion trajectory condition as the moving object, the method further includes:
and for the suspected second foreground targets, i.e. the second foreground targets other than the determined moving target, identifying through a deep learning algorithm whether a moving target exists among them.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant points, reference may be made to the corresponding description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (19)

1. A moving object determining method, the method comprising:
extracting a foreground target in a current video frame;
determining a first foreground target from the foreground targets;
for each first foreground target, obtaining a first video frame containing the first foreground target;
calculating a confidence score of the first foreground object in each first video frame;
judging whether the first foreground target meets a preset confidence condition or not according to the confidence score of the first foreground target in each first video frame;
determining the first foreground target meeting the preset confidence condition as a second foreground target;
acquiring a historical motion track of each second foreground target;
judging whether the historical motion track of each second foreground target meets a preset motion track condition or not;
and determining a second foreground target whose historical motion track meets the preset motion track condition as a moving target.
2. The method of claim 1, wherein the step of determining a first foreground object from the foreground objects comprises:
and determining, from the foreground targets, a first foreground target whose distance is smaller than a preset distance threshold value.
3. The method of claim 1, wherein the step of calculating a confidence score for the first foreground object in each first video frame comprises:
in each first video frame, performing feature extraction on the first foreground target to obtain the feature of the first foreground target in each first video frame;
and obtaining the confidence score of the first foreground object in each first video frame according to the characteristics of the first foreground object in each first video frame.
4. The method of claim 3, wherein the feature of the first foreground object in each first video frame comprises:
a local binary pattern feature, and at least one of a gradient feature and a contour feature.
5. The method of claim 3, wherein the step of obtaining the confidence score of the first foreground object in each first video frame according to the feature of the first foreground object in each first video frame comprises:
and respectively inputting the features of the first foreground target in each first video frame into a preset feature model to obtain a confidence score of the first foreground target in each first video frame, wherein the preset feature model is a model trained on positive and negative samples and used for judging the confidence of the target.
6. The method of claim 1, wherein the step of determining whether the first foreground object satisfies a preset confidence condition according to the confidence score of the first foreground object in each first video frame comprises:
judging whether the confidence score of the first foreground target in each first video frame is larger than a preset confidence score or not;
counting the number of first video frames with the confidence score of the first foreground target being larger than a preset confidence score;
and if the counted number of the first video frames is greater than the preset number, determining the first foreground target as the first foreground target meeting the preset confidence level condition.
7. The method according to claim 1, wherein the step of determining, for each second foreground object, whether the historical motion track of the second foreground object meets a preset motion track condition includes:
for each second foreground target, determining the motion range of the second foreground target according to the historical motion track of the second foreground target;
judging whether the determined movement range exceeds a preset movement range or not;
if so, judging whether the historical motion track of the second foreground target is intersected with the alarm area;
and if the intersection exists, determining the second foreground target as the second foreground target meeting the preset motion track condition.
8. The method according to claim 1, wherein the step of determining, for each second foreground object, whether the historical motion track of the second foreground object meets a preset motion track condition includes:
for each second foreground target, determining the starting position and the ending position of the second foreground target according to the historical motion track of the second foreground target;
determining the movement distance of the second foreground target according to the starting position and the ending position;
judging whether the determined movement distance is larger than a preset movement distance threshold value or not;
if so, judging whether the historical motion track of the second foreground target is intersected with the alarm area;
and if the intersection exists, determining the second foreground target as the second foreground target meeting the preset motion track condition.
9. The method according to claim 1, wherein after the step of determining a second foreground object with the historical motion trajectory satisfying the preset motion trajectory condition as a moving object, the method further comprises:
and for the suspected second foreground targets, i.e. the second foreground targets other than the determined moving target, identifying through a deep learning algorithm whether a moving target exists among them.
10. A moving object determining apparatus, characterized in that the apparatus comprises:
the extraction module is used for extracting a foreground target in a current video frame;
the first foreground target determining module is used for determining a first foreground target from the foreground targets;
a first video frame obtaining module, configured to obtain, for each first foreground object, a first video frame including the first foreground object;
a confidence score calculation module for calculating a confidence score for the first foreground target in each first video frame;
the confidence judging module is used for judging whether the first foreground target meets a preset confidence condition according to the confidence score of the first foreground target in each first video frame;
the second foreground target determining module is used for determining the first foreground target meeting the preset confidence condition as a second foreground target;
the historical motion track acquisition module is used for acquiring the historical motion track of each second foreground target;
the historical motion track judging module is used for judging whether the historical motion track of each second foreground target meets a preset motion track condition or not;
and the moving target determining module is used for determining a second foreground target of which the historical moving track meets the preset moving track condition as a moving target.
11. The apparatus of claim 10, wherein the first foreground object determining module is specifically configured to:
and determining, from the foreground targets, a first foreground target whose distance is smaller than a preset distance threshold value.
12. The apparatus of claim 10, wherein the confidence score computation module comprises:
the feature extraction unit is used for performing feature extraction on the first foreground target in each first video frame to obtain the feature of the first foreground target in each first video frame;
and the confidence score calculating unit is used for obtaining the confidence score of the first foreground object in each first video frame according to the characteristics of the first foreground object in each first video frame.
13. The apparatus of claim 12, wherein the feature of the first foreground object in each first video frame comprises:
a local binary pattern feature, and at least one of a gradient feature and a contour feature.
14. The apparatus according to claim 12, wherein the confidence score calculating unit is specifically configured to:
and respectively inputting the features of the first foreground target in each first video frame into a preset feature model to obtain a confidence score of the first foreground target in each first video frame, wherein the preset feature model is a model trained on positive and negative samples and used for judging the confidence of the target.
15. The apparatus of claim 10, wherein the confidence determination module comprises:
the judging unit is used for judging whether the confidence score of the first foreground target in each first video frame is larger than a preset confidence score or not;
the statistic unit is used for counting the number of first video frames in which the confidence score of the first foreground target is larger than the preset confidence score;
and the first determining unit is used for determining the first foreground target as the first foreground target meeting the preset confidence level condition if the counted number of the first video frames is greater than the preset number.
16. The apparatus of claim 10, wherein the historical motion trajectory determining module comprises:
the motion range determining unit is used for determining the motion range of each second foreground target according to the historical motion track of the second foreground target;
the motion range judging unit is used for judging whether the determined motion range exceeds a preset motion range or not, and if so, triggering the first intersection judging unit;
the first intersection judging unit is used for judging whether the historical motion track of the second foreground target is intersected with the alarm area or not, and if so, triggering a second determining unit;
and the second determining unit is used for determining the second foreground target as the second foreground target meeting the preset motion track condition.
17. The apparatus of claim 10, wherein the historical motion trajectory determining module comprises:
the position determining unit is used for determining the starting position and the ending position of each second foreground target according to the historical motion track of the second foreground target;
a moving distance determining unit, configured to determine a moving distance of the second foreground object according to the starting position and the ending position;
the movement distance judging unit is used for judging whether the determined movement distance is larger than a preset movement distance threshold value or not, and if so, triggering the second intersection judging unit;
the second intersection judging unit is used for judging whether the historical motion track of the second foreground target is intersected with the alarm area or not, and if so, triggering a third determining unit;
and the third determining unit is used for determining the second foreground target as the second foreground target meeting the preset motion track condition.
18. The apparatus of claim 10, further comprising:
and the identification module is used for, after the second foreground targets whose historical motion tracks meet the preset motion track condition are determined as moving targets, identifying through a deep learning algorithm whether a moving target exists among the suspected second foreground targets, i.e. the second foreground targets other than the determined moving targets.
19. An electronic device comprising a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor, when executing the computer program stored in the memory, is configured to perform the method steps of any of claims 1-9.
CN201710769938.3A 2017-08-31 2017-08-31 Moving target determination method and device and electronic equipment Active CN109427073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710769938.3A CN109427073B (en) 2017-08-31 2017-08-31 Moving target determination method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710769938.3A CN109427073B (en) 2017-08-31 2017-08-31 Moving target determination method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109427073A CN109427073A (en) 2019-03-05
CN109427073B true CN109427073B (en) 2020-12-11

Family

ID=65504637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710769938.3A Active CN109427073B (en) 2017-08-31 2017-08-31 Moving target determination method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109427073B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807377B (en) * 2019-10-17 2022-08-09 浙江大华技术股份有限公司 Target tracking and intrusion detection method, device and storage medium
CN113487821A (en) * 2021-07-30 2021-10-08 重庆予胜远升网络科技有限公司 Power equipment foreign matter intrusion identification system and method based on machine vision

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049751A (en) * 2013-01-24 2013-04-17 苏州大学 Improved weighting region matching high-altitude video pedestrian recognizing method
CN103324937A (en) * 2012-03-21 2013-09-25 日电(中国)有限公司 Method and device for labeling targets
CN103914702A (en) * 2013-01-02 2014-07-09 国际商业机器公司 System and method for boosting object detection performance in videos
CN104658011A (en) * 2015-01-31 2015-05-27 北京理工大学 Intelligent transportation moving object detection tracking method
CN106022249A (en) * 2016-05-16 2016-10-12 乐视控股(北京)有限公司 Dynamic object identification method, device and system
WO2017044550A1 (en) * 2015-09-11 2017-03-16 Intel Corporation A real-time multiple vehicle detection and tracking

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231755B (en) * 2007-01-25 2013-03-06 上海遥薇(集团)有限公司 Moving target tracking and quantity statistics method
US9524426B2 (en) * 2014-03-19 2016-12-20 GM Global Technology Operations LLC Multi-view human detection using semi-exhaustive search
CN107103268A (en) * 2016-02-23 2017-08-29 中国移动通信集团浙江有限公司 A kind of method for tracking target and device
CN105809714A (en) * 2016-03-07 2016-07-27 广东顺德中山大学卡内基梅隆大学国际联合研究院 Track confidence coefficient based multi-object tracking method
CN106127802B (en) * 2016-06-16 2018-08-28 南京邮电大学盐城大数据研究院有限公司 A kind of movement objective orbit method for tracing

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324937A (en) * 2012-03-21 2013-09-25 日电(中国)有限公司 Method and device for labeling targets
CN103914702A (en) * 2013-01-02 2014-07-09 国际商业机器公司 System and method for boosting object detection performance in videos
CN103049751A (en) * 2013-01-24 2013-04-17 苏州大学 Improved weighting region matching high-altitude video pedestrian recognizing method
CN104658011A (en) * 2015-01-31 2015-05-27 北京理工大学 Intelligent transportation moving object detection tracking method
WO2017044550A1 (en) * 2015-09-11 2017-03-16 Intel Corporation A real-time multiple vehicle detection and tracking
CN106022249A (en) * 2016-05-16 2016-10-12 乐视控股(北京)有限公司 Dynamic object identification method, device and system

Also Published As

Publication number Publication date
CN109427073A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN107123131B (en) Moving target detection method based on deep learning
CN104303193B (en) Target classification based on cluster
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
CN109145742B (en) Pedestrian identification method and system
CN108229256B (en) Road construction detection method and device
CN111210399B (en) Imaging quality evaluation method, device and equipment
CN110706247B (en) Target tracking method, device and system
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
Wang et al. Real-time camera anomaly detection for real-world video surveillance
CN109670383B (en) Video shielding area selection method and device, electronic equipment and system
CN112733814B (en) Deep learning-based pedestrian loitering retention detection method, system and medium
CN106778742B (en) Car logo detection method based on Gabor filter background texture suppression
CN111435436B (en) Perimeter anti-intrusion method and device based on target position
CN111259718B (en) Staircase retention detection method and system based on Gaussian mixture model
CN109427073B (en) Moving target determination method and device and electronic equipment
CN111639653A (en) False detection image determining method, device, equipment and medium
CN112417955A (en) Patrol video stream processing method and device
Hariyanto et al. Comparative study of tiger identification using template matching approach based on edge patterns
CN111444758A (en) Pedestrian re-identification method and device based on spatio-temporal information
CN110516538B (en) Prison double off-duty violation assessment method based on deep learning target detection
CN112257520A (en) People flow statistical method, device and system
CN108537105B (en) Dangerous behavior identification method in home environment
CN117333542A (en) Position detection method and device
CN114419489A (en) Training method and device for feature extraction network, terminal equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant