CN110706227A - Article state detection method, system, terminal device and storage medium - Google Patents
- Publication number
- CN110706227A (application number CN201910974637.3A)
- Authority
- CN
- China
- Prior art keywords
- foreground object
- object region
- image data
- region
- video image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/194 — Segmentation; edge detection involving foreground-background segmentation
- G06T2207/10016 — Indexing scheme for image analysis or enhancement; image acquisition modality: video; image sequence
Abstract
The invention is suitable for the technical field of image detection and provides an article state detection method, system, terminal device and storage medium. The method comprises: acquiring the f-th frame of video image data, wherein f is a positive integer; performing foreground object detection on the f-th frame of video image data to obtain a first foreground object region, the first foreground object region being the foreground object region in the f-th frame of video image data; comparing the first foreground object region with a pre-cached second foreground object region to obtain a comparison result, the second foreground object region being the foreground object region in the (f-x)-th frame of video image data; and judging the article state according to the comparison result. The detection method in this scheme consumes fewer resources and executes quickly.
Description
Technical Field
The invention belongs to the technical field of image detection, and particularly relates to a method, a system, a terminal device and a storage medium for detecting the state of an article.
Background
At present, left-behind and taken articles are usually detected by dual-background modeling. A long-period background model can detect both frequently moving objects and objects that stay for a short period, whereas a short-period background model can only detect frequently moving objects, because objects that stay briefly are quickly absorbed into its background. Based on this characteristic, a left-behind article can be distinguished by comparing the long-period and short-period background images.
However, the dual-background modeling technique has drawbacks: the background model detects the foreground by monitoring changes in the image data of the background picture, so the image data of the entire background must be monitored, and article-leave detection based on dual-background modeling must maintain two background modeling processes simultaneously. As a result, the related algorithm executes slowly and requires a large amount of storage space.
Disclosure of Invention
In view of this, embodiments of the present invention provide an article status detection method, system, terminal device and storage medium, so as to solve the problems in the prior art that the execution of the related algorithm is slow and the required storage space is large.
A first aspect of an embodiment of the present invention provides an article status detection method, including:
acquiring the f-th frame of video image data, wherein f is a positive integer;
performing foreground object detection on the f-th frame of video image data to obtain a first foreground object region, wherein the first foreground object region is the foreground object region in the f-th frame of video image data;
comparing the first foreground object region with a pre-cached second foreground object region to obtain a comparison result, wherein the second foreground object region is the foreground object region in the (f-x)-th frame of video image data, and x is a positive integer;
and judging the state of the article according to the comparison result.
In one embodiment, the performing foreground object detection on the f-th frame of video image data to obtain a first foreground object region includes:
acquiring background image data of a background model;
comparing the f-th frame of video image data with the background image data;
and determining the area which is changed relative to the background image data as the first foreground object area.
In an embodiment, the comparing the first foreground object region with the pre-cached second foreground object region to obtain a comparison result includes:
calculating the size and position of the first foreground object region;
acquiring the size and the position of the second foreground object region;
calculating the intersection ratio of the first foreground object region and the second foreground object region;
if the intersection ratio of the first foreground object area and the second foreground object area is larger than a first threshold value, determining that the foreground object is in a static state;
and if the intersection ratio of the first foreground object region and the second foreground object region is smaller than the first threshold and larger than a second threshold, determining that the foreground object is in a motion state.
In one embodiment, after the obtaining the size and the position of the second foreground object region, the method further includes:
and if the foreground object is judged to be in the motion state, updating the size and the position of the second foreground object area to the size and the position of the first foreground object area.
In one embodiment, after the obtaining the size and the position of the second foreground object region, the method further includes:
if the intersection ratio of the first foreground object region and the second foreground object region is smaller than the second threshold value, determining that a new foreground object appears;
and establishing a new cache region to cache the new foreground object.
In one embodiment, the determining the article status according to the comparison result includes:
when the comparison result meets a first preset condition, increasing a first counting value by one counting unit;
when the comparison result meets a second preset condition, increasing a second counting value by one counting unit;
and judging the article state according to the first counting value and the second counting value.
In one embodiment, said determining the item status from said first count value and said second count value comprises:
if the first counting value is larger than a preset first counting threshold, determining that article leaving has occurred;
and if the second counting value is larger than a preset second counting threshold, determining that article taking has occurred.
A second aspect of an embodiment of the present invention provides an article status detection system, including:
the acquisition module is used for acquiring the f-th frame of video image data, wherein f is a positive integer;
a detection module, configured to perform foreground object detection on the f-th frame of video image data to obtain a first foreground object region, where the first foreground object region is a foreground object region in the f-th frame of video image data;
the comparison module is used for comparing the first foreground object region with a pre-cached second foreground object region to obtain a comparison result, wherein the second foreground object region is the foreground object region in the (f-x)-th frame of video image data;
and the state judgment module is used for judging the state of the article according to the comparison result.
A third aspect of an embodiment of the present invention provides a terminal device, including:
a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the item status detection method according to the first aspect when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the item status detection method according to the first aspect.
The embodiment of the invention has the beneficial effects that:
according to the article state detection method, the article state detection system, the terminal device and the storage medium, the motion state of the foreground object is judged by comparing the size and the position of the cache region of the foreground object detected by the long-period background model of the adjacent frames, and the technical effect same as that of the double-background modeling technology is achieved through the method.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an implementation of an article status detection method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a detailed example of foreground object detection performed on the f-th frame of video image data according to the embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a state determination process of an article state detection method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an article status detection system provided by an embodiment of the present invention;
fig. 5 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover non-exclusive inclusions. For example, a process, method, or system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
Example one
As shown in fig. 1, the article status detection method provided by the embodiment of the present invention includes:
s101: and acquiring the f frame video image data.
Wherein f is a positive integer. In application, monitoring image data must be acquired before a foreground object can be detected; the monitoring image may be read from a storage device that stores monitoring images, or monitoring image data transmitted from a monitoring camera may be received directly.
The article state detection technology in this application essentially analyses the monitoring image data: first the background image data in the background model is obtained, then the incoming f-th frame of monitoring image data is analysed to detect whether any region of the f-th frame contains changed pixels; an image region whose pixels have changed is determined to be a foreground region.
S102: and carrying out foreground object detection on the f frame video image data to obtain a first foreground object area.
The first foreground object region is a foreground object region in the f-th frame of video image data.
As shown in fig. 2, step S102 may specifically include the following processes:
s201: background image data of the background model is acquired.
S202: and comparing the f frame video image data with the background image data.
In application, foreground object detection is completed by a background model. The background model adopted in this application is a long-period background model, i.e. one with a longer update interval; compared with a short-period model, the long-period background model can detect not only moving objects but also static objects that were not originally part of the background image. The background model is updated at intervals: at each update, the time for which a foreground has existed in the picture is measured. If a foreground pixel has existed long enough, it is absorbed into the background; if it has not existed long enough, it is still considered foreground.
S203: and determining the area which is changed relative to the background image data as the first foreground object area.
The first foreground object area is a buffer area of a foreground object detected in the f-th frame of monitoring image.
In application, taking pixels as an example, the foreground detection method needs to examine the image pixels of all regions of the monitored image. To save storage space and improve detection efficiency, dynamic regions and static regions can be analysed differently. In a monitored scene, usually only a small part of the picture is dynamic while most of it is static. The pixels of a stable (static) region keep essentially the same values, close to the corresponding pixel values of the background image in the background model; pixels that differ markedly from the background image can be judged to belong to dynamic regions. Since the pixels of a stable region barely change between adjacent frames, the background parameter model of those pixels does not need to be updated every frame, whereas dynamic regions must be detected and compared more frequently to determine whether they are in continuous motion. Analysing dynamic and static regions differently thus yields targeted, efficient foreground detection.
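As an illustrative sketch (not taken from the patent), the comparison of a frame against the background image described above can be approximated as a thresholded absolute difference, with the changed pixels bounding-boxed into a candidate foreground object region. The function name and the difference threshold of 30 are assumptions for illustration:

```python
import numpy as np

def detect_foreground_region(frame, background, diff_threshold=30):
    """Return the bounding box (x, y, w, h) of the changed area, or None.

    Sketch only: grayscale absolute difference against the background
    image, thresholded to a binary change mask, then bounding-boxed.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > diff_threshold          # pixels judged "changed"
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                       # no foreground detected
    x, y = xs.min(), ys.min()
    return (int(x), int(y), int(xs.max() - x + 1), int(ys.max() - y + 1))
```

A real implementation would additionally split the mask into connected components so that each foreground object gets its own region; the single bounding box here is the simplest possible version.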
Existing monitoring video runs at 30 to 60 frames per second. At such a frame rate, requiring the background model to analyse every frame in order to identify foreground objects creates a heavy workload and low processing efficiency. Unless an object in the monitored image moves at very high speed, the change between consecutive frames is tiny, so pixel analysis generally does not need to be performed frame by frame; the image can be analysed, for example, every 5 frames (5 is only an example; more or fewer frames may be used as required). The background model then does not need to update its parameters for every frame of monitoring image data, and reducing the parameter update frequency of the background model effectively improves the computational efficiency of the algorithm.
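The reduced analysis frequency described above can be sketched as a loop that invokes detection only on every x-th frame; `process_stream` and its arguments are hypothetical names, not from the patent:

```python
def process_stream(frames, detect, x=5):
    """Run foreground detection only on every x-th frame.

    `frames` is any iterable of frame data, `detect` is a detection
    callable; intermediate frames are skipped to cut the workload.
    Returns a dict mapping analysed frame index -> detection result.
    """
    results = {}
    for f, frame in enumerate(frames):
        if f % x == 0:          # analyse only every x-th frame
            results[f] = detect(frame)
    return results
```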
S103: and comparing the first foreground object region with a second foreground object region cached in advance to obtain a comparison result, wherein the second foreground object region is a foreground object region in the f-x frame video image data.
As shown in fig. 3, step S103 may specifically include the following processes:
s301: calculating the size and position of the first foreground object region.
In application, after a foreground object is detected it must be cached in a preset cache region to obtain the foreground object cache region of the current frame; the foreground object cache regions of adjacent analysed frames are then compared to obtain a comparison result. Taking the first and second foreground objects as an example: the first foreground object region is the foreground object cache region of the f-th frame. Once the first foreground object region is obtained, its size and position are calculated, and then the size and position of the second foreground object region are obtained; the second foreground object region is the foreground object cache region in the (f-x)-th frame of video image. The background model detects foreground objects once every x frames, so if detection is performed every 5 frames the second foreground object region is the foreground object cache region in the (f-5)-th frame of video image. The sizes and positions of the first and second foreground object regions are compared to obtain the comparison result; of course, if no second foreground object region exists, no comparison is made. In general, if the background model detects a foreground object every x frames, where x is a positive integer, the foreground object cache region of the f-th frame can only be compared with the cache regions of the (f+x)-th and (f-x)-th frames; in this embodiment, the cache regions of the f-th and (f-x)-th frames are compared to obtain the comparison result.
S302: and acquiring the size and the position of the second foreground object region.
In application, if the size difference between the first and second foreground object regions is within a preset range, the two regions cache the same foreground object. If the size difference exceeds the preset range, the first foreground object region may cache a newly appeared foreground object, or the size of the foreground object may have been miscalculated, or the foreground object may not have been detected completely.
S303: calculating an intersection ratio of the first foreground object region and the second foreground object region.
Wherein the intersection ratio (intersection over union) of the first foreground object region and the second foreground object region is the ratio of the intersection of the two regions to their union; for example, the intersection ratio of A and B is (A ∩ B)/(A ∪ B).
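For two axis-aligned rectangular regions stored as (x, y, width, height), the intersection ratio defined above can be computed as follows; this helper is an illustrative sketch rather than the patent's implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extent along each axis (0 if the boxes are disjoint).
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```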
S304: and if the intersection ratio of the first foreground object region and the second foreground object region is greater than a first threshold value, determining that the foreground object is in a static state.
In application, if the intersection ratio of the first and second foreground object regions is greater than the first threshold, their overlapping area is large. Even a stationary object may appear slightly different between frames in a monitored scene because of external conditions such as illumination, camera noise or instability of the photosensitive device; taking these factors into account, if the intersection ratio of the two regions exceeds a certain threshold (the first threshold), the object can be judged to be in a static state.
The first threshold is empirical data, and may be set to 0.9 in this embodiment, that is, if the intersection ratio of the first foreground object region and the second foreground object region is greater than 0.9, the object is considered to be in a stationary state.
S305: and if the intersection ratio of the first foreground object region and the second foreground object region is smaller than the first threshold and larger than a second threshold, determining that the foreground object is in a motion state.
If the intersection ratio of the first and second foreground object regions falls within a preset range (i.e. smaller than the first threshold and larger than the second threshold), the two regions can be judged to cache the same foreground object. Since the first and second foreground object regions are the foreground object regions in the f-th and (f-x)-th frames of video image data respectively, a displacement of the first region relative to the second indicates that the foreground object is in a motion state.
In one embodiment, after S302, further comprising:
and if the foreground object is judged to be in the motion state, updating the size and the position of the second foreground object area to the size and the position of the first foreground object area.
When the foreground object is in a motion state, the region it occupies in the video image data changes as it moves. The second foreground object region is the foreground object region in the (f-x)-th frame of video image data, i.e. the old foreground object region, while the first foreground object region is the foreground object region in the f-th frame, i.e. the new foreground object region. To keep the monitoring process continuous, the old foreground object region must be continually updated to the new one, i.e. the size and position of the second foreground object region are updated to the size and position of the first foreground object region.
The second threshold is used to judge whether the foreground object in the f-th frame of video image data and the foreground object in the (f-x)-th frame are the same object. When the intersection ratio of the first and second foreground object regions is smaller than the second threshold, the two foreground objects differ greatly and are not the same object; when it is larger than the second threshold, they differ little and are the same object. On the premise that they are the same object: when the intersection ratio is larger than the first threshold, the object has barely changed between the two frames and can be considered static; when the intersection ratio is smaller than the first threshold, the object has changed to some extent between the two frames and can be considered to be in motion. Like the first threshold, the second threshold is empirical data and may be set to 0.7 in this embodiment.
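Combining the two thresholds, the per-comparison decision can be sketched as follows, using the example values 0.9 and 0.7 from this embodiment; the function names and the list-based cache are illustrative assumptions:

```python
def classify_state(ratio, static_thr=0.9, same_thr=0.7):
    """Map an intersection ratio to an object state.

    > static_thr          -> same object, essentially unmoved ("static")
    same_thr .. static_thr -> same object, displaced ("moving")
    < same_thr            -> not the same object ("new_object")
    """
    if ratio > static_thr:
        return "static"
    if ratio > same_thr:
        return "moving"
    return "new_object"

def update_cache(cache, ratio, first_region):
    """Apply the caching rules: a moving object replaces its old cached
    region; a new object gets a newly created cache entry."""
    state = classify_state(ratio)
    if state == "moving":
        cache[0] = first_region      # update old region to the new one
    elif state == "new_object":
        cache.append(first_region)   # establish a new cache region
    return state
```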
Optionally, after S302, the method further includes:
and if the intersection ratio of the first foreground object region and the second foreground object region is smaller than the second threshold value, determining that a new foreground object appears.
And establishing a new cache region to cache the new foreground object.
In application, the cache region of a new foreground object necessarily differs in size, position or both from the cache regions of previously detected foreground objects, so whether a foreground object is new can be judged effectively by comparing its size and position with those of the previous foreground object regions.
Comparing the cache regions of the same foreground object across different frames, as above, is only one article state detection method; detection may also proceed as follows. When a foreground object is detected in the f-th frame of the monitoring image, its size and position are calculated first, and the cache is then checked for an existing cache region for that foreground object. If none exists, a new cache region is created to cache the foreground object; if one exists, the foreground object is compared with the existing cache region and its state is judged. After this process, the size and position of the existing cache region are updated to the size and position of the foreground object in the f-th frame.
Because the above steps require calculating the size of the foreground object's cache region, the foreground object must be detected accurately, and shadows are a major source of interference in foreground detection. Existing shadow suppression algorithms are mainly based on colour space; they suppress weak shadows effectively but perform poorly on strong shadows. When an object is under strong light, a shadow connected to it appears at the corresponding position; the shadow region differs obviously in colour from non-shadow regions, while colour differences within the shadow region itself are not obvious. Based on this characteristic, learning and updating of strong-shadow pixels can be added to the background model to detect strong shadows: shadow detection is performed again inside the foreground object region already processed by the colour-space shadow suppression algorithm, regions whose pixels approximate the strong-shadow pixels and where such pixels cluster together are determined to be shadow regions, and the shadow is separated from the foreground object.
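A minimal sketch of the colour-space shadow test mentioned above (the classic HSV criterion: hue and saturation close to the background while brightness drops); all threshold values here are illustrative assumptions, not taken from the patent, and hue wraparound is ignored for simplicity:

```python
import numpy as np

def shadow_mask(frame_hsv, background_hsv,
                v_low=0.5, v_high=0.95, h_tol=10, s_tol=40):
    """Boolean mask of pixels classified as shadow.

    A pixel is shadow if its brightness (V) is a fraction of the
    background's within [v_low, v_high] while hue (H) and saturation (S)
    stay close to the background values.
    """
    h_f, s_f, v_f = (frame_hsv[..., i].astype(np.float32) for i in range(3))
    h_b, s_b, v_b = (background_hsv[..., i].astype(np.float32) for i in range(3))
    ratio = v_f / np.maximum(v_b, 1e-6)          # brightness attenuation
    return ((ratio >= v_low) & (ratio <= v_high)
            & (np.abs(h_f - h_b) <= h_tol)
            & (np.abs(s_f - s_b) <= s_tol))
```

Pixels flagged by this mask inside a detected foreground region would be removed before the region's size and position are computed.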
In application, when the (f-x)-th frame of the monitoring image captures only part of a foreground object while the f-th frame captures more of it, comparing the second foreground object region with the first reveals that the second is a subset of the first, i.e. the first foreground object region completely encloses the second. Foreground object detection can then be performed again and the size and position of the second foreground object region updated to those of the newly detected foreground object region; by continually repeating this process, the complete foreground object will eventually be detected in some frame after the f-th frame.
The above describes detection while an object gradually enters the monitored image. For an object gradually leaving the monitored image, the size of the corresponding incomplete image can be updated without updating the position. For example, if the (f-x)-th frame captures a complete foreground object but the f-th frame captures only part of it, the size of the f-th frame's foreground object cache region is updated to the size of the (f-x)-th frame's cache region, but the position of the f-th frame is not updated.
S104: and judging the state of the article according to the comparison result.
In one embodiment, S104 includes:
when the comparison result meets a first preset condition, increasing a first counting value by one counting unit;
the first preset condition is that the foreground object is in a static state and the foreground object originally does not belong to a part of the background.
And when the comparison result meets a second preset condition, increasing a second counting value by one counting unit.
The second preset condition is that the foreground object is in a motion state and the foreground object was part of the background while it was static.
The article status may then be determined based on the first count value and the second count value.
And if the first counting value is larger than a preset first counting threshold value, determining that article leaving occurs.
In application, the first count value is increased only when the foreground object is static in both of the two compared frames, and it is reset to its initial value when the object's state differs between the two frames. Therefore, when the first count value associated with a foreground object exceeds the first count threshold, the foreground object can be judged to have remained continuously static, and it is determined that the object has been left behind.
And if the second counting value is larger than a preset second counting threshold value, determining that article taking occurs.
In application, if the foreground object is determined to be in a motion state and to have belonged to the background while it was static, the second count value associated with the foreground object is increased by one or more count units; alternatively, it may be decreased by one or more count units, the rule being set according to the specific application context.
When the second count value associated with the foreground object is greater than the second count threshold, it may be determined that the foreground object is in a continuous or intermittent motion state, and thus it may be determined that the foreground object is taken.
The first count threshold and the second count threshold are configurable parameters; in this embodiment, both are preferably set to 10N, where N is the number of frames per second.
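The two-counter logic of S104 can be sketched as follows. The flag names `stationary` and `was_background` stand in for the comparison result and are illustrative, not taken from the patent; the reset behaviour follows the description above.

```python
def update_counters(stationary, was_background, first_count, second_count):
    """One update step for the two counters, per frame-pair comparison."""
    if stationary and not was_background:
        first_count += 1                 # first preset condition met
    else:
        first_count = 0                  # reset when the state changes
    if not stationary and was_background:
        second_count += 1                # second preset condition met
    return first_count, second_count

def judge_state(first_count, second_count, fps):
    """Both thresholds are 10*N, with N frames per second, as in the text."""
    threshold = 10 * fps
    if first_count > threshold:
        return 'left'
    if second_count > threshold:
        return 'taken'
    return 'undetermined'
```

With a 25 fps stream, an object must therefore hold one state for more than 250 comparisons before a left/taken event is declared, which suppresses transient detections.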
A specific example of the object state detection process is as follows:
the method comprises the steps of firstly, receiving video image data of an nth frame, wherein n is a positive integer;
secondly, acquiring an nth frame foreground object detected by a preset background model from an nth frame video image;
thirdly, establishing a new cache region for the foreground object of the nth frame to cache the foreground object;
fourthly, calculating the size and the position of a foreground object cache region of the nth frame;
fifthly, repeating the steps from the first step to the fourth step to calculate the size and the position of the (n + x) th frame foreground object cache region;
sixthly, comparing the size and the position of the n frame foreground object cache region with the size and the position of the n + x frame foreground object cache region to obtain a comparison result;
seventhly, when the comparison result meets a preset first condition or a preset second condition, increasing a preset first counting value or a preset second counting value by one counting unit;
and eighthly, judging that an article has been left behind or taken when the first count value exceeds the preset first count threshold or the second count value exceeds the preset second count threshold, respectively.
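The eight steps above can be sketched as a single loop. This is a hedged, simplified sketch: `detect_foreground(frame)` stands in for the preset background model and is assumed to return one bounding box `(x, y, w, h)` or `None`; the single-object handling and the 0.8 IoU threshold are illustrative choices, not values from the patent.

```python
def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    inter = max(0, w) * max(0, h)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def detect_item_state(frames, detect_foreground, fps, iou_thresh=0.8):
    """Loop over frames, mirroring steps one to eight of the example."""
    cache = None
    first_count = second_count = 0
    threshold = 10 * fps                     # both count thresholds, per the text
    for frame in frames:
        box = detect_foreground(frame)       # steps one and two
        if box is None:
            continue
        if cache is None:
            cache = box                      # steps three and four: new cache region
            continue
        if iou(cache, box) > iou_thresh:     # step six: compare with cached region
            first_count += 1                 # step seven: first condition met
        else:
            first_count = 0
            second_count += 1                # step seven: second condition met
            cache = box                      # follow the moving object
        if first_count > threshold:          # step eight
            return 'left'
        if second_count > threshold:
            return 'taken'
    return 'undetermined'
```

A stationary box eventually yields `'left'`, while a box that jumps between frames yields `'taken'`.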
The article state detection method provided by this embodiment of the invention judges the motion state of a foreground object by comparing the size and position of the foreground object cache regions detected by a long-period background model in adjacent frames. This achieves the same technical effect as the dual-background modeling technique, with the difference that the foreground object detected by the long-period background model is cached directly in a preset cache region: the method attends to objects rather than pixels and does not need to monitor the background as a short-period background model does. Compared with dual-background modeling, the method therefore consumes fewer resources, executes its algorithm faster, and detects faster.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example two
Referring to fig. 4, the present embodiment provides an article status detection system for executing the method steps of the first embodiment. The article status detection system includes: an acquisition module 41, a detection module 42, a comparison module 43, and a state determination module 44.
An obtaining module 41, configured to obtain the f-th frame video image data, where f is a positive integer;
a detection module 42, configured to perform foreground object detection on the f-th frame of video image data to obtain a first foreground object region, where the first foreground object region is a foreground object region in the f-th frame of video image data;
a comparison module 43, configured to compare the first foreground object region with a second foreground object region cached in advance, to obtain a comparison result, where the second foreground object region is a foreground object region in the f-x frame video image data;
and the state judgment module 44 is used for judging the state of the article according to the comparison result.
In one embodiment, the detection module 42 includes:
a first acquisition unit configured to acquire background image data of a background model;
a comparison unit for comparing the f-th frame video image data with the background image data;
a first determination unit configured to determine a region that changes with respect to the background image data as the first foreground object region.
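The three units above (acquire background image data, compare the frame against it, take the changed region as the first foreground object region) can be sketched with simple frame differencing. This is a minimal sketch assuming grayscale NumPy arrays and a single connected change; a production background model (for example a Gaussian mixture) would instead adapt over time.

```python
import numpy as np

def foreground_region(frame, background, diff_thresh=25):
    """Return the bounding box (x, y, w, h) of the region that changed
    relative to the background image, or None if nothing changed.

    The diff_thresh value is illustrative, not taken from the patent.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > diff_thresh                # pixels that differ from the background
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    x, y = xs.min(), ys.min()
    return (int(x), int(y), int(xs.max() - x + 1), int(ys.max() - y + 1))
```

The returned box is what the embodiments cache and compare across frames.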
In one embodiment, the comparison module 43 comprises:
a first calculation unit for calculating the size and position of the first foreground object region;
a second obtaining unit, configured to obtain a size and a position of the second foreground object region;
a second calculation unit configured to calculate an intersection ratio of the first foreground object region and the second foreground object region;
a second determination unit configured to determine that the foreground object is in a stationary state when an intersection ratio of the first foreground object region and the second foreground object region is greater than a first threshold;
and the third judging unit is used for judging that the foreground object is in a motion state when the intersection ratio of the first foreground object area and the second foreground object area is smaller than the first threshold and larger than the second threshold.
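The intersection-over-union comparison performed by the second calculation unit and the three judging units can be sketched as follows. The 0.8 and 0.2 threshold values are illustrative; the patent leaves the first and second thresholds unspecified.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) regions."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    inter = max(0, w) * max(0, h)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def classify(iou_value, first_threshold=0.8, second_threshold=0.2):
    """Map an IoU value to the three outcomes of the judging units."""
    if iou_value > first_threshold:
        return 'stationary'      # second determination unit
    if iou_value > second_threshold:
        return 'moving'          # third judging unit
    return 'new object'         # fifth judging unit
```

Identical regions give IoU 1.0 (stationary); partially overlapping regions fall in the moving band; disjoint regions give 0.0 and trigger a new cache region.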
In one embodiment, the comparison module 43 further comprises:
and the fourth judging unit is used for updating the size and the position of the second foreground object area to the size and the position of the first foreground object area if the foreground object is judged to be in the motion state.
Optionally, the comparison module 43 further includes:
a fifth judging unit, configured to judge that a new foreground object appears if an intersection ratio of the first foreground object region and the second foreground object region is smaller than the second threshold;
and the cache unit is used for establishing a new cache region to cache a new foreground object.
Further, the state determination module 44 further includes:
a sixth judging unit, configured to increase the first count value by one count unit when the comparison result meets a first preset condition;
a seventh judging unit, configured to increase the second count value by one count unit when the comparison result meets a second preset condition;
and the state judging unit is used for judging the state of the article according to the first counting value and the second counting value.
In one embodiment, the state determination unit further includes:
the left-over judging subunit is used for judging that article left-over occurs if the first counting value is greater than a preset first counting threshold value;
and the taking judgment subunit is used for judging that article taking occurs if the second counting value is greater than a preset second counting threshold value.
The article state detection system provided by this embodiment of the invention judges the motion state of a foreground object by comparing the size and position of the foreground object cache regions detected by a long-period background model in adjacent frames. This achieves the same technical effect as the dual-background modeling technique, with the difference that the foreground object detected by the long-period background model is cached directly in a preset cache region: the system attends to objects rather than pixels and does not need to monitor the background as a short-period background model does. Compared with dual-background modeling, the system therefore consumes fewer resources, executes its algorithm faster, and detects faster.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
EXAMPLE III
As shown in fig. 5, the terminal device 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52, such as a foreground object detection program, stored in said memory 51 and executable on said processor 50. The processor 50, when executing the computer program 52, implements the steps in the above-described respective article status detection method embodiments, such as S101 to S104 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 41 to 44 shown in fig. 4.
Illustratively, the computer program 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 52 in the terminal device 5. For example, the computer program 52 may be divided into an obtaining module, a detecting module, a comparing module, and a state judging module, whose specific functions are as follows:
the acquisition module is used for acquiring the f frame video image data, wherein f is a positive integer;
a detection module, configured to perform foreground object detection on the f-th frame of video image data to obtain a first foreground object region, where the first foreground object region is a foreground object region in the f-th frame of video image data;
the comparison module is used for comparing the first foreground object region with a second foreground object region which is cached in advance to obtain a comparison result, wherein the second foreground object region is a foreground object region in the f-x frame video image data;
and the state judgment module is used for judging the state of the article according to the comparison result.
The terminal device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of a terminal device 5 and does not constitute a limitation of terminal device 5 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 50 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing the computer program and other programs and data required by the terminal device. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. An article condition detection method, comprising:
acquiring the f frame video image data, wherein f is a positive integer;
performing foreground object detection on the f frame video image data to obtain a first foreground object area, wherein the first foreground object area is a foreground object area in the f frame video image data;
comparing the first foreground object region with a second foreground object region cached in advance to obtain a comparison result, wherein the second foreground object region is a foreground object region in the f-x frame video image data, and x is a positive integer;
and judging the state of the article according to the comparison result.
2. The method for detecting the status of an article according to claim 1, wherein the performing foreground object detection on the f-th frame of video image data to obtain a first foreground object area comprises:
acquiring background image data of a background model;
comparing the f frame video image data with the background image data;
and determining the area which is changed relative to the background image data as the first foreground object area.
3. The item state detection method according to claim 1, wherein the comparing the first foreground object region with a pre-cached second foreground object region to obtain a comparison result comprises:
calculating the size and position of the first foreground object region;
acquiring the size and the position of the second foreground object region;
calculating the intersection ratio of the first foreground object region and the second foreground object region;
if the intersection ratio of the first foreground object area and the second foreground object area is larger than a first threshold value, determining that the foreground object is in a static state;
and if the intersection ratio of the first foreground object region and the second foreground object region is smaller than the first threshold and larger than a second threshold, determining that the foreground object is in a motion state.
4. The item state detection method of claim 3, wherein after said obtaining the size and location of the second foreground object region, further comprising:
and if the foreground object is judged to be in the motion state, updating the size and the position of the second foreground object area to the size and the position of the first foreground object area.
5. The item state detection method of claim 3, wherein after said obtaining the size and location of the second foreground object region, further comprising:
if the intersection ratio of the first foreground object region and the second foreground object region is smaller than the second threshold value, determining that a new foreground object appears;
and establishing a new cache region to cache the new foreground object.
6. The method according to any one of claims 1 to 5, wherein said determining the status of the article according to the comparison result comprises:
when the comparison result meets a first preset condition, increasing a first counting value by one counting unit;
when the comparison result meets a second preset condition, increasing a second counting value by one counting unit;
and judging the article state according to the first counting value and the second counting value.
7. The item state detection method according to claim 6, wherein said determining the item state from the first count value and the second count value comprises:
if the first counting value is larger than a preset first counting threshold value, determining that article leaving occurs;
and if the second counting value is larger than a preset second counting threshold value, determining that article taking occurs.
8. An article condition detection system, comprising:
the acquisition module is used for acquiring the f frame video image data, wherein f is a positive integer;
a detection module, configured to perform foreground object detection on the f-th frame of video image data to obtain a first foreground object region, where the first foreground object region is a foreground object region in the f-th frame of video image data;
the comparison module is used for comparing the first foreground object region with a second foreground object region which is cached in advance to obtain a comparison result, wherein the second foreground object region is a foreground object region in the f-x frame video image data, and x is a positive integer;
and the state judgment module is used for judging the state of the article according to the comparison result.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the item status detection method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the item status detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910974637.3A CN110706227B (en) | 2019-10-14 | 2019-10-14 | Article state detection method, system, terminal device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110706227A true CN110706227A (en) | 2020-01-17 |
CN110706227B CN110706227B (en) | 2022-07-05 |
Family
ID=69198835
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910974637.3A Active CN110706227B (en) | 2019-10-14 | 2019-10-14 | Article state detection method, system, terminal device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110706227B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101552910A (en) * | 2009-03-30 | 2009-10-07 | 浙江工业大学 | Lave detection device based on comprehensive computer vision |
CN102314591A (en) * | 2010-07-09 | 2012-01-11 | 株式会社理光 | Method and equipment for detecting static foreground object |
CN102509075A (en) * | 2011-10-19 | 2012-06-20 | 北京国铁华晨通信信息技术有限公司 | Remnant object detection method and device |
CN106228572A (en) * | 2016-07-18 | 2016-12-14 | 西安交通大学 | The long inactivity object detection of a kind of carrier state mark and tracking |
US20170043982A1 (en) * | 2014-05-06 | 2017-02-16 | Otis Elevator Company | Object detector, and method for controlling a passenger conveyor system using the same |
CN106937120A (en) * | 2015-12-29 | 2017-07-07 | 北京大唐高鸿数据网络技术有限公司 | Object-based monitor video method for concentration |
CN109147254A (en) * | 2018-07-18 | 2019-01-04 | 武汉大学 | A kind of video outdoor fire disaster smog real-time detection method based on convolutional neural networks |
CN109859236A (en) * | 2019-01-02 | 2019-06-07 | 广州大学 | Mobile object detection method, calculates equipment and storage medium at system |
CN109948611A (en) * | 2019-03-14 | 2019-06-28 | 腾讯科技(深圳)有限公司 | A kind of method and device that method, the information of information area determination are shown |
CN110069961A (en) * | 2018-01-24 | 2019-07-30 | 北京京东尚科信息技术有限公司 | A kind of object detecting method and device |
CN110189355A (en) * | 2019-05-05 | 2019-08-30 | 暨南大学 | Safe escape channel occupies detection method, device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110766679B (en) | Lens contamination detection method and device and terminal equipment | |
CN110248110B (en) | Shooting parameter setting method, setting device, terminal equipment and readable storage medium | |
CN110853076A (en) | Target tracking method, device, equipment and storage medium | |
CN111950543B (en) | Target detection method and device | |
CN108769634B (en) | Image processing method, image processing device and terminal equipment | |
CN110933497A (en) | Video image data frame insertion processing method and related equipment | |
CN110968718B (en) | Target detection model negative sample mining method and device and electronic equipment | |
CN108174057B (en) | Method and device for rapidly reducing noise of picture by utilizing video image inter-frame difference | |
CN112767281B (en) | Image ghost eliminating method and device, electronic equipment and storage medium | |
CA2910965A1 (en) | Tracker assisted image capture | |
CN109886864B (en) | Privacy mask processing method and device | |
CN101489121A (en) | Background model initializing and updating method based on video monitoring | |
CN113391779B (en) | Parameter adjusting method, device and equipment for paper-like screen | |
CN108765454A (en) | A kind of smog detection method, device and device end based on video | |
CN111144337A (en) | Fire detection method and device and terminal equipment | |
CN103810718A (en) | Method and device for detection of violently moving target | |
CN108961293B (en) | Background subtraction method, device, equipment and storage medium | |
CN108961316A (en) | Image processing method, device and server | |
CN110232303B (en) | Apparatus, method, and medium for image processing | |
CN113706446B (en) | Lens detection method and related device | |
CN105338221A (en) | Image processing method and electronic equipment | |
CN110706227B (en) | Article state detection method, system, terminal device and storage medium | |
CN104933688B (en) | Data processing method and electronic equipment | |
CN104978731A (en) | Information processing method and electronic equipment | |
CN116503640A (en) | Video detection method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20231219 Address after: 523000 beside Nanlang Road, Ecological Industrial Park, Dongguan City, Guangdong Province Patentee after: DONGGUAN TP-LINK TECHNOLOGY CO.,LTD. Address before: 518000 the 1st and 3rd floors of the south section of building 24 and the 1st-4th floor of the north section of building 28, Shennan Road Science and Technology Park, Nanshan District, Shenzhen City, Guangdong Province Patentee before: TP-LINK TECHNOLOGIES Co.,Ltd. |