CN111260695A - Throw-away sundry identification algorithm, system, server and medium - Google Patents
- Publication number
- CN111260695A (application number CN202010051778.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- background
- acquiring
- texture difference
- diff1
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/254—Image analysis; analysis of motion involving subtraction of images
- G06T7/215—Image analysis; motion-based segmentation
- G06T2207/10016—Image acquisition modality: video; image sequence
- G06T2207/20224—Special algorithmic details: image combination; image subtraction
Abstract
The invention discloses a thrown-sundries identification algorithm, system, server and medium. The algorithm comprises: performing background modeling based on a multi-Gaussian model and obtaining a background image BG within a first preset time; obtaining a first texture difference image diff1 by computing the background difference between a foreground image FG and the background image BG; binarizing diff1 with a first threshold to obtain a binary image, and performing contour detection on the binary image to obtain pre-selected sundry regions; within a second preset time, computing the background difference between the image of each pre-selected sundry region and the new background image to obtain a second texture difference image diff2, and judging whether diff2 is smaller than a second threshold; if it is smaller, the region is determined to be sundries and the video clip is retained. Compared with single-frame sundry recognition, the method requires no model training and has a low false-recognition rate, and the algorithm offers good robustness to occlusion and a wide application range.
Description
Technical Field
The invention relates to the technical field of data processing, and in particular to a thrown-sundries identification algorithm, system, server and medium.
Background
Video-based action recognition algorithms are typically designed with the action itself as the entry point for recognition. However, the action of discarding sundries is difficult to capture and recognize directly, because it can be expressed by the limbs in many ways, for example a slight motion of dropping litter to the ground while walking. In places with strict requirements on the ground environment, such as factories and shopping malls, how to identify discarded sundries indirectly, through other means, is a problem that urgently needs to be solved.
Disclosure of Invention
The invention aims to provide a thrown-sundries identification algorithm, system, server and medium that perform recognition based on video. In accordance with the characteristics of discarded sundries, the process by which an object goes from appearing in the video to existing stably in it is used indirectly as the basis for identifying sundries, so that the algorithm requires no model training, has a low false-recognition rate, offers good robustness to occlusion, and has a wide application range.
In order to achieve the above object, in a first aspect, an embodiment of the present invention provides a throw-away sundry identification algorithm, including:
acquiring a data frame input by a video stream, performing background modeling based on a multi-Gaussian model, and acquiring a background image BG within a first preset time;
obtaining a first texture difference image diff1 by making a background difference based on a foreground image FG and the background image BG;
carrying out binarization processing on the first texture difference image diff1 based on a first threshold value to obtain a binary image, carrying out contour detection on the binary image to obtain a preselected item impurity area, and recording the contour of the preselected item impurity area and a corresponding image;
obtaining a corresponding image of the preselected impurity area and a new background image in a second preset time, performing background difference calculation to obtain a second texture difference image diff2, and judging whether the second texture difference image diff2 is smaller than a second threshold value;
if diff2 is smaller than the second threshold, the region is determined to be sundries, the video clip is retained, and prompt information is output to a terminal.
In an embodiment, the obtaining of the first texture difference image diff1 by making a background difference based on the foreground image FG and the background image BG specifically includes:
acquiring the current frame as a color image and converting it into a grayscale image to obtain the foreground image FG;
and obtaining the first texture difference image diff1 as the element-wise absolute value of the difference between the foreground image FG and the background image BG.
In one embodiment, the binarizing the first texture difference image diff1 based on a first threshold to obtain a binary image, performing contour detection on the binary image to obtain a preselected impurity region, and recording a contour of the preselected impurity region and a corresponding image, specifically includes:
setting pixels in the first texture difference image diff1 that are greater than the first threshold to 255, and pixels less than or equal to the first threshold to 0, to obtain the binary image;
acquiring the area of the outline region of the binary image, and judging whether the area is smaller than a third threshold value;
if yes, obtaining a pre-selected impurity area;
if not, recording a first count of background-difference operations.
In one embodiment, after the first count of background differences is obtained, the algorithm further comprises:
judging whether the first count of background differences exceeds a preset number;
if it exceeds the preset number, determining that the object is not sundries;
if not, continuing to binarize the first texture difference image diff1 to obtain a binary image.
In one embodiment, after acquiring the data frames of the video stream input, performing background modeling based on the multi-Gaussian model, and acquiring the background image BG within the first preset time, the algorithm further includes:
judging whether the data frame is empty or not;
if yes, controlling to end the corresponding process;
if not, computing the background difference based on the foreground image FG and the background image BG to obtain the first texture difference image diff1.
In an embodiment, obtaining an image corresponding to the preselected impurity area and a new background image within a second preset time, performing background difference calculation to obtain a second texture difference image diff2, and determining whether the second texture difference image diff2 is smaller than a second threshold, specifically further comprising:
if diff2 is greater than or equal to the second threshold, recording a second count of background-difference operations and judging whether the second count exceeds a preset number;
if it exceeds the preset number, determining that the object is not sundries;
if not, continuing to binarize the first texture difference image diff1 to obtain a binary image.
In a second aspect, an embodiment of the present invention provides a thrown sundry article identification system, including a module for executing the thrown sundry article identification algorithm of the first aspect.
In a third aspect, an embodiment of the present invention provides a server, including a processor, a communication interface, and a memory, where the processor, the communication interface, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the throw sundry identification algorithm according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a medium having stored therein instructions that, when run on a computer, cause the computer to execute the thrown-sundries identification algorithm of the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the thrown-sundries identification algorithm of the first aspect.
The invention discloses a thrown-sundries recognition algorithm, system, server and medium. Background modeling is performed based on a multi-Gaussian model, and a background image BG is obtained within a first preset time; a first texture difference image diff1 is obtained as the background difference between a foreground image FG and the background image BG; diff1 is binarized with a first threshold to obtain a binary image, and contour detection is performed on the binary image to obtain pre-selected sundry regions; within a second preset time, the background difference between the image of each pre-selected sundry region and the new background image is computed to obtain a second texture difference image diff2, and whether diff2 is smaller than a second threshold is judged; if it is smaller, the region is determined to be sundries and the video clip is retained. Compared with single-frame sundry recognition, the method requires no model training and has a low false-recognition rate, and the algorithm offers good robustness to occlusion and a wide application range.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a throw-away identification algorithm according to an embodiment of the present invention;
FIG. 2 is a detailed flow chart of a thrown-sundries identification algorithm according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a thrown sundry item recognition system according to an embodiment of the invention;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the process of identifying an object that is not sundries according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an identification process of sundries according to an embodiment of the present invention;
in the figure: 300-throw sundries recognition system, 301-acquisition module, 302-processing module, 303-judgment module, 400-server, 401-processor, 402-communication interface and 403-memory.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of an identification algorithm for throwing sundries according to an embodiment of the present invention. Specifically, the throw article identification algorithm may include the following steps:
s101, data frames input by video streaming are obtained, background modeling is carried out based on a multi-Gaussian model, and a background image BG is obtained within first preset time.
Referring to fig. 2, in the embodiment of the present invention, a data frame is a protocol data unit of the data link layer and comprises three parts: a frame header, a data section and a frame trailer. The header and trailer contain necessary control information, such as synchronization, address and error-control information; the data section contains the data passed down by the network layer, such as IP packets. Whether the data frame is empty is judged; if so, the corresponding process is ended; if not, the background difference based on the foreground image FG and the background image BG is computed to obtain the first texture difference image diff1. This avoids needlessly occupying the process and saves resources.
Background modeling, also known as background estimation, constructs a background image from the relationship between multiple frames. The background image is constructed mainly to convert the problem of detecting a moving object in a video frame into a binary classification against the currently estimated background: every pixel is classified as either background or moving foreground. Sundries that have newly entered the picture belong to the foreground, but as the number of frames in which they stay increases, they gradually become part of the background. This transition, from not being part of the background to slowly becoming the background, is the basis for identifying sundries; the process from the sundries appearing in the picture to their stabilizing in it is the complete process of being discarded. In the Gaussian-model method, for each independent pixel in the video image, whether the pixel belongs to the foreground or the background is judged from the relationship between the change of its observed value across the image sequence and a Gaussian distribution density function, and the background image BG is thereby obtained.
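The per-pixel Gaussian test described above can be sketched as follows. This is a simplified, pure-NumPy single-Gaussian stand-in for the full multi-Gaussian (mixture) model of the patent; in practice a library routine such as OpenCV's createBackgroundSubtractorMOG2 would implement the mixture described here. The class name, learning rate and match threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

class GaussianBackground:
    """Simplified per-pixel Gaussian background model (single-Gaussian
    stand-in for the multi-Gaussian model described in the text)."""

    def __init__(self, shape, lr=0.05, k=2.5):
        self.mean = np.zeros(shape, dtype=np.float64)  # per-pixel mean
        self.var = np.full(shape, 15.0 ** 2)           # per-pixel variance
        self.lr = lr                                   # learning rate (assumed)
        self.k = k                                     # match threshold in std-devs (assumed)
        self._init = False

    def apply(self, gray):
        """Classify each pixel of a grayscale frame and update the model."""
        gray = gray.astype(np.float64)
        if not self._init:
            self.mean[:] = gray
            self._init = True
        # A pixel is foreground if it lies outside k standard deviations
        # of its per-pixel Gaussian.
        fg_mask = np.abs(gray - self.mean) > self.k * np.sqrt(self.var)
        # Exponential running update of mean and variance; a long-stationary
        # object is gradually absorbed into the background.
        d = gray - self.mean
        self.mean += self.lr * d
        self.var += self.lr * (d * d - self.var)
        return fg_mask.astype(np.uint8) * 255

    def background(self):
        return self.mean.astype(np.uint8)  # current background image BG
```

A new object is flagged as foreground at first, then slowly merged into BG as frames repeat, which is exactly the absorption process the two-stage identification relies on.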
And S102, obtaining a first texture difference image diff1 by making a background difference based on the foreground image FG and the background image BG.
In the embodiment of the invention, background modeling is performed from the first frame of the video to obtain the background image; each subsequent frame is called a foreground image. As the video is updated frame by frame, the background modeling algorithm updates the background based on the statistics of previously observed foreground. The sundries defined in the invention are newly appearing objects in a foreground frame that did not initially exist in the background; these are taken as the pre-selected candidates.
Sundry identification is divided into two stages. The first stage selects the pre-selected sundry candidates, which typically include passing pedestrians, debris discarded by workers, a moving production line, and so on. Compared with the other candidates, sundries that have genuinely been discarded have the characteristic of not moving for a long time, and objects that do not move for a long time are easily updated by background modeling into stable background objects. According to this characteristic, the second stage of identification is defined as follows: when a foreground object that did not exist in the background becomes stable as part of the background, it is identified as sundries; all foreground objects that satisfy this logic are identified as sundries. To summarize, the whole process of sundry identification is divided into two stages:
the first stage: select foreground objects that do not exist in the background image as pre-selected candidates for second-stage identification;
the second stage: within a certain time, judge which first-stage candidates stabilize to become part of the background.
according to the invention, through the discrimination of the two stages, the possible options of the sundries are found out through the side surface of the first stage, so that the sundries are prevented from being directly identified at one time, namely, the situation that the wrong identification is easily caused is avoided; through the filtering of the second stage, the object which is really the sundries is positioned from a plurality of pre-selected items, and the judgment basis is that the sundries are not moved after being thrown on the ground. The two stages are used for rough positioning and fine positioning, the process of the rough positioning is also the beginning of the appearance of sundries and is used for the beginning of video interception, and the end of the fine positioning is also the end of the video interception of the sundries.
The current frame, a color image, is acquired and converted into a grayscale image to obtain the foreground image FG; the first texture difference image diff1 is obtained as the element-wise absolute value of the difference between the foreground image FG and the background image BG. The formula is: diff1 = abs(FG - BG), where abs denotes the element-wise absolute value.
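A minimal sketch of this background-difference step follows. Note that subtracting uint8 images directly wraps around modulo 256, so the frames are widened to a signed type first (OpenCV's cv2.absdiff performs the equivalent operation); the grayscale weights are the standard BT.601 luma coefficients, an assumption since the patent does not specify the conversion.

```python
import numpy as np

def texture_diff(fg, bg):
    """diff1 = abs(FG - BG), computed safely for uint8 images.

    Direct uint8 subtraction wraps around (e.g. 10 - 200 == 66), so the
    frames are widened to int16 before taking the absolute difference."""
    d = np.abs(fg.astype(np.int16) - bg.astype(np.int16))
    return d.astype(np.uint8)

def to_gray(rgb):
    """Convert an H x W x 3 color frame to grayscale (ITU-R BT.601 weights)."""
    w = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ w).round().astype(np.uint8)
```

Regions where diff1 stays near 0 are unchanged background; regions well above 100 indicate moved or newly added objects, which feeds the binarization step below.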
S103, performing binarization processing on the first texture difference image diff1 based on a first threshold value to obtain a binary image, performing contour detection on the binary image to obtain a preselected impurity region, and recording a contour of the preselected impurity region and a corresponding image.
In the embodiment of the present invention, the range of a grayscale image is 0 to 255. After the background difference is taken, the texture values of an object that has not moved are close to 0, while regions of diff1 containing a moved or newly added object typically exceed 100. Pixels in the first texture difference image diff1 that are greater than the first threshold are set to 255, and pixels less than or equal to the first threshold are set to 0, yielding the binary image; here the first threshold thresh is 30.
Generally, sundries are not large, so large pre-selected objects are filtered out by contour area; the remaining contour regions are the pre-selected sundry regions, and their positions and images are recorded for the judgment of the second stage. The area of each contour region of the binary image is acquired, and whether it is smaller than a third threshold is judged; if so, a pre-selected sundry region is obtained; if not, the first count of background differences is recorded.
After the first count of background differences is obtained, the algorithm further judges whether it exceeds a preset number; if it exceeds the preset number, the object is determined not to be sundries; if not, the first texture difference image diff1 continues to be binarized into a binary image, until the contour traversal is finished.
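The binarization and area filter just described can be sketched as follows. The first threshold of 30 comes from the text; the area threshold is an assumed illustrative value, since the patent leaves the third threshold unspecified. A 4-connected flood fill stands in for the contour detection (in OpenCV this would be cv2.findContours plus cv2.contourArea).

```python
import numpy as np
from collections import deque

FIRST_THRESH = 30   # first threshold, from the description
AREA_THRESH = 400   # third threshold (max sundry area, pixels); assumed value

def binarize(diff1, thresh=FIRST_THRESH):
    """Pixels above the threshold become 255, the rest 0."""
    return np.where(diff1 > thresh, 255, 0).astype(np.uint8)

def candidate_regions(binary, area_thresh=AREA_THRESH):
    """Return pixel-coordinate lists of connected foreground regions whose
    area is below the threshold; large regions (people, machinery) are
    filtered out. BFS flood fill stands in for contour detection."""
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] != 255 or seen[y, x]:
                continue
            q, comp = deque([(y, x)]), []
            seen[y, x] = True
            while q:
                cy, cx = q.popleft()
                comp.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] == 255 and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if len(comp) < area_thresh:  # keep only small regions as candidates
                regions.append(comp)
    return regions
```

Each surviving region's pixel coordinates give both the contour position and the image patch to record for the second stage.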
S104: within a second preset time, the background difference between the image corresponding to the pre-selected sundry region and the new background image is computed to obtain a second texture difference image diff2, and whether diff2 is smaller than a second threshold is judged.
In the embodiment of the invention, because background modeling is a continuous updating process, after a certain number of frames, fixed sundries that do not move are updated into the background, while moving objects are not; a moving object instead appears as a ghost during the background update. Every Interval of 10 frames, the background difference between the image corresponding to the first-stage pre-selected sundry region and the new background image is computed to obtain the second texture difference image diff2, and whether diff2 is smaller than the second threshold is judged; here the second threshold thresh is 15. If diff2 is smaller than the second threshold, the contour region of the first stage is determined to have been updated into the background, which can also be understood as the region not having moved for a long time: because it does not move, it is repeatedly superimposed onto the background in each frame's update until it becomes part of the background. In that case the region is determined to be sundries, the video clip is retained, and prompt information is output to a terminal. The prompt message can be, for example, "Sundries present, please clean up", and the terminal can be a buzzer, a computer, a tablet or a mobile phone. If diff2 is greater than or equal to the second threshold, a second count of background differences is recorded and whether it exceeds a preset number is judged; if it exceeds the preset number, the object is determined not to be sundries; if not, the first texture difference image diff1 continues to be binarized into a binary image.
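The second-stage check can be sketched as below. The second threshold of 15 and the 10-frame interval come from the text; the limit on re-checks (the time-filtering mechanism) is an assumed value, and the use of the mean absolute difference over the region is an interpretation, since the patent does not state how diff2 is reduced to a single comparison.

```python
import numpy as np

SECOND_THRESH = 15   # second threshold, from the description
INTERVAL = 10        # re-check every 10 frames, as in the description
MAX_CHECKS = 30      # preset limit on re-checks; assumed value

def is_settled_sundry(patch, bg_patch, thresh=SECOND_THRESH):
    """Stage two: the candidate counts as discarded sundries once its image
    patch has been absorbed into the updated background, i.e. the mean
    absolute difference diff2 over the region falls below the threshold."""
    diff2 = np.abs(patch.astype(np.int16) - bg_patch.astype(np.int16))
    return float(diff2.mean()) < thresh

def track_candidate(patch, bg_patches):
    """Run the periodic check over successive background snapshots (one per
    INTERVAL frames). Returns True if the candidate settles into the
    background before MAX_CHECKS re-checks, else False (not sundries)."""
    for n, bg_patch in enumerate(bg_patches):
        if n >= MAX_CHECKS:
            return False        # time filter: long-unsettled candidates dropped
        if is_settled_sundry(patch, bg_patch):
            return True
    return False
```

A stationary object's patch converges toward the background snapshots and triggers True; a moving object (or one occluded for too long) exhausts the check budget and is discarded as not sundries.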
The invention discloses a thrown-sundries recognition algorithm in which background modeling is performed based on a multi-Gaussian model and a background image BG is obtained within a first preset time; a first texture difference image diff1 is obtained as the background difference between a foreground image FG and the background image BG; diff1 is binarized with a first threshold and contour detection is performed on the binary image to obtain pre-selected sundry regions; within a second preset time, the background difference between the image of each pre-selected sundry region and the new background image is computed to obtain a second texture difference image diff2, and whether diff2 is smaller than a second threshold is judged; if it is smaller, the region is determined to be sundries and the video clip is retained. Compared with single-frame sundry recognition, the method requires no model training and has a low false-recognition rate, and the algorithm is robust to occlusion and widely applicable: when sundries are occluded for a short time, the background modeling is only slightly affected and the second-stage difference still falls below the second threshold. For long occlusions, a time-filtering mechanism is adopted: a foreground candidate region that fails to satisfy the second stage for a long time is considered not to meet the requirement and is no longer tracked, so the algorithm does not claim robustness for sundries occluded for a long time.
Overall, the characteristic that sundries stay in place for a long time is used indirectly as the basis for identifying them; the method has high real-time performance, occupies few resources, and is stable and reliable in simple settings such as factories.
The overall flow of the thrown-sundries identification algorithm is as follows: acquire the data frames of the video stream input, perform background modeling based on a multi-Gaussian model, and acquire a background image BG within a first preset time; obtain a first texture difference image diff1 as the background difference between a foreground image FG and the background image BG; binarize diff1 with a first threshold to obtain a binary image, perform contour detection on the binary image to obtain the pre-selected sundry regions, and record their contours and corresponding images; within a second preset time, compute the background difference between the image of each pre-selected sundry region and the new background image to obtain a second texture difference image diff2, and judge whether diff2 is smaller than a second threshold; if it is smaller, determine the region to be sundries, retain the video clip, and output prompt information to the terminal.
Referring to fig. 5, the process of identifying an object that is not sundries is now explained in detail. At a first preset time, for example Tuesday 2019-07-02 03:18:08, the current background image BG is acquired and whether the data frame of the video stream input is empty is judged; if not, the current frame, a color image, is acquired and converted into a grayscale image to obtain the foreground image FG, and the first texture difference image diff1 = abs(FG - BG) is computed as the element-wise absolute difference between FG and BG. Pixels of diff1 greater than the first threshold are set to 255 and pixels less than or equal to it are set to 0 to obtain the binary image; the area of each contour region of the binary image is acquired and whether it is smaller than the third threshold is judged; if not, the first count of background differences is recorded. Whether the first count exceeds the preset number is then judged; if it exceeds it, the object is determined not to be sundries; if not, diff1 continues to be binarized and the above process repeats. In other words, no object in the picture completes the process from appearing to stabilizing into the picture, so no sundries are identified.
Referring to fig. 6, the process of identifying sundries is now explained in detail. At a first preset time, for example Tuesday 2019-07-02 03:18:08, the current background image BG is acquired and whether the data frame of the video stream input is empty is judged; if not, the current frame, a color image, is acquired and converted into a grayscale image to obtain the foreground image FG, and the first texture difference image diff1 = abs(FG - BG) is computed. Pixels of diff1 greater than the first threshold are set to 255 and the rest to 0 to obtain the binary image; the area of each contour region is acquired and whether it is smaller than the third threshold is judged; if so, a pre-selected sundry region is obtained. The background difference between the image corresponding to the pre-selected sundry region and the new background image at a second preset time, for example Tuesday 2019-07-02 03:18:18, is then computed to obtain the second texture difference image diff2, and whether diff2 is smaller than the second threshold is judged; if it is smaller, the region is determined to be sundries, the video clip is retained, and prompt information is output to the terminal. In other words, within 10 seconds there is a complete process from appearing in the picture to stabilizing in it.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a system 300 for identifying thrown sundries according to an embodiment of the present invention. The throw article identification system 300 described in this embodiment includes modules for the throw article identification algorithm of the first aspect described above. The method specifically comprises the following steps: an acquisition module 301, a processing module 302 and a judgment module 303; wherein:
the acquiring module 301 is configured to acquire a data frame input by a video stream, perform background modeling based on a multiple gaussian model, and acquire a background image BG within a first preset time;
the processing module 302 is configured to obtain a first texture difference image diff1 by making a background difference based on a foreground image FG and the background image BG;
the processing module 302 is configured to perform binarization processing on the first texture difference image diff1 based on a first threshold value to obtain a binary image, perform contour detection on the binary image to obtain a preselected impurity area, and record a contour of the preselected impurity area and a corresponding image;
the judging module 303 is configured to obtain an image corresponding to the preselected impurity area and a new background image within a second preset time, perform background difference calculation to obtain a second texture difference image diff2, and judge whether the second texture difference image diff2 is smaller than a second threshold;
if it is smaller than the second threshold, the object is determined to be sundries, the video clip is retained, and prompt information is output to the terminal.
In an embodiment, in obtaining a first texture difference image diff1 by making a background difference based on a foreground image FG and the background image BG, the acquiring module 301 is specifically configured to acquire the current frame, which is a color image, and convert it into a grayscale image to obtain the foreground image FG;
and to obtain the first texture difference image diff1 by performing a matrix operation on the absolute value of the difference between the foreground image FG and the background image BG.
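This matrix operation can be sketched with NumPy (assumed here; the patent names no library). Note that both images must be cast to a signed type before subtracting, since direct uint8 subtraction would wrap around:

```python
import numpy as np

def texture_difference(fg, bg):
    """diff1 = |FG - BG|, computed element-wise.

    fg and bg are grayscale images of equal shape. Casting to int16
    first avoids uint8 wrap-around (e.g. 10 - 30 would otherwise
    become 236 instead of 20)."""
    return np.abs(fg.astype(np.int16) - bg.astype(np.int16)).astype(np.uint8)
```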
In an embodiment, in the step of performing binarization processing on the first texture difference image diff1 based on a first threshold to obtain a binary image, performing contour detection on the binary image to obtain a preselected impurity area, and recording the contour of the preselected impurity area and a corresponding image, the acquiring module 301 is configured to obtain the binary image by setting each pixel of the first texture difference image diff1 that is greater than the first threshold to 255 and each pixel that is less than or equal to the first threshold to 0;
the judging module 303 is configured to acquire the area of the contour region of the binary image and judge whether the area is smaller than a third threshold;
if so, a preselected impurity area is obtained;
if not, a first number of times that the background difference has been performed is acquired.
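The binarization and contour-area steps can be sketched as follows. Contour detection would normally use a routine such as OpenCV's `findContours`; the connected-component flood fill below is only a dependency-free stand-in, and the function names are illustrative:

```python
from collections import deque

import numpy as np

def binarize(diff1, first_threshold):
    """Pixels of diff1 above the first threshold become 255, the rest 0."""
    return np.where(diff1 > first_threshold, 255, 0).astype(np.uint8)

def region_areas(binary):
    """Pixel areas of the 4-connected foreground regions of a binary
    image; each area would then be compared against the third threshold
    to decide whether the region becomes a preselected impurity area."""
    rows, cols = binary.shape
    visited = np.zeros(binary.shape, dtype=bool)
    areas = []
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] == 255 and not visited[r, c]:
                visited[r, c] = True
                area, queue = 0, deque([(r, c)])
                while queue:  # flood-fill one region, counting pixels
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny, nx] == 255
                                and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return areas
```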
In an embodiment, after acquiring the first number of times that the background difference has been performed, the judging module 303 is configured to judge whether the first number exceeds a preset number;
if it exceeds the preset number, the object is determined not to be sundries;
if not, the first texture difference image diff1 is binarized again to obtain a binary image.
In an embodiment, after the data frame input by the video stream is acquired, background modeling is performed based on the multiple-Gaussian model, and the background image BG is acquired within the first preset time, the judging module 303 is configured to judge whether the data frame is empty;
if so, the corresponding process is controlled to end;
if not, a background difference is made based on the foreground image FG and the background image BG to obtain the first texture difference image diff1.
In an embodiment, in obtaining a second texture difference image diff2 by performing a background difference calculation on the acquired image corresponding to the preselected impurity area and a new background image within a second preset time, and judging whether the second texture difference image diff2 is smaller than a second threshold, the judging module 303 is configured to, if diff2 is greater than or equal to the second threshold, acquire a second number of times that the background difference has been performed and judge whether the second number exceeds a preset number;
if it exceeds the preset number, the object is determined not to be sundries;
if not, the first texture difference image diff1 is binarized again to obtain a binary image.
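The retry logic of this embodiment can be sketched as a small decision function (the names and return values are illustrative, not from the patent):

```python
def second_stage_decision(diff2_score, second_threshold,
                          attempts, preset_attempts):
    """One step of the second-stage check: a candidate region whose
    second texture difference stays below the second threshold has
    stabilized in the picture and is judged to be sundries; otherwise
    the check is retried until the preset number of background
    differences is exhausted."""
    if diff2_score < second_threshold:
        return "sundries"      # keep the video clip, prompt the terminal
    if attempts + 1 > preset_attempts:
        return "not sundries"  # preset number of differences exceeded
    return "retry"             # binarize diff1 again and re-check
```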
In a third aspect, please refer to fig. 4, where fig. 4 is a schematic structural diagram of a server 400 according to an embodiment of the present invention, the server 400 described in the embodiment of the present invention includes: a processor 401, a communication interface 402, a memory 403. The processor 401, the communication interface 402, and the memory 403 may be connected by a bus or in other manners, and the embodiment of the present invention is exemplified by being connected by a bus.
The processor 401 may be a Central Processing Unit (CPU), a Network Processor (NP), a Graphics Processing Unit (GPU), or a combination of a CPU, a GPU, and an NP. The processor 401 may also be a core of a multi-core CPU, a multi-core GPU, or a multi-core NP for implementing communication identity binding.
The processor 401 may be a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The communication interface 402 may be used for transceiving information or signaling interaction, as well as receiving and transferring signals, and the communication interface 402 may be a transceiver. The memory 403 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and a program required by at least one function (e.g., a text storage function, a location storage function, etc.); the data storage area may store data (such as image data and text data) created according to the use of the server 400, and may also include application programs. Further, the memory 403 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The memory 403 is also used for storing program instructions. The processor 401 may call the program instructions stored in the memory 403 to implement the data processing method according to the embodiment of the present invention.
Specifically, the processor 401 invokes a program instruction stored in the memory 403 to execute or invokes the communication interface 402 to execute the following steps:
acquiring a data frame input by a video stream, performing background modeling based on a multi-Gaussian model, and acquiring a background image BG within a first preset time;
obtaining a first texture difference image diff1 by making a background difference based on a foreground image FG and the background image BG;
performing binarization processing on the first texture difference image diff1 based on a first threshold to obtain a binary image, performing contour detection on the binary image to obtain a preselected impurity area, and recording the contour of the preselected impurity area and a corresponding image;
obtaining a corresponding image of the preselected impurity area and a new background image in a second preset time, performing background difference calculation to obtain a second texture difference image diff2, and judging whether the second texture difference image diff2 is smaller than a second threshold value;
if it is smaller than the second threshold, the object is determined to be sundries, the video clip is retained, and prompt information is output to the terminal.
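Taken together, the steps executed by the processor 401 amount to a two-stage check, which can be sketched end to end as follows (the default threshold values and the simplified backgrounds are assumptions for illustration; the patent leaves the concrete first, second and third thresholds unspecified):

```python
import numpy as np

def is_thrown_debris(fg, bg_first, bg_second,
                     first_threshold=30, second_threshold=10,
                     third_threshold=500):
    """Two-stage thrown-debris check on grayscale images.

    Stage 1: diff1 against the background at the first preset time
    selects a candidate region. Stage 2: diff2 against the new
    background at the second preset time (~10 s later) confirms that
    the object has stabilized in the picture."""
    diff1 = np.abs(fg.astype(np.int16) - bg_first.astype(np.int16))
    mask = diff1 > first_threshold          # binarization of diff1
    area = int(np.count_nonzero(mask))
    if area == 0 or area >= third_threshold:
        return False  # no change, or region too large to preselect
    # A stabilized object has been absorbed into the new background,
    # so the mean difference over the candidate region is small.
    diff2 = np.abs(fg.astype(np.int16) - bg_second.astype(np.int16))
    return float(diff2[mask].mean()) < second_threshold
```

The design choice here is that stage 2 compares the candidate against the *updated* background rather than tracking motion: a thrown object that stops moving is absorbed into the background model, so a small diff2 signals stability.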
In one embodiment, in obtaining the first texture difference image diff1 based on the background difference between the foreground image FG and the background image BG, the processor 401 invokes the program instructions stored in the memory 403 to execute or invokes the communication interface 402 to execute the following steps:
acquiring the current frame, which is a color image, and converting it into a grayscale image to obtain the foreground image FG;
and obtaining a first texture difference image diff1 by performing matrix operation on the absolute value of the difference between the foreground image FG and the background image BG.
In one embodiment, in the step of performing binarization processing on the first texture difference image diff1 based on a first threshold value to obtain a binary image, performing contour detection on the binary image to obtain a preselected impurity region, and recording a contour of the preselected impurity region and a corresponding image, the processor 401 invokes the program instruction stored in the memory 403 to execute or invokes the communication interface 402 to execute the following steps:
setting each pixel of the first texture difference image diff1 that is greater than the first threshold to 255 and each pixel that is less than or equal to the first threshold to 0, so as to obtain a binary image;
acquiring the area of the contour region of the binary image, and judging whether the area is smaller than a third threshold;
if so, obtaining a preselected impurity area;
if not, acquiring a first number of times that the background difference has been performed.
In one embodiment, after obtaining the first number of background subtraction operations, the processor 401 calls a program instruction stored in the memory 403 to execute or calls the communication interface 402 to execute the following steps:
judging whether the first number of times that the background difference has been performed exceeds a preset number;
if it exceeds the preset number, determining that the object is not sundries;
if not, binarizing the first texture difference image diff1 again to obtain a binary image.
In an embodiment, after acquiring a data frame input by a video stream, performing background modeling based on a multiple gaussian model, and acquiring a background image BG within a first preset time, the processor 401 invokes a program instruction stored in the memory 403 to execute or invokes the communication interface 402 to execute the following steps:
judging whether the data frame is empty or not;
if yes, controlling to end the corresponding process;
if not, a background difference is made based on the foreground image FG and the background image BG to obtain a first texture difference image diff1.
In an embodiment, after obtaining the corresponding image of the preselected impurity area and performing a background difference calculation on a new background image within a second preset time to obtain a second texture difference image diff2, and determining whether the second texture difference image diff2 is smaller than a second threshold, the processor 401 invokes the program instructions stored in the memory 403 to execute or invokes the communication interface 402 to execute the following steps:
if the second texture difference image diff2 is greater than or equal to the second threshold, acquiring a second number of times that the background difference has been performed, and judging whether the second number exceeds a preset number;
if it exceeds the preset number, determining that the object is not sundries;
if not, binarizing the first texture difference image diff1 again to obtain a binary image.
In a fourth aspect, the embodiment of the present invention further provides a medium, which is a computer-readable storage medium, and the computer-readable storage medium stores instructions that, when executed on a computer, cause the computer to execute the throw sundry identification algorithm according to the above method embodiment.
Embodiments of the present invention also provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the throw article identification algorithm described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts or combinations, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device provided by the embodiment of the invention can be combined, divided and deleted according to actual needs.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (9)
1. A throw debris recognition algorithm, comprising:
acquiring a data frame input by a video stream, performing background modeling based on a multi-Gaussian model, and acquiring a background image BG within a first preset time;
obtaining a first texture difference image diff1 by making a background difference based on a foreground image FG and the background image BG;
performing binarization processing on the first texture difference image diff1 based on a first threshold to obtain a binary image, performing contour detection on the binary image to obtain a preselected impurity area, and recording the contour of the preselected impurity area and a corresponding image;
obtaining a corresponding image of the preselected impurity area and a new background image in a second preset time, performing background difference calculation to obtain a second texture difference image diff2, and judging whether the second texture difference image diff2 is smaller than a second threshold value;
if it is smaller than the second threshold, the object is determined to be sundries, the video clip is retained, and prompt information is output to the terminal.
2. The throw debris recognition algorithm according to claim 1, wherein obtaining the first texture difference image diff1 by making a background difference based on a foreground image FG and the background image BG specifically comprises:
acquiring the current frame, which is a color image, and converting it into a grayscale image to obtain the foreground image FG;
and obtaining the first texture difference image diff1 by performing a matrix operation on the absolute value of the difference between the foreground image FG and the background image BG.
3. The throw debris recognition algorithm according to claim 1, wherein performing binarization processing on the first texture difference image diff1 based on a first threshold to obtain a binary image, performing contour detection on the binary image to obtain a preselected impurity area, and recording the contour of the preselected impurity area and a corresponding image specifically comprises:
setting each pixel of the first texture difference image diff1 that is greater than the first threshold to 255 and each pixel that is less than or equal to the first threshold to 0, so as to obtain a binary image;
acquiring the area of the contour region of the binary image, and judging whether the area is smaller than a third threshold;
if so, obtaining a preselected impurity area;
if not, acquiring a first number of times that the background difference has been performed.
4. The throw debris recognition algorithm according to claim 3, wherein after acquiring the first number of times that the background difference has been performed, the algorithm further comprises:
judging whether the first number exceeds a preset number;
if it exceeds the preset number, determining that the object is not sundries;
if not, binarizing the first texture difference image diff1 again to obtain a binary image.
5. The throw debris recognition algorithm according to claim 1, wherein after the data frame input by the video stream is acquired, background modeling is performed based on the multiple-Gaussian model, and the background image BG is acquired within the first preset time, the algorithm further comprises:
judging whether the data frame is empty;
if so, controlling the corresponding process to end;
if not, making a background difference based on the foreground image FG and the background image BG to obtain the first texture difference image diff1.
6. The throw debris recognition algorithm according to claim 1, wherein obtaining the image corresponding to the preselected impurity area and a new background image within a second preset time, performing background difference calculation to obtain a second texture difference image diff2, and judging whether the second texture difference image diff2 is smaller than a second threshold specifically further comprises:
if the second texture difference image diff2 is greater than or equal to the second threshold, acquiring a second number of times that the background difference has been performed, and judging whether the second number exceeds a preset number;
if it exceeds the preset number, determining that the object is not sundries;
if not, binarizing the first texture difference image diff1 again to obtain a binary image.
7. A throw debris recognition system, comprising means for performing the throw debris recognition algorithm of any one of claims 1 to 6.
8. A server, comprising a processor, a communication interface and a memory, the processor, the communication interface and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to execute the throw debris recognition algorithm of any one of claims 1 to 6.
9. A medium having stored therein instructions which, when run on a computer, cause the computer to execute the throw debris recognition algorithm of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010051778.0A CN111260695A (en) | 2020-01-17 | 2020-01-17 | Throw-away sundry identification algorithm, system, server and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111260695A true CN111260695A (en) | 2020-06-09 |
Family
ID=70923674
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010051778.0A Pending CN111260695A (en) | 2020-01-17 | 2020-01-17 | Throw-away sundry identification algorithm, system, server and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111260695A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111931724A (en) * | 2020-09-23 | 2020-11-13 | 北京百度网讯科技有限公司 | Signal lamp abnormity identification method and device, electronic equipment and road side equipment |
CN115330993A (en) * | 2022-10-18 | 2022-11-11 | 小手创新(杭州)科技有限公司 | Recovery system new-entry discrimination method based on low computation amount |
US11653052B2 (en) | 2020-10-26 | 2023-05-16 | Genetec Inc. | Systems and methods for producing a privacy-protected video clip |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1160726A2 (en) * | 2000-06-01 | 2001-12-05 | University of Washington | Object segmentation with background extraction and moving boundary techniques. |
CN102201121A (en) * | 2010-03-23 | 2011-09-28 | 鸿富锦精密工业(深圳)有限公司 | System and method for detecting article in video scene |
CN103325112A (en) * | 2013-06-07 | 2013-09-25 | 中国民航大学 | Quick detecting method for moving objects in dynamic scene |
CN103729858A (en) * | 2013-12-13 | 2014-04-16 | 广州中国科学院先进技术研究所 | Method for detecting article left over in video monitoring system |
CN106447674A (en) * | 2016-09-30 | 2017-02-22 | 北京大学深圳研究生院 | Video background removing method |
US20170358103A1 (en) * | 2016-06-09 | 2017-12-14 | California Institute Of Technology | Systems and Methods for Tracking Moving Objects |
CN109872341A (en) * | 2019-01-14 | 2019-06-11 | 中建三局智能技术有限公司 | A kind of throwing object in high sky detection method based on computer vision and system |
CN110232359A (en) * | 2019-06-17 | 2019-09-13 | 中国移动通信集团江苏有限公司 | It is detained object detecting method, device, equipment and computer storage medium |
CN110349189A (en) * | 2019-05-31 | 2019-10-18 | 广州铁路职业技术学院(广州铁路机械学校) | A kind of background image update method based on continuous inter-frame difference |
Non-Patent Citations (1)
Title |
---|
Jiang Weiyi: "Research and Application of Key Technologies for Intelligent Video Behavior Analysis" *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200609 |