CN113673454A - Remnant detection method, related device, and storage medium - Google Patents

Remnant detection method, related device, and storage medium


Publication number
CN113673454A
Authority
CN
China
Prior art keywords
image
target video
target
images
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110986893.1A
Other languages
Chinese (zh)
Inventor
陈孝良
苏少炜
宁海洋
常乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd filed Critical Beijing SoundAI Technology Co Ltd
Priority to CN202110986893.1A priority Critical patent/CN113673454A/en
Publication of CN113673454A publication Critical patent/CN113673454A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a remnant detection method, a related device, and a storage medium. The method comprises the following steps: acquiring a target video and reading image frames from the target video frame by frame; in the process of reading the image frames, updating a first image corresponding to the target video based on the N most recently read image frames; determining a second image based on the M most recently updated first images of the target video; performing a difference operation on the current first image and the second image corresponding to the target video to obtain a target image; and performing differentiation processing on the target image to determine a remnant region in the target image. In the embodiment of the invention, the first image is determined from a plurality of image frames and the second image from a plurality of first images, so that moving objects that stay only briefly in the current image frame are filtered out, improving the accuracy of remnant detection.

Description

Remnant detection method, related device, and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method for detecting a carry-over, a related device, and a storage medium.
Background
Remnant detection is a basic function of intelligent video surveillance. It mainly uses computer vision techniques to detect static remnants (abandoned objects) in a surveillance video and to convert the region where a remnant is located into a bounding box that identifies the remnant.
At present, remnant detection typically establishes a background image frame and performs a difference operation between the current image frame and the background image frame to obtain the region where a remnant is located. However, with this approach, a moving object that stays only briefly in the current image frame may be mistaken for a remnant, producing false detections and reducing the accuracy of remnant detection.
Disclosure of Invention
The embodiments of the present invention aim to provide a remnant detection method, a related device, and a storage medium, solving the technical problem that existing remnant detection methods cannot filter out moving targets, which reduces the accuracy of remnant detection. The specific technical scheme is as follows:
in a first aspect of the embodiments of the present invention, a remnant detection method is provided, including:
acquiring a target video, and reading image frames from the target video frame by frame;
in the process of reading the image frames, updating a first image corresponding to the target video based on the recently read N image frames; the first image is used for representing the current scene of the target video, and N is a first preset numerical value;
determining a second image based on the M first images which are updated recently by the target video; the second image is used for representing the background of the target video, and M is a second preset value;
performing difference operation on a current first image and the second image corresponding to the target video to obtain a target image;
and performing differentiation processing on the target image, and determining a carry-over area in the target image.
In a second aspect of the embodiments of the present invention, there is also provided a carry-over detection apparatus, including:
the acquisition module is used for acquiring a target video and reading image frames from the target video frame by frame;
the updating module is used for updating a first image corresponding to the target video based on the recently read N image frames in the process of reading the image frames; the first image is used for representing the current scene of the target video, and N is a first preset numerical value;
a first determining module, configured to determine a second image based on the latest updated M first images of the target video; the second image is used for representing the background of the target video, and M is a second preset value;
the operation module is used for carrying out differential operation on the current first image and the second image corresponding to the target video to obtain a target image;
and the second determination module is used for carrying out differentiation processing on the target image and determining a left object area in the target image.
In a third aspect of the embodiments of the present invention, there is also provided an intelligent trash can, the intelligent trash can including a camera and a processing module;
the camera is used for acquiring a target video;
the processing module is used for reading image frames from the target video frame by frame;
in the process of reading the image frames, updating a first image corresponding to the target video based on the recently read N image frames; the first image is used for representing the current scene of the target video, and N is a first preset numerical value;
determining a second image based on the M first images which are updated recently by the target video; the second image is used for representing the background of the target video, and M is a second preset value;
performing difference operation on a current first image and the second image corresponding to the target video to obtain a target image;
and performing differentiation processing on the target image, and determining a carry-over area in the target image.
In a fourth aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to execute the carryover detection method according to any one of the above-described embodiments.
In a fifth aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the carryover detection method of any one of the above embodiments.
In the embodiment of the invention, a target video is obtained and image frames are read from it frame by frame; in the process of reading the image frames, a first image corresponding to the target video is updated based on the N most recently read image frames; a second image is determined based on the M most recently updated first images of the target video; a difference operation is performed on the current first image and the second image to obtain a target image; and differentiation processing is performed on the target image to determine the remnant region in it. Because the first image is determined from a plurality of image frames and the second image from a plurality of first images, moving objects that stay only briefly in the current image frame are filtered out, improving the accuracy of remnant detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic flow chart of a method for detecting carryover in an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a carry-over detection apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the invention;
fig. 4 is a schematic structural diagram of an intelligent trash can in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a carryover detection method according to an embodiment of the present invention. The method for detecting the carry-over provided by the embodiment of the invention comprises the following steps:
s101, acquiring a target video, and reading image frames from the target video frame by frame.
The target video may be a surveillance video or other types of videos, and is not limited in this respect. In this step, after the target video is acquired, the image frames are read from the target video frame by frame, that is, the image frames in the target video are sequentially read according to the sequence of each image frame in the target video.
S102, in the process of reading the image frames, updating a first image corresponding to the target video based on the recently read N image frames.
The value of the N is a first preset value, and the first image is used for representing the current scene of the target video.
In this step, when the first N image frames have been read, a first image corresponding to the target video is generated based on those N image frames; thereafter, in the subsequent frame-by-frame reading process, each time another N consecutive image frames have been read, the first image is updated.
For example, the first predetermined value is 20, i.e., the value of N is 20. When the 20 th frame of the target video is read, a first image corresponding to the target video is generated based on the read 1 st to 20 th frames. When the 40 th frame of the target video is read, updating the first image corresponding to the target video based on the 21 st to 40 th frames.
A specific scheme for updating the first image based on the N image frames is described in the embodiments below.
S103, determining a second image based on the M first images which are updated recently by the target video.
And M is a second preset value, and the second image is used for representing the background of the target video.
In this step, after the M first images are updated, a second image is generated based on the M first images.
For example, N is 20 and M is 10. Since the first image corresponding to the target video is updated every time 20 image frames are read, when the 200th frame of the target video has been read, 10 first images have been produced; the second image can therefore be determined based on the 1st to the 10th first images.
When the 400th frame of the target video has been read, 20 first images have been produced, so the second image can be determined based on the 10 most recently updated first images of the target video, i.e. the 11th to the 20th first images.
A specific scheme for determining the second image based on the M most recently updated first images of the target video is described in the embodiments below.
And S104, performing difference operation on the current first image and the second image corresponding to the target video to obtain a target image.
In this step, after the second image is obtained, a difference operation is performed on the current first image and the second image corresponding to the target video to obtain the target image. The current first image may be understood as the first image of the target video that is updated recently, and the target image is also referred to as a difference image.
A difference operation between two images can be understood as subtracting the pixel values of corresponding pixel points of the two images, so as to weaken the parts in which the images are similar.
For example, N is 20 and M is 10; that is, the first image is updated once every 20 image frames are read, and a second image is generated from the 10 most recently updated first images each time 10 new first images have been produced. In this case, when the 210th frame is read, 10 first images have been produced, and the second image is determined based on the 1st to the 10th first images; the current first image is the most recently updated first image of the target video, i.e. the 10th first image. A difference operation between the 10th first image and the second image yields the target image.
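As a concrete illustration, the difference operation can be sketched with NumPy. The image shapes, the dtype, and the absolute-difference formulation here are assumptions; OpenCV's cv2.absdiff performs the same per-pixel operation:

```python
import numpy as np

def difference_image(first_image, second_image):
    """Per-pixel absolute difference between the current first image
    (recent-scene average) and the second image (background average).
    Similar regions cancel out toward 0; a remnant shows up bright."""
    a = first_image.astype(np.int16)  # widen so subtraction cannot wrap
    b = second_image.astype(np.int16)
    return np.abs(a - b).astype(np.uint8)

# Toy example: a static background plus one newly left object.
background = np.zeros((4, 4), dtype=np.uint8)
current = background.copy()
current[1:3, 1:3] = 200  # the "remnant"
target = difference_image(current, background)
```

The widening to int16 before subtracting avoids uint8 wraparound, which is also why cv2.absdiff (rather than plain `-`) is used in practice.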
And S105, performing differentiation processing on the target image, and determining a left object area in the target image.
In this step, after the target image is obtained, the target image is subjected to differentiation processing, at least one target area is generated in the target image, and further, whether the target area is a left-over area is determined.
The differentiation processing described above is used to highlight an object appearing in the target image, which may be a remnant or noise present in the target image. A remnant region is the region of the target image in which a remnant lies.
Specific manners of performing differentiation processing on the target image and of determining the remnant region in the target image are described in the embodiments below.
In the embodiment of the invention, a target video is obtained and image frames are read from it frame by frame; the first image corresponding to the target video is updated based on the N most recently read image frames; the second image is determined based on the M most recently updated first images; a difference operation on the current first image and the second image yields a target image; and differentiation processing of the target image determines the remnant region in it. Because the first image is determined from a plurality of image frames and the second image from a plurality of first images, moving objects that stay only briefly in the current image frame are filtered out, improving the accuracy of remnant detection. Moreover, even when the pixel difference between image frames is large, a remnant in the target video can be detected quickly from the current first image and the current second image.
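The overall flow of steps S101 to S105 can be sketched as follows. This is a minimal NumPy-only illustration with toy values of N and M (4 and 3 rather than the 20 and 10 used in the examples), not the patented implementation:

```python
import numpy as np

def detect_remnants(frames, n=4, m=3):
    """End-to-end sketch of S101-S105: build a first image from every
    n frames, a second (background) image from every m first images,
    then difference the current first image against the background."""
    first_images, second, buf = [], None, []
    for frame in frames:                      # S101: frame-by-frame read
        buf.append(frame.astype(np.float64))
        if len(buf) == n:                     # S102: update first image
            first_images.append(sum(buf) / n)
            buf.clear()
            if len(first_images) % m == 0:    # S103: update second image
                second = sum(first_images[-m:]) / m
            if second is not None:            # S104: difference operation
                yield np.abs(first_images[-1] - second)
                # S105 would binarize/dilate this difference image

frames = [np.zeros((2, 2), dtype=np.uint8) for _ in range(24)]
for f in frames[20:]:
    f[0, 0] = 120        # object left behind near the end of the video
targets = list(detect_remnants(frames))
```

In this toy run the first three first images are pure background, so early target images are all zero; once the object persists through the last first image, the final target image is bright at the object's pixel.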
In the following, a technical solution of how to update the first image based on the N image frames is specifically described:
optionally, the updating the first image corresponding to the target video based on the N image frames read recently includes:
accumulating and calculating N pixel values corresponding to the N image frames one by one to obtain a first accumulation result;
dividing the first accumulation result by N to obtain a first pixel average value corresponding to the N image frames;
and constructing a first image, wherein the pixel value of the first image is the first pixel average value.
In this embodiment, the first pixel average values corresponding to the N image frames are calculated, wherein the first pixel average values can be calculated in the following manner.
Reading a pixel value corresponding to each image frame to obtain N pixel values; a pixel value corresponding to an image frame may be understood as a pixel value corresponding to an image that is characterized by the image frame. And performing accumulation calculation on the N pixel values to obtain a first accumulation result. And performing division operation on the first accumulation result and N, and taking the division result as a first pixel average value.
Further, an image is constructed, and the pixel value of the image is set as the first pixel average value, so that a first image used for representing the current scene of the target video is obtained.
In the embodiment, the N image frames which are read recently are used for determining the first image which represents the current scene of the target video, and the first image is obtained based on the average value of the first pixels of the N image frames, so that the moving target which temporarily stays in the image frames can be filtered, and the accuracy of the detection of the remnant is improved.
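A minimal sketch of this accumulate-then-divide update, assuming uint8 grayscale frames of equal shape:

```python
import numpy as np

def build_first_image(frames):
    """Average the N most recently read frames pixel by pixel:
    accumulate, then divide by N. An object present in only a few
    of the N frames is strongly attenuated in the result."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:                  # first accumulation result
        acc += f
    return (acc / len(frames)).astype(np.uint8)  # first pixel average

# A moving object visible in 1 of 20 frames contributes only 1/20
# of its brightness to the first image.
frames = [np.zeros((2, 2), dtype=np.uint8) for _ in range(20)]
frames[5][0, 0] = 200  # transient object in a single frame
first_image = build_first_image(frames)
```

Accumulating in float64 before dividing avoids the overflow that summing twenty uint8 frames in place would cause.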
Optionally, the determining a second image based on the M first images that are updated recently by the target video includes:
storing a first image corresponding to the target video to a preset image set at intervals of preset duration; arranging the first images in the image set according to the sequence of storage time;
and under the condition that the number of the first images stored in the image set is a positive integer multiple of a third preset value, determining a second image based on the last M first images in the image set.
In this embodiment, an image set is preset, where the image set is used to store a first image, and the first images in the image set are arranged according to the sequence of storage time.
In this embodiment, the first image corresponding to the target video is stored to a preset image set at intervals of a preset duration. Illustratively, the preset time duration is 1 minute, and then, in the process of reading the image frames of the target video frame by frame, the current first image corresponding to the target video is stored into the image set at intervals of 1 minute.
And when the number of the first images stored in the image set is positive integral multiple of a third preset value, determining a second image based on the last M first images in the image set. Please refer to the following embodiments for a specific technical solution of how to determine the second image based on the last M first images in the image set.
In an alternative embodiment, the third predetermined value is the same as the value of M. That is, in the case where the number of first images stored in the image set is a positive integer multiple of M, the second image is determined based on the last M first images in the image set.
For example, if M is 10 and 10 first images are stored in the image set, the second image is determined based on the 10 first images. And if 20 first images are stored in the image set, determining a second image based on the 11 th first image to the 20 th first image in the image set.
In another alternative embodiment, the third preset value differs from the value of M.
For example, if M is 10 and the third preset value is 15, and if 15 first images are stored in the image set, the second image is determined based on the 6 th to 15 th first images in the image set.
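The image-set bookkeeping and second-image computation described above can be sketched as follows. The per-minute storage interval is replaced by a simple loop, and M = 10 with a third preset value of 15 mirrors the example; all other names are illustrative:

```python
import numpy as np

def update_image_set(image_set, first_image, m, third_preset):
    """Append the current first image to the image set; whenever the
    set size reaches a positive multiple of the third preset value,
    build the second image from the last M first images."""
    image_set.append(first_image)
    if len(image_set) % third_preset == 0:
        recent = image_set[-m:]                      # last M first images
        acc = np.stack(recent).astype(np.float64).sum(axis=0)
        return (acc / m).astype(np.uint8)            # second pixel average
    return None

image_set, second = [], None
for i in range(15):  # stand-in for "one first image stored per minute"
    img = np.full((2, 2), i, dtype=np.uint8)
    out = update_image_set(image_set, img, m=10, third_preset=15)
    if out is not None:
        second = out
```

With 15 stored first images whose values run 0 through 14, the last 10 (values 5 through 14) average to 9.5, which truncates to 9 as uint8, matching the "6th to 15th first images" selection in the example.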
Optionally, the determining a second image based on the last M first images in the image set includes:
performing accumulation calculation on the M pixel values corresponding to the M first images one by one to obtain a second accumulation result;
dividing the second accumulation result by M to obtain second pixel average values corresponding to the M first images;
and constructing a second image, wherein the pixel value of the second image is the average value of the second pixels.
In this embodiment, second pixel average values corresponding to the M first images are calculated, wherein the second pixel average values can be calculated in the following manner.
And reading the pixel value corresponding to each first image to obtain M pixel values. And performing accumulation calculation on the M pixel values to obtain a second accumulation result. And performing division operation on the second accumulation result and M, and taking the division result as a second pixel average value.
Further, an image is constructed, and the pixel value of the image is set as the second pixel average value, so as to obtain a second image used for representing the background of the target video, and the second image is also called as the background image of the target video.
In this embodiment, the M first images that are updated recently by the target video are used to determine the background image of the target video, and since the second image is obtained based on the average value of the second pixels of the M first images, the moving target that stays in the background of the target image for a short time can be filtered out, and the accuracy of the detection of the carry-over is improved.
The manner of performing differentiation processing on the target image and of determining the remnant region in the target image is described below. Optionally, the performing differentiation processing on the target image and determining a remnant region in the target image includes:
performing binarization processing and expansion processing on the target image to generate at least one target area in the target image;
and determining a target region whose area is larger than a preset threshold as a remnant region.
In this embodiment, the target image is binarized. It should be understood that image binarization sets the gray value of each pixel point to either 0 or 255, so that the whole image shows an obvious black-and-white effect, yielding a binarized image that reflects the overall and local characteristics of the image. The binarized target image is then dilated; image dilation adds pixel values at the edges of regions in the image so as to expand them. In this way, the whole image consists of only the two colors white and black, and, optionally, the region in which the black pixels are located may be determined as a target region.
In this embodiment, a preset threshold is also set. The area of each target region is detected: if the area of a target region is larger than the preset threshold, this indicates that a remnant exists in the target region, and the target region is determined to be a remnant region; if the area of a target region is smaller than or equal to the preset threshold, the target region is a noise region in the target image.
In this embodiment, the remnant region is highlighted in the target image by binarization and dilation, and the remnant region can then be quickly determined from the area of each target region, improving the efficiency of remnant detection.
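A NumPy-only sketch of the binarization, dilation, and area-threshold steps, assuming a grayscale difference image; the threshold values are illustrative, and in practice cv2.threshold, cv2.dilate, and cv2.findContours would typically be used:

```python
import numpy as np
from collections import deque

def remnant_regions(target_image, binarize_thresh=50, area_thresh=10):
    """Binarize the difference image, dilate with a 3x3 kernel, then
    keep connected regions whose area exceeds the preset threshold;
    smaller regions are treated as noise. Returns the kept areas."""
    binary = (target_image > binarize_thresh).astype(np.uint8)
    h, w = binary.shape
    # 3x3 dilation: each pixel becomes the max of its neighbourhood
    padded = np.pad(binary, 1)
    dilated = np.zeros_like(binary)
    for dy in range(3):
        for dx in range(3):
            dilated = np.maximum(dilated, padded[dy:dy + h, dx:dx + w])
    # 4-connected component areas via breadth-first search
    seen = np.zeros_like(dilated, dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if dilated[y, x] and not seen[y, x]:
                area, q = 0, deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and dilated[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if area > area_thresh:   # keep only remnant-sized regions
                    regions.append(area)
    return regions

target = np.zeros((8, 8), dtype=np.uint8)
target[2:4, 2:4] = 200   # a 2x2 remnant (dilates to 4x4 = area 16)
target[6, 6] = 200       # single-pixel noise (dilates to 3x3 = area 9)
regions = remnant_regions(target)
```

The 2x2 remnant survives the area filter while the isolated noise pixel, even after dilation, does not.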
Optionally, after the target video is obtained, the method includes:
reducing the resolution corresponding to each image frame in the target video;
and carrying out gray processing and Gaussian filtering processing on the image frame with the reduced resolution.
In this embodiment, after the target video is acquired, the resolution corresponding to each image frame in the target video is reduced, and an optional implementation manner is that the resolution of each image frame is reduced to one tenth of the original resolution.
Gray processing and Gaussian filtering are then performed on the reduced-resolution image frames. Gray processing means setting the R, G, and B values of each pixel point in an image frame to the same value. Optionally, a Gaussian smoothing kernel may be used to perform the Gaussian filtering on the image frames.
In the embodiment, the resolution of each image frame is reduced, and the image frames are subjected to gray processing and gaussian filtering processing to suppress noise in the image frames, so that the interference of the noise in the image frames on the detection result of the carry-over object is eliminated, and the accuracy of the carry-over object detection is improved.
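A minimal sketch of this preprocessing, assuming an RGB frame. Plain subsampling stands in for a proper one-tenth rescale, equal-weight channel averaging stands in for gray processing, and a separable 3-tap kernel stands in for the Gaussian filter; cv2.resize, cv2.cvtColor, and cv2.GaussianBlur would normally be used:

```python
import numpy as np

def preprocess(frame_rgb):
    """Reduce resolution (keep every 10th pixel as a stand-in for
    scaling to one tenth), convert to grayscale, and apply a small
    separable Gaussian filter to suppress noise."""
    small = frame_rgb[::10, ::10, :]
    gray = small.mean(axis=2)                   # R, G, B weighted equally
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0    # 1D Gaussian, sums to 1
    blur_rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, gray)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blur_rows)
    return blurred.astype(np.uint8)

frame = np.full((100, 100, 3), 120, dtype=np.uint8)  # uniform test frame
out = preprocess(frame)
```

On a uniform frame the interior is unchanged by the normalized kernel, which is a quick sanity check that the filter only smooths rather than rescales intensities.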
As shown in fig. 2, an embodiment of the present invention further provides a remnant detection apparatus 200, including:
an obtaining module 201, configured to obtain a target video, and read image frames from the target video frame by frame;
an updating module 202, configured to update a first image corresponding to the target video based on N recently read image frames in a process of reading the image frames;
a first determining module 203, configured to determine a second image based on the latest updated M first images of the target video;
the operation module 204 is configured to perform a difference operation on the current first image and the second image corresponding to the target video to obtain a target image;
a second determining module 205, configured to perform differentiation processing on the target image, and determine a carry-over area in the target image.
Optionally, the update module 202 is specifically configured to:
accumulating and calculating N pixel values corresponding to the N image frames one by one to obtain a first accumulation result;
dividing the first accumulation result by N to obtain a first pixel average value corresponding to the N image frames;
and constructing a first image, wherein the pixel value of the first image is the first pixel average value.
Optionally, the first determining module 203 is specifically configured to:
storing a first image corresponding to the target video to a preset image set at intervals of preset duration;
and under the condition that the number of the first images stored in the image set is a positive integer multiple of a third preset value, determining a second image based on the last M first images in the image set.
Optionally, the first determining module 203 is specifically configured to:
performing accumulation calculation on the M pixel values corresponding to the M first images one by one to obtain a second accumulation result;
dividing the second accumulation result by M to obtain second pixel average values corresponding to the M first images;
and constructing a second image, wherein the pixel value of the second image is the average value of the second pixels.
Optionally, the second determining module 205 is specifically configured to:
performing binarization processing and expansion processing on the target image to generate at least one target area in the target image;
and determining a target region whose area is larger than a preset threshold as a remnant region.
Optionally, the carry-over detection apparatus 200 further comprises:
the first processing module is used for reducing the resolution corresponding to each image frame in the target video;
and the second processing module is used for carrying out gray processing and Gaussian filtering processing on the image frame with the reduced resolution.
The embodiment of the present invention further provides an electronic device, as shown in fig. 3, including a processor 301, a communication interface 302, a memory 303, and a communication bus 304, where the processor 301, the communication interface 302, and the memory 303 complete mutual communication through the communication bus 304.
A memory 303 for storing a computer program;
a processor 301 configured to execute a program stored in a memory 303, wherein when the computer program is executed by the processor 301, the computer program is configured to acquire a target video and read image frames from the target video frame by frame;
in the process of reading the image frames, updating a first image corresponding to the target video based on the recently read N image frames;
determining a second image based on the M first images which are updated recently by the target video;
performing difference operation on a current first image and the second image corresponding to the target video to obtain a target image;
and performing differentiation processing on the target image, and determining a carry-over area in the target image.
Optionally, when being executed by the processor 301, the computer program is further configured to perform accumulation calculation on N pixel values corresponding to the N image frames one by one, so as to obtain a first accumulation result;
dividing the first accumulation result by N to obtain a first pixel average value corresponding to the N image frames;
and constructing a first image, wherein the pixel value of the first image is the first pixel average value.
Optionally, when being executed by the processor 301, the computer program is further configured to store the first image corresponding to the target video to a preset image set every preset time interval;
and under the condition that the number of the first images stored in the image set is a positive integer multiple of a third preset value, determining a second image based on the last M first images in the image set.
Optionally, when being executed by the processor 301, the computer program is further configured to perform accumulation calculation on M pixel values corresponding to the M first images one to one, so as to obtain a second accumulation result;
dividing the second accumulation result by M to obtain second pixel average values corresponding to the M first images;
and constructing a second image, wherein the pixel value of the second image is the average value of the second pixels.
Optionally, the computer program, when executed by the processor 301, is further configured to perform binarization processing and dilation processing on the target image, so as to generate at least one target region in the target image;
and determining a target region whose area is larger than a preset threshold as a remnant region.
Optionally, the computer program, when executed by the processor 301, is further configured to reduce a resolution corresponding to each image frame in the target video;
and carrying out gray processing and Gaussian filtering processing on the image frame with the reduced resolution.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The memory may include a random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which instructions are stored; when the instructions are run on a computer, the computer is caused to execute the remnant detection method according to any one of the above embodiments.
As shown in fig. 4, an embodiment of the present invention further provides an intelligent trash can 400, where the intelligent trash can 400 includes a camera 401 and a processing module 402;
the camera 401 is used for acquiring a target video;
the processing module 402 is configured to read image frames from the target video frame by frame;
in the process of reading the image frames, updating a first image corresponding to the target video based on the N most recently read image frames;
determining a second image based on the M most recently updated first images of the target video;
performing a difference operation on the current first image corresponding to the target video and the second image to obtain a target image;
and performing differentiation processing on the target image to determine a remnant region in the target image.
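The difference operation in the steps above can be sketched as a per-pixel absolute difference; this numpy one-liner is an illustrative reading of the disclosure, not a mandated implementation:

```python
import numpy as np

def difference_image(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Per-pixel absolute difference between the current first image
    (scene) and the second image (background). Large values mark pixels
    that deviate from the slowly-updated background and are therefore
    remnant candidates for the later binarization step."""
    return np.abs(first.astype(np.float64) - second.astype(np.float64))
```

Taking the absolute value makes the operation symmetric, so an object that is brighter or darker than the background is detected equally.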
In this embodiment, the camera 401 may be installed on the intelligent trash can 400; optionally, it may be installed at the can cover of the body of the intelligent trash can 400, and it is used to acquire the target video. The processing module 402 of the intelligent trash can 400 runs the remnant detection method described in any of the above embodiments, so that remnant detection is performed on the surveillance video captured by the camera 401; further, trash appearing in the surveillance video can be regarded as a remnant, so that littering behavior can be monitored.
The intelligent trash can 400 provided in this embodiment of the application can implement each process of the method embodiment of fig. 1; to avoid repetition, details are not described herein again.
In yet another embodiment, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to perform the remnant detection method of any one of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment is described briefly because it is substantially similar to the method embodiment; for relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A remnant detection method, comprising:
acquiring a target video, and reading image frames from the target video frame by frame;
in the process of reading the image frames, updating a first image corresponding to the target video based on the N most recently read image frames; the first image is used to represent the current scene of the target video, and N is a first preset value;
determining a second image based on the M most recently updated first images of the target video; the second image is used to represent the background of the target video, and M is a second preset value;
performing a difference operation on the current first image corresponding to the target video and the second image to obtain a target image;
and performing differentiation processing on the target image to determine a remnant region in the target image.
2. The method according to claim 1, wherein updating the first image corresponding to the target video based on the N most recently read image frames comprises:
performing an accumulation calculation on the N pixel values corresponding one-to-one to the N image frames to obtain a first accumulation result;
dividing the first accumulation result by N to obtain a first pixel average value corresponding to the N image frames;
and constructing a first image, wherein the pixel value of the first image is the first pixel average value.
3. The method of claim 1, wherein determining the second image based on the M most recently updated first images of the target video comprises:
storing a first image corresponding to the target video to a preset image set at intervals of a preset duration, the first images in the image set being arranged in order of storage time;
and under the condition that the number of the first images stored in the image set is a positive integer multiple of a third preset value, determining a second image based on the last M first images in the image set.
4. The method of claim 3, wherein determining the second image based on the last M first images in the set of images comprises:
performing an accumulation calculation on the M pixel values corresponding one-to-one to the M first images to obtain a second accumulation result;
dividing the second accumulation result by M to obtain second pixel average values corresponding to the M first images;
and constructing a second image, wherein the pixel value of the second image is the second pixel average value.
5. The method of claim 1, wherein performing differentiation processing on the target image and determining the remnant region in the target image comprises:
performing binarization processing and expansion processing on the target image to generate at least one target area in the target image;
and determining a target region whose area is larger than a preset threshold as the remnant region.
6. The method of claim 1, wherein after the target video is acquired, the method further comprises:
reducing the resolution of each image frame in the target video;
and performing grayscale processing and Gaussian filtering on the image frames with reduced resolution.
7. A remnant detection device, comprising:
the acquisition module is used for acquiring a target video and reading image frames from the target video frame by frame;
the updating module is configured to update, in the process of reading the image frames, a first image corresponding to the target video based on the N most recently read image frames; the first image is used to represent the current scene of the target video, and N is a first preset value;
the first determining module is configured to determine a second image based on the M most recently updated first images of the target video; the second image is used to represent the background of the target video, and M is a second preset value;
the operation module is configured to perform a difference operation on the current first image corresponding to the target video and the second image to obtain a target image;
and the second determining module is configured to perform differentiation processing on the target image and determine a remnant region in the target image.
8. An intelligent trash can, characterized by comprising a camera and a processing module;
the camera is used for acquiring a target video;
the processing module is used for reading image frames from the target video frame by frame;
in the process of reading the image frames, updating a first image corresponding to the target video based on the N most recently read image frames; the first image is used to represent the current scene of the target video, and N is a first preset value;
determining a second image based on the M most recently updated first images of the target video; the second image is used to represent the background of the target video, and M is a second preset value;
performing a difference operation on the current first image corresponding to the target video and the second image to obtain a target image;
and performing differentiation processing on the target image to determine a remnant region in the target image.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the remnant detection method according to any one of claims 1 to 6 when executing the program stored in the memory.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the remnant detection method according to any one of claims 1 to 6.
CN202110986893.1A 2021-08-26 2021-08-26 Remnant detection method, related device, and storage medium Pending CN113673454A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110986893.1A CN113673454A (en) 2021-08-26 2021-08-26 Remnant detection method, related device, and storage medium

Publications (1)

Publication Number Publication Date
CN113673454A true CN113673454A (en) 2021-11-19

Family

ID=78546543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110986893.1A Pending CN113673454A (en) 2021-08-26 2021-08-26 Remnant detection method, related device, and storage medium

Country Status (1)

Country Link
CN (1) CN113673454A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116246215A (en) * 2023-05-11 2023-06-09 小手创新(杭州)科技有限公司 Method for identifying new articles based on visual algorithm, barrel cover and intelligent recycling bin
CN116246215B (en) * 2023-05-11 2024-01-09 小手创新(杭州)科技有限公司 Method for identifying new articles based on visual algorithm, barrel cover and intelligent recycling bin
CN117854256A (en) * 2024-03-05 2024-04-09 成都理工大学 Geological disaster monitoring method based on unmanned aerial vehicle video stream analysis
CN117854256B (en) * 2024-03-05 2024-06-11 成都理工大学 Geological disaster monitoring method based on unmanned aerial vehicle video stream analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination