CN115083008A - Moving object detection method, device, equipment and storage medium - Google Patents

Moving object detection method, device, equipment and storage medium

Info

Publication number
CN115083008A
CN115083008A
Authority
CN
China
Prior art keywords
image
detected
frame
frame image
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110268735.2A
Other languages
Chinese (zh)
Inventor
苗亮亮
黄鑫辰
刘洋
武耀文
马博闻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202110268735.2A
Publication of CN115083008A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G06T7/136: Segmentation; Edge detection involving thresholding

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a moving target detection method, apparatus, device, and storage medium, and relates to the field of image processing. The method comprises the following steps: acquiring a frame image to be detected and the images of a predetermined number of frames before the frame image to be detected; obtaining an initial background image according to those preceding images; differencing the frame image to be detected with the initial background image to obtain a difference image of the frame image to be detected; when it is determined, based on the difference image, that the frame image to be detected contains a moving target, acquiring a binarized image of the difference image; performing expansion (dilation) convolution processing on the binarized image to obtain a repaired image of the difference image; and identifying the moving target based on the repaired image. The method improves the accuracy of moving target detection methods suitable for real-time detection.

Description

Moving object detection method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a moving object detection method, apparatus, device, and readable storage medium.
Background
Moving target detection, also called moving object detection, can be applied, for example, to automatic alarming in an unattended video monitoring system. In such a system, multiple frames of images are acquired by a camera at different frame rates, a subset of specific images is selected and computed with a specific algorithm, and it is judged whether the changes in the images are caused by a moving object.
When detecting a moving object, a key issue is how to segment the moving object from the background. Moving object detection methods in the related art include the optical flow method, the background subtraction method, and the time difference method. The optical flow method uses the change of pixels across frames in the time domain and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby computes the motion of objects between adjacent frames; it is computationally expensive and thus unfavorable for real-time application. Background subtraction establishes a background image from historical data and then detects motion regions from the difference between the current image and the background image, but it requires stable lighting, and detection accuracy drops when the light changes. The time difference method differences adjacent frames of a continuous image sequence and thresholds the result to extract motion regions, but complete moving object information is hard to extract, so the accuracy of subsequent target identification is low.
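The hole-forming weakness of the time difference method can be seen in a minimal NumPy sketch (the frame contents and the threshold value are illustrative, not from the disclosure):

```python
import numpy as np

def temporal_difference(prev_frame, curr_frame, threshold=25):
    """Naive time (inter-frame) difference: threshold the absolute
    pixel difference between two consecutive grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = motion candidate

# A flat background with a bright 2x2 "object" that moves one pixel right:
prev = np.zeros((4, 4), dtype=np.uint8)
prev[1:3, 0:2] = 200
curr = np.zeros((4, 4), dtype=np.uint8)
curr[1:3, 1:3] = 200
mask = temporal_difference(prev, curr)
# Column 1 is covered by the object in both frames, so its difference is
# zero: a hole appears inside the object and only 4 edge pixels are flagged.
```

Such incomplete masks are why the method below follows the difference step with expansion convolution and edge repair before identification.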
As described above, how to improve the accuracy of the moving object detection method suitable for real-time detection is an urgent problem to be solved.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure and therefore it may contain information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a moving object detection method, apparatus, device and readable storage medium, which at least partially overcome the problem of low accuracy of a moving object detection method suitable for real-time detection in the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, there is provided a moving object detection method including: acquiring a frame image to be detected and the images of a predetermined number of frames before the frame image to be detected; obtaining an initial background image according to the images of the predetermined number of frames before the frame image to be detected; differencing the frame image to be detected with the initial background image to obtain a difference image of the frame image to be detected; when it is determined, based on the difference image of the frame image to be detected, that the frame image to be detected contains a moving target, acquiring a binarized image of the difference image of the frame image to be detected; performing expansion convolution processing on the binarized image of the difference image of the frame image to be detected to obtain a repaired image of the difference image of the frame image to be detected; and identifying the moving target based on the repaired image of the difference image of the frame image to be detected.
According to an embodiment of the present disclosure, the obtaining an initial background image according to an image of a predetermined number of frames before the frame image to be detected includes: respectively acquiring the pixel value of each frame image in the images of the preset frame number before the frame image to be detected; and carrying out average calculation on the pixel values of the images of the preset frame number before the frame image to be detected to obtain the initial background image.
According to an embodiment of the present disclosure, the difference image of the frame image to be detected includes a gray difference image of the frame image to be detected; the step of obtaining a difference image of the frame image to be detected by differentiating the frame image to be detected and the initial background image comprises: acquiring a gray level image of a frame image to be detected and a gray level image of an initial background image; carrying out difference on the gray level image of the frame image to be detected and the gray level image of the initial background image to obtain a gray level difference image of the frame image to be detected; when the frame image to be detected is determined to contain the moving target based on the difference image of the frame image to be detected, acquiring the binary image of the difference image of the frame image to be detected comprises the following steps: obtaining the number of pixels with zero gray value in the gray difference image of the frame image to be detected; when the frame image to be detected contains a moving target according to the number of pixels with zero gray values in the gray difference image of the frame image to be detected, acquiring a gray threshold matrix of the gray difference image of the frame image to be detected; and carrying out binarization processing on the gray level difference image of the frame image to be detected according to the gray level threshold matrix of the gray level difference image of the frame image to be detected to obtain a binarized image of the difference image of the frame image to be detected.
According to an embodiment of the present disclosure, the method further comprises: when the frame image to be detected does not contain a moving target based on the difference image of the frame image to be detected, updating an initial background image according to the image of the preset frame number before the frame image to be detected and the frame image to be detected; and detecting a moving target of the next frame image of the frame images to be detected based on the updated initial background image.
According to an embodiment of the present disclosure, the performing an expansion convolution process on the binarized image of the difference image of the frame image to be detected to obtain a restored image of the difference image of the frame image to be detected includes: performing expansion convolution processing on the binary image of the difference image of the frame image to be detected to obtain an expanded convolution binary image; and performing edge restoration on the expanded and convolved binary image to obtain a restored image of the difference image of the frame image to be detected.
According to an embodiment of the present disclosure, the identifying the moving object based on the repaired image of the difference image of the frame image to be detected includes: carrying out edge detection on the repaired image of the difference image of the frame image to be detected to obtain an interested area; and identifying the region of interest and determining the moving target.
According to an embodiment of the present disclosure, the identifying the region of interest and determining the moving object includes: acquiring identification parameters of the region of interest, wherein the identification parameters comprise at least one of the number of pixel points of the region of interest and the area of the region of interest; and determining the moving target to be a person, an animal or a vehicle according to the identification parameters of the region of interest.
According to still another aspect of the present disclosure, there is provided a moving object detecting apparatus including: the image acquisition module is used for acquiring a frame image to be detected and images of a preset frame number before the frame image to be detected; the background acquisition module is used for acquiring an initial background image according to the image of a preset frame number before the frame image to be detected; the difference obtaining module is used for carrying out difference on the frame image to be detected and the initial background image to obtain a difference image of the frame image to be detected; the binarization module is used for acquiring a binarization image of the difference image of the frame image to be detected when the frame image to be detected is determined to contain a moving target based on the difference image of the frame image to be detected; the image restoration module is used for performing expansion convolution processing on the binary image of the difference image of the frame image to be detected so as to obtain a restored image of the difference image of the frame image to be detected; and the target identification module is used for identifying the moving target based on the repaired image of the difference image of the frame image to be detected.
According to an embodiment of the present disclosure, the context obtaining module includes: the pixel value acquisition module is used for respectively acquiring the pixel value of each frame image in the images of the preset frame number before the frame image to be detected; and the background calculation module is used for carrying out average calculation on the pixel values of the images of the preset frame number before the frame image to be detected to obtain the initial background image.
According to an embodiment of the present disclosure, the difference image of the frame image to be detected includes a gray difference image of the frame image to be detected; the difference obtaining module includes: the gray level image acquisition module is used for acquiring a gray level image of a frame image to be detected and a gray level image of an initial background image; the difference calculation module is used for carrying out difference on the gray level image of the frame image to be detected and the gray level image of the initial background image to obtain a gray level difference image of the frame image to be detected; the binarization module comprises: a black block obtaining module, configured to obtain the number of pixels with a gray value of zero in a gray difference image of the frame image to be detected; the threshold value obtaining module is used for obtaining a gray threshold value matrix of the gray difference image of the frame image to be detected when the frame image to be detected contains a moving target according to the number of pixels with the gray values of zero in the gray difference image of the frame image to be detected; and the binarization processing module is used for carrying out binarization processing on the gray level difference image of the frame image to be detected according to the gray level threshold matrix of the gray level difference image of the frame image to be detected so as to obtain a binarization image of the difference image of the frame image to be detected.
According to an embodiment of the present disclosure, the apparatus further comprises: the background updating module is used for updating an initial background image according to the image of the preset frame number before the frame image to be detected and the frame image to be detected when the frame image to be detected does not contain the moving target based on the difference image of the frame image to be detected; the device is further used for detecting a moving target of a next frame image of the frame images to be detected based on the updated initial background image.
According to an embodiment of the present disclosure, the image inpainting module includes: the expansion convolution processing module is used for performing expansion convolution processing on the binary image of the difference image of the frame image to be detected to obtain the binary image after the expansion convolution; and the edge repairing module is used for repairing the edge of the expanded and convolved binary image to obtain a repaired image of the difference image of the frame image to be detected.
According to an embodiment of the present disclosure, the target recognition module includes: the edge detection module is used for carrying out edge detection on the repaired image of the difference image of the frame image to be detected to obtain an interested area; and the interested area identification module is used for identifying the interested area and determining the moving target.
According to an embodiment of the present disclosure, the region of interest identification module includes: an identification parameter obtaining module, configured to obtain an identification parameter of the region of interest, where the identification parameter includes at least one of the number of pixels in the region of interest and an area of the region of interest; and the moving target determining module is used for determining that the moving target is a person, an animal or a vehicle according to the identification parameters of the region of interest.
According to yet another aspect of the present disclosure, there is provided an apparatus comprising: a memory, a processor and executable instructions stored in the memory and executable in the processor, the processor implementing any of the methods described above when executing the executable instructions.
According to yet another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement any of the methods described above.
The moving target detection method provided by the embodiment of the disclosure includes the steps of obtaining an initial background image according to an image of a preset frame number before a frame image to be detected, then carrying out difference on the frame image to be detected and the initial background image to obtain a difference image of the frame image to be detected, obtaining a binary image of the difference image of the frame image to be detected when the frame image to be detected is determined to contain a moving target based on the difference image of the frame image to be detected, then carrying out expansion convolution processing on the binary image of the difference image of the frame image to be detected to obtain a restored image of the difference image of the frame image to be detected, and then identifying the moving target based on the restored image of the difference image of the frame image to be detected, so that the accuracy of the moving target detection method suitable for real-time detection can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
Fig. 1 shows a schematic diagram of a system architecture in an embodiment of the disclosure.
Fig. 2 shows a flowchart of a moving object detection method in an embodiment of the present disclosure.
Fig. 3 is a schematic diagram illustrating a processing procedure of step S204 shown in fig. 2 in an embodiment.
Fig. 4 is a schematic diagram illustrating a processing procedure of step S208 shown in fig. 2 in an embodiment.
Fig. 5 is a schematic diagram illustrating a processing procedure of step S212 shown in fig. 2 in an embodiment.
Fig. 6 is a flowchart of a moving object detection system based on the methods shown in figs. 2 to 5.
Fig. 7 is a schematic view of a monitoring interface of a moving object detection system according to an embodiment of the disclosure.
Fig. 8 shows a block diagram of a moving object detection apparatus in an embodiment of the present disclosure.
Fig. 9 shows a block diagram of another moving object detecting apparatus in the embodiment of the present disclosure.
Fig. 10 shows a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the present disclosure can be practiced without one or more of the specific details, or with other methods, apparatus, steps, etc. In other instances, well-known structures, methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present disclosure, "a plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. The symbol "/" generally indicates that the former and latter associated objects are in an "or" relationship.
In the present disclosure, unless otherwise expressly specified or limited, the terms "connected" and the like are to be construed broadly, e.g., as meaning electrically connected or in communication with each other; may be directly connected or indirectly connected through an intermediate. The specific meaning of the above terms in the present disclosure can be understood by those of ordinary skill in the art as appropriate.
As described above, in the related art, the optical flow method has high computational complexity and a heavy computation load, places high demands on hardware, and is unfavorable for real-time application; background subtraction is easily affected by natural factors such as illumination changes and weather, and is sensitive to environmental noise; and the time difference method cannot easily extract complete moving object information, with holes forming when targets overlap, so the segmentation result is not connected and accuracy during subsequent target identification is low.
Therefore, the present disclosure provides a moving object detection method, which obtains an initial background image according to an image of a predetermined number of frames before a frame image to be detected, then performs a difference between the frame image to be detected and the initial background image to obtain a difference image of the frame image to be detected, obtains a binary image of the difference image of the frame image to be detected when it is determined that the frame image to be detected contains a moving object based on the difference image of the frame image to be detected, then performs an expansion convolution process on the binary image of the difference image of the frame image to be detected to obtain a restored image of the difference image of the frame image to be detected, and then identifies the moving object based on the restored image of the difference image of the frame image to be detected, thereby improving the accuracy of the moving object detection method suitable for real-time detection.
Fig. 1 illustrates an exemplary system architecture 10 to which the moving object detection method or moving object detection apparatus of the present disclosure may be applied.
As shown in fig. 1, system architecture 10 may include a terminal device 102, a network 104, a server 106, and a database 108. The terminal device 102 may be various electronic devices having a display screen and supporting input and output, including but not limited to a smart phone, a tablet computer, a laptop computer, a desktop computer, a wearable device, a virtual reality device, a smart home, and the like, and may also be an image acquiring device, a video acquiring device, such as a camera, and the like, for acquiring an image to be detected. Network 104 is the medium used to provide communication links between terminal device 102 and server 106. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. The server 106 may be a server or a cluster of servers, etc. that provide various services. The database 108 may be a large database software installed on a server or a small database software installed on a computer for storing data.
The terminal device 102 may interact with the server 106 and the database 108 via the network 104 to receive or transmit data and the like. For example, the terminal device 102 captures an image, and uploads the captured multi-frame image to the server 106 through the network 104 for object detection, or transmits the captured multi-frame image to the server 106 through the network 104 for storage. For another example, the user downloads the target detection result from the server 106 to the terminal device 102 through the network 104, and then processes the target alarm through the monitoring software on the terminal device 102.
Data may also be received from database 108 or sent to database 108, etc. at server 106 via network 104. For example, server 106 may be a background processing server for retrieving a plurality of frames of images from database 108 over network 104 to synthesize an initial background image. Also for example, the server 106 may be configured to transmit the images of undetected moving objects to the database 108 for storage for updating the initial background image.
It should be understood that the number of terminal devices, networks, servers, and databases in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, servers, and databases, as desired for implementation.
FIG. 2 is a flow diagram illustrating a moving object detection method according to an exemplary embodiment. The method shown in fig. 2 may be applied to, for example, a server side of the system, and may also be applied to a terminal device of the system.
Referring to fig. 2, a method 20 provided by an embodiment of the present disclosure may include the following steps.
In step S202, a frame image to be detected and an image a predetermined number of frames before the frame image to be detected are acquired. In order to detect the moving target in the scene, the multi-frame images can be analyzed, and the moving target detection is carried out according to the change of pixels in the multi-frame images. The sizes of all the frames of the multi-frame video are the same. When a plurality of frame images of the shot video are imported, the images of the preset frame number before the frame image to be detected can be imported to obtain an initial background image, and then the frame image to be detected is imported for further analysis; the frame image to be detected and the image of the predetermined frame number before the frame image to be detected may also be imported simultaneously, and the corresponding image is selected when processing is performed, which does not limit the present disclosure.
In some embodiments, since input videos differ in resolution, processing the whole image may waste resources. Therefore, when the video images are imported, each frame image may be preprocessed to locate the changed region, which is then converted to the window size that needs to be processed. The method can thus adapt to windows of different sizes, and after preprocessing one or more windows may be selected from the image for subsequent processing.
In step S204, an initial background image is obtained from an image a predetermined number of frames before the frame image to be detected. In order to detect a moving object in a frame image to be detected, the frame image to be detected can be compared with an initial background image, the initial background image can be obtained according to continuous frame images of a preset number of frames before the frame image to be detected, each frame image in the continuous frame images can be calculated, and a proper amount of images can be selected for calculation by skipping frames from the continuous frame images when the frame rate is high.
In some embodiments, for example, the initial background image may be obtained by averaging the pixel values of the consecutive frame images of the predetermined number of frames before the frame image to be detected; a specific embodiment may refer to fig. 3.
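The averaging just described can be sketched as a plain per-pixel mean over the preceding frames (frame values are illustrative):

```python
import numpy as np

def initial_background(frames):
    """Form the initial background image as the per-pixel mean of the
    preceding frames (averaged in float to avoid uint8 overflow)."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

frames = [np.full((2, 2), v, dtype=np.uint8) for v in (10, 20, 30)]
bg = initial_background(frames)  # every pixel is 20.0
```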
In other embodiments, for example, the initial background image may be obtained by taking a weighted moving average of the pixel values of the consecutive frame images of the predetermined number of frames before the frame image to be detected; that is, on the basis of fig. 3, the frame pixel matrices B_i may be weighted with exponentially decaying weights and summed to obtain the initial background image.
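The exponentially decaying weighting can be sketched as a standard running average (the smoothing factor `alpha` is a hypothetical parameter, not specified by the disclosure):

```python
import numpy as np

def running_average_background(frames, alpha=0.5):
    """Exponentially weighted running average of frame pixel values:
    recent frames carry more weight and older frames decay geometrically.
    `alpha` is an assumed smoothing factor for illustration."""
    bg = frames[0].astype(np.float64)
    for f in frames[1:]:
        bg = alpha * f.astype(np.float64) + (1.0 - alpha) * bg
    return bg

frames = [np.zeros((2, 2), dtype=np.uint8), np.full((2, 2), 100, dtype=np.uint8)]
bg = running_average_background(frames)  # each pixel: 0.5*100 + 0.5*0 = 50.0
```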
In step S206, the frame image to be detected and the initial background image are differentiated to obtain a difference image of the frame image to be detected. The moving target does not appear at the same position in the multi-frame image because of moving, so that the initial background image obtained by averaging the multi-frame images before the frame image to be detected can be regarded as having no moving target, and the difference between the frame image to be detected and the initial background image can be used for judging whether the moving target exists in the frame image to be detected.
In some embodiments, for example, the grayscale image of the frame image to be detected and the grayscale image of the initial background image may be obtained, and the grayscale difference image of the frame image to be detected is obtained by differentiating the grayscale image of the frame image to be detected and the grayscale image of the initial background image. The frame image to be detected is the n +1 frame image, n is a positive integer greater than 1, and B (n) ═ B nRed ,B nGreen ,B nBlue The pixel value matrix of the initial background image of the previous n frames of images is used as the pixel value matrix, and the gray level image B of the initial background image can be obtained by performing gray level processing on the RGB three-channel image of the initial background image nGray
B nGray =(B nRed *299+B nGreen *587+B nBlue *114+500)/1000 (1)
In the formula B nGray Is a background image after graying, B nRed 、B nGreen And B nBlue Is RGB three-channel data of the initial background image b (n). Extracting a frame image B (n +1) to be detected, and performing gray processing by using a method of formula (1) to obtain a gray image B (n+)Gray Subtracting the two gray level images, and then calculating by adopting a background subtraction method to obtain the gray level difference between the frame image to be detected and the initial background image as follows:
B diff =B (n+1)Gray -B nGray (2)
where B_diff is the grayscale difference image, representing the difference between the two grayscale images.
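As an illustrative sketch (not part of the patent), formulas (1) and (2) can be expressed in NumPy; the function names are assumptions, and an absolute value is added to the subtraction so that darkening as well as brightening registers as change:

```python
import numpy as np

def to_gray(rgb):
    # Integer grayscale conversion of formula (1):
    # gray = (R*299 + G*587 + B*114 + 500) / 1000
    r = rgb[..., 0].astype(np.int64)
    g = rgb[..., 1].astype(np.int64)
    b = rgb[..., 2].astype(np.int64)
    return (r * 299 + g * 587 + b * 114 + 500) // 1000

def gray_difference(frame_rgb, background_rgb):
    # Formula (2), with an absolute value added (a practical variant,
    # an assumption here) so negative changes also register.
    return np.abs(to_gray(frame_rgb) - to_gray(background_rgb))
```

The integer rounding term (+500, integer division by 1000) reproduces the fixed-point arithmetic of formula (1) exactly.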
In step S208, when it is determined that the frame image to be detected contains a moving target based on the difference image of the frame image to be detected, a binarized image of the difference image of the frame image to be detected is acquired. If the frame image to be detected contains the moving target, the difference image of the frame image to be detected and the initial background image can display the outline of the moving target. The difference image of the frame image to be detected can be binarized to eliminate noise in the difference image and identify the moving target.
In some embodiments, for example, when it is determined that the frame image to be detected includes a moving target based on the grayscale difference image of the frame image to be detected, a threshold processing may be performed on the grayscale difference image of the frame image to be detected to obtain a binary image, and a specific implementation manner may refer to fig. 4.
In some embodiments, for example, when it is determined based on the difference image that the frame image to be detected does not contain a moving target, the initial background image may be updated from the frame image to be detected together with the images of the predetermined number of frames before it; moving target detection on the next frame image is then performed based on the updated initial background image. If the gray values of the pixels of the grayscale difference image are all 0 and the image block is black, no moving object (moving target) exists in the frame image to be detected. That frame may then be added to the background set formed by the previous n frames, the average over the n+1 frames recomputed to update the initial background image, the next ((n+2)-th) frame extracted, and the process returned to step S204 to judge whether the (n+2)-th frame contains a moving object. Iteratively updating the background makes the background image increasingly reliable, and in turn makes moving object detection increasingly accurate.
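The iterative background update described above can be sketched as an incremental running mean; the function name and float accumulation below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def update_background(background, frame_count, new_frame):
    # When the current frame contains no moving object, fold it into the
    # background: B_{n+1} = (n * B_n + F_{n+1}) / (n + 1).
    # `background` is kept as float for precision.
    updated = (background * frame_count + new_frame.astype(np.float64)) / (frame_count + 1)
    return updated, frame_count + 1
```

This avoids re-averaging all stored frames on every update while producing the same mean.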
In step S210, the binarized image of the difference image of the frame image to be detected is subjected to dilated convolution processing to obtain a restored image of the difference image. The dilated convolution yields a post-convolution binarized image with redundant edge fragments removed; edge restoration is then performed on it to obtain the restored image of the difference image of the frame image to be detected. Dilated convolution addresses a problem in image segmentation: downsampling reduces image resolution and loses information, whereas applying dilated convolution enlarges the receptive field of the feature region, and the enlarged receptive field ensures that data information is not lost.
In some embodiments, the receptive field of the dilated convolution is given by the following equation:
F_(j+1) = (2^(j+2) − 1) × (2^(j+2) − 1) (3)
where F_(j+1) denotes the receptive field and j+1 the dilation rate. Repairing the edges of the image fragments yields a more complete image: the image is enlarged, the fragment edge regions are fused, and a reasonable edge contour of the moving object is obtained.
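As a sketch of formula (3), the receptive-field growth can be tabulated; the function name is an assumption:

```python
def dilated_receptive_field(j):
    # Formula (3): receptive field of stacked 3x3 dilated convolutions
    # whose dilation rate doubles per layer:
    #   F_{j+1} = (2^{j+2} - 1) x (2^{j+2} - 1)
    side = 2 ** (j + 2) - 1
    return (side, side)
```

For j = 0, 1, 2 this gives 3×3, 7×7, and 15×15, the classic exponential growth that lets a small kernel cover a large area without downsampling.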
In step S212, the moving target is identified based on the restored image of the difference image of the frame image to be detected. The restored image reflects the outline of the moving target and can be used for target identification, specifically judging the type of the target and processing it accordingly.
In some embodiments, the edge of the object in the repaired image may be extracted to obtain a region of interest, and then further determination may be performed, for a specific implementation, refer to fig. 5.
According to the moving target detection method provided by the embodiment of the disclosure, after the initial background image is obtained from the images of the predetermined number of frames before the frame image to be detected, the frame image to be detected is differenced with the initial background image to obtain its difference image, so that the background is updated in real time and recognition errors caused by an inaccurate background are avoided. When the frame image to be detected is determined to contain a moving target based on its difference image, the binarized image of the difference image is acquired, effectively removing the influence of small disturbances such as wind or camera shake on moving object detection. The binarized image is then subjected to dilated convolution processing to obtain the restored image of the difference image: a simple dilated convolution kernel processes the edge fragments and fuses and restores the object's edge image. Finally, the moving object is identified based on the restored image, making it easier to identify and improving the accuracy of this moving object detection method, which is suitable for real-time detection.
Fig. 3 is a schematic diagram illustrating a processing procedure of step S204 shown in fig. 2 in an embodiment. As shown in fig. 3, in the embodiment of the present disclosure, the step S204 may further include the following steps.
Step S2042, the pixel values of each frame image in the images of the predetermined number of frames before the frame image to be detected are respectively obtained. The pixel values may be Red (R), Green (G), Blue (B) three-channel pixel values.
Step S2044, average calculation is performed on the pixel values of the images of the predetermined number of frames before the frame image to be detected, and an initial background image is obtained.
In some embodiments, let B_i = {B_iRed, B_iGreen, B_iBlue} be the red, green, and blue pixel value matrix of the i-th frame image window, let the predetermined frame number be n, where n is a positive integer greater than 1, and let i ∈ [1, n] with i a positive integer. The pixel value matrix B_n of the initial background image can then be calculated by:
B_n = (1/n) · Σ_{i=1}^{n} B_i (4)
where B(n) = {B_nRed, B_nGreen, B_nBlue} is the pixel value matrix of the initial background image of the previous n frames.
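The per-pixel averaging of the previous n frames can be sketched in NumPy as follows; the function name and float conversion are illustrative assumptions:

```python
import numpy as np

def initial_background(frames):
    # Per-pixel mean of the first n frames: B_n = (1/n) * sum_i B_i.
    # `frames` is a list of HxWx3 arrays; float output preserves precision.
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)
```

A float background also makes later grayscale differencing less prone to integer wrap-around.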
According to the method for obtaining the initial background image provided by the embodiment of the disclosure, a sliding-window moving average over the multi-frame video images before the frame image to be detected averages the pixels that change between frames, so the background image can be updated in real time as different frames are detected, responding to changes in lighting and reducing sensitivity to environmental noise.
Fig. 4 is a schematic diagram illustrating a processing procedure of step S208 shown in fig. 2 in an embodiment. As shown in fig. 4, in the embodiment of the present disclosure, the step S208 may further include the following steps.
Step S2082, the number of pixels with a gray value of zero in the grayscale difference image of the frame image to be detected is obtained. These zero-valued pixels correspond to positions where the frame image to be detected and the initial background image have the same gray value; counting them gives the number of unchanged pixels, and the pixels whose gray values have changed can then be used to judge whether a moving target exists.
Step S2084, when it is determined from the number of pixels with a gray value of zero in the grayscale difference image that the frame image to be detected contains a moving target, the gray threshold matrix of the grayscale difference image of the frame image to be detected is obtained. After deriving the number of pixels with non-zero gray values from the number of zero-valued pixels, the former can be compared with a preset number threshold; if it exceeds the threshold, the frame image to be detected can be considered to contain a moving target. For example, when the gray-region threshold is set to 90% of the total number of pixels in the window, i.e., when the number of pixels with a gray value of zero exceeds ninety percent of the pixels in the window, the frame image to be detected can be considered to contain no moving object. When setting the threshold for the grayscale difference image, a threshold matrix T of the same size as B_diff can be established:
T(x, y) = τ, for every pixel position (x, y) (5)
where τ is the threshold, the size of the threshold matrix T is the same as the size of the grayscale difference image.
In some embodiments, for example, when establishing the threshold matrix T, threshold selection follows the bimodal-histogram principle: B_diff is represented by its gray histogram, the two peaks correspond to the background and the target, and the valley between the peaks is selected as the threshold.
In other embodiments, for example, a local threshold segmentation method may be used to obtain an optimal threshold matrix. For instance, the Sauvola algorithm, centered on each pixel of the grayscale image, dynamically calculates a per-pixel threshold from the mean and standard deviation of the gray levels in that pixel's neighborhood, yielding the matrix of per-pixel thresholds.
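For reference, the standard published Sauvola formula is T = m · (1 + k · (s/R − 1)), with local mean m, local standard deviation s, and typical parameters k ≈ 0.2, R = 128; the patent does not give this formula, so the following naive sliding-window sketch (with edge padding) is an illustration, not the claimed method:

```python
import numpy as np

def sauvola_threshold(gray, window=15, k=0.2, R=128.0):
    # Per-pixel Sauvola threshold: T = m * (1 + k * (s / R - 1)),
    # with m, s the mean and standard deviation of the window
    # centered on each pixel. Edge padding keeps the output full-size.
    pad = window // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    h, w = gray.shape
    T = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            m, s = patch.mean(), patch.std()
            T[y, x] = m * (1 + k * (s / R - 1))
    return T
```

Production code would use an integral-image formulation (or `skimage.filters.threshold_sauvola`) instead of this O(h·w·window²) loop.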
Step S2086, the grayscale difference image of the frame image to be detected is binarized according to its gray threshold matrix to obtain the binarized image of the difference image of the frame image to be detected. Each gray value in the grayscale difference image B_diff is compared with τ: if the pixel's gray value is greater than the threshold, it is set to 255; otherwise it is set to 0. The formula is as follows:
B_binary(r) = 255 if p(r) > τ; B_binary(r) = 0 if p(r) ≤ τ (6)
where B_binary is the binarized image, p(r) is the gray value of each pixel r of the grayscale difference image, and τ is the threshold. The binarized result is then output as the binarized image of the difference image of the frame image to be detected.
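The thresholding rule above maps directly to a vectorized sketch; the function name is an assumption:

```python
import numpy as np

def binarize(gray_diff, tau):
    # Pixels whose difference exceeds the threshold tau become 255
    # (foreground); all others become 0 (background).
    return np.where(gray_diff > tau, 255, 0).astype(np.uint8)
```

A per-pixel threshold matrix (e.g. from Sauvola) can be passed as `tau` unchanged, since `np.where` broadcasts element-wise.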
According to the difference image binarization method provided by the embodiment of the disclosure, the threshold processing is performed on the gray difference image when the frame image to be detected contains the moving target, so that a clearer moving target contour can be obtained, and the moving target can be identified.
Fig. 5 is a schematic diagram illustrating a processing procedure of step S212 shown in fig. 2 in an embodiment. As shown in fig. 5, in the embodiment of the present disclosure, the step S212 may further include the following steps.
Step S2122, edge detection is performed on the restored image of the difference image of the frame image to be detected to obtain a region of interest. When extracting the edges of an object in the image, the Roberts operator method may be used, for example: it locates the positions of strongest gray-level change using the differences between diagonally adjacent pixel values, detecting edges through local difference calculation. The calculation formula is as follows:
g(x, y) = |f(x, y) − f(x+1, y+1)| + |f(x+1, y) − f(x, y+1)| (7)
where g(x, y) is the image output by Roberts edge detection and f(x, y) is the grayscale difference image after the dilated convolution and binarization operations.
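The Roberts cross computation can be sketched with array slicing; the function name and the use of the standard two diagonal differences are illustrative assumptions consistent with the description above:

```python
import numpy as np

def roberts_edges(f):
    # Roberts cross operator on diagonal pixel differences:
    #   g(x,y) = |f(x,y) - f(x+1,y+1)| + |f(x+1,y) - f(x,y+1)|
    # Output is one pixel smaller in each dimension.
    f = f.astype(np.int64)
    d1 = np.abs(f[:-1, :-1] - f[1:, 1:])
    d2 = np.abs(f[1:, :-1] - f[:-1, 1:])
    return d1 + d2
```

On a binarized input the response is strongest exactly along the fused object contour, which is what the subsequent region-of-interest extraction relies on.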
Step S2124, the region of interest is identified and the moving target determined. Identification parameters of the region of interest are acquired, including at least one of the number of pixels in the region and the area of the region; the moving target is then determined to be a person, an animal, or a vehicle according to these parameters. That is, the edge-extracted image is identified, and the moving object is classified as a person or a vehicle according to parameters such as the pixel count and area of the moving object.
According to the moving target identification method provided by the embodiment of the disclosure, the edge detection is performed on the restored image after the expansion and convolution of the difference image of the frame image to be detected, the region of interest is obtained for target category identification, and the type of the moving target can be effectively identified.
Fig. 6 is a flowchart illustrating a moving object detection system according to fig. 2 to 5. As shown in fig. 6, the moving object detection process may include the following steps.
Step S602, import the multi-frame video images, adapt windows of different sizes with the sliding-window moving average method, and average the pixels that change between frames to obtain the initial background image B_n.
Step S604, perform grayscale processing on the initial background image B_n to obtain the initial background grayscale image B_nGray.
Step S606, extract a new frame image and perform grayscale processing to obtain the grayscale image B_(n+1)Gray of the (n+1)-th frame.
Step S608, perform difference processing to calculate the difference between B_(n+1)Gray and B_nGray, and output the difference image B_diff.
Step S610, determine whether the difference image contains a moving object. If yes, continue to step S612; if not, add the (n+1)-th frame image to the background image set formed by the multi-frame video images, update the initial background image, and return to step S606.
Step S612, set a threshold, perform the binarization operation on the difference image B_diff, and output the threshold-segmented binarized image B_binary.
Step S614, perform the dilated convolution operation on the binarized image B_binary to fuse fragment edges.
Step S616, perform local difference calculation on the dilated-convolved image with the Roberts operator to detect edges, and mark the edges.
Step S618, identify the edge-marked moving target according to parameters such as the number and area of its interior pixels.
Step S620, judging whether the detection is finished or not, if not, returning to the step S606; if yes, the detection process is ended. The condition for ending the detection may be specifically set, for example, the image collected after a certain time point is not detected, or the image collected after a certain number of images is identified is not detected, and the like.
The specific implementation of each step in fig. 6 can refer to the content in the methods in fig. 2 to fig. 5, and is not described herein again.
The following describes an application of the embodiments shown in fig. 2 to 6 of the present disclosure in a practical scenario. In a moving object detection system of a municipal department, the objects to be identified are human bodies, motor vehicles, and non-motor vehicles. Fig. 7 is a schematic view of a monitoring interface of the moving object detection system. As shown in fig. 7, the main body of the interface is a map of the key monitored area, showing the coordinates of the corresponding camera positions on the map, with an alarm list displayed in the right-hand area. Alarm information can be audited by selecting "normal" or "abnormal" to manually re-annotate it. Between 2 and 5 o'clock at night, if the video shows a person, a motor vehicle, or a non-motor vehicle moving, an alarm is triggered; the alarm information can be viewed and handled on a large screen, and an SMS notification is received.
Fig. 8 is a block diagram illustrating a moving object detecting apparatus according to an exemplary embodiment. The apparatus shown in fig. 8 may be applied to, for example, a server side of the system, and may also be applied to a terminal device of the system.
Referring to fig. 8, the apparatus 80 provided in the embodiment of the present disclosure may include an image obtaining module 802, a background obtaining module 804, a difference obtaining module 806, a binarization module 808, an image repairing module 810, and an object identifying module 812.
The image obtaining module 802 may be configured to obtain a frame image to be detected and a predetermined number of frames of images before the frame image to be detected.
The background obtaining module 804 is configured to obtain an initial background image according to a predetermined number of frames of images before a frame of image to be detected.
The difference obtaining module 806 may be configured to perform a difference between the frame image to be detected and the initial background image to obtain a difference image of the frame image to be detected.
The binarization module 808 may be configured to obtain a binarized image of the difference image of the frame image to be detected when it is determined that the frame image to be detected includes the moving target based on the difference image of the frame image to be detected.
The image restoration module 810 may be configured to perform expansion convolution processing on the binary image of the difference image of the frame image to be detected to obtain a restored image of the difference image of the frame image to be detected.
The target recognition module 812 may be used to recognize a moving target based on a restored image of a difference image of the frame images to be detected.
Fig. 9 is a block diagram illustrating another moving object detecting apparatus according to an exemplary embodiment. The apparatus shown in fig. 9 may be applied to, for example, a server side of the system, and may also be applied to a terminal device of the system.
Referring to fig. 9, the apparatus 90 provided by the embodiment of the present disclosure may include an image acquisition module 902, a background acquisition module 904, a difference acquisition module 906, a background update module 907, a binarization module 908, an image restoration module 910, and a target identification module 912. The background obtaining module 904 may include a pixel value obtaining module 9042 and a background calculating module 9044; the difference obtaining module 906 may include a grayscale image obtaining module 9062 and a difference calculating module 9064; the binarizing module 908 may include a black block obtaining module 9082, a threshold obtaining module 9084, and a binarizing processing module 9086; the image inpainting module 910 may include a dilated convolution processing module 9102 and an edge inpainting module 9104; and the target identifying module 912 may include an edge detecting module 9122 and a region of interest identifying module 9124, where the region of interest identifying module 9124 may include an identification parameter obtaining module 91242 and a moving target determining module 91244.
The image obtaining module 902 may be configured to obtain a frame image to be detected and a predetermined number of frames of images before the frame image to be detected.
The background obtaining module 904 is operable to obtain an initial background image from a predetermined number of frames of images preceding the frame image to be detected.
The pixel value acquiring module 9042 may be configured to acquire the pixel values of the images of the frames in the images of the predetermined number of frames before the frame image to be detected, respectively.
The background calculation module 9044 may be configured to perform average calculation on pixel values of images of a predetermined number of frames before a frame image to be detected, to obtain an initial background image.
The difference obtaining module 906 may be configured to perform difference between the frame image to be detected and the initial background image to obtain a difference image of the frame image to be detected. The difference image of the frame image to be detected comprises a gray difference image of the frame image to be detected.
The grayscale image obtaining module 9062 may be configured to obtain a grayscale image of the frame image to be detected and a grayscale image of the initial background image.
The difference calculation module 9064 may be configured to perform difference between the grayscale image of the frame image to be detected and the grayscale image of the initial background image to obtain a grayscale difference image of the frame image to be detected.
The background updating module 907 may be configured to update the initial background image according to the image of the predetermined frame number before the frame image to be detected and the frame image to be detected when it is determined that the frame image to be detected does not include the moving target based on the difference image of the frame image to be detected, so that the apparatus performs moving target detection on a next frame image of the frame image to be detected based on the updated initial background image.
The binarization module 908 may be configured to obtain a binarized image of the difference image of the frame image to be detected when it is determined that the frame image to be detected includes the moving target based on the difference image of the frame image to be detected.
The black block obtaining module 9082 obtains the number of pixels with a gray value of zero in the gray difference image of the frame image to be detected.
The threshold obtaining module 9084 obtains a gray threshold matrix of the gray difference image of the frame image to be detected when it is determined that the frame image to be detected contains a moving target according to the number of pixels with a gray value of zero in the gray difference image of the frame image to be detected.
The binarization processing module 9086 performs binarization processing on the grayscale difference image of the frame image to be detected according to the grayscale threshold matrix of the grayscale difference image of the frame image to be detected, so as to obtain a binarized image of the difference image of the frame image to be detected.
The image restoration module 910 may be configured to perform an expansion convolution process on the binary image of the difference image of the frame image to be detected to obtain a restored image of the difference image of the frame image to be detected.
The expansion convolution processing module 9102 may be configured to perform expansion convolution processing on the binary image of the difference image of the frame image to be detected, so as to obtain the binary image after the expansion convolution.
The edge repairing module 9104 may be configured to perform edge repairing on the expanded and convolved binary image to obtain a repaired image of the difference image of the frame image to be detected.
The target recognition module 912 may be configured to recognize a moving target based on a restored image of a difference image of the frame images to be detected.
The edge detection module 9122 may be configured to perform edge detection on a repaired image of a difference image of a frame image to be detected, so as to obtain an area of interest.
The region of interest identification module 9124 can be used to identify regions of interest and determine moving objects.
The identification parameter obtaining module 91242 may be configured to obtain an identification parameter of the region of interest, where the identification parameter includes at least one of the number of pixels of the region of interest and the area of the region of interest.
The moving object determining module 91244 may be used to determine that the moving object is a person, an animal or a vehicle based on the identification parameters of the region of interest.
The specific implementation of each module in the apparatus provided in the embodiment of the present disclosure may refer to the content in the foregoing method, and is not described herein again.
Fig. 10 shows a schematic structural diagram of an electronic device in an embodiment of the present disclosure. It should be noted that the apparatus shown in fig. 10 is only an example of a computer system, and should not bring any limitation to the function and the scope of the application of the embodiments of the present disclosure.
As shown in fig. 10, the apparatus 1000 includes a Central Processing Unit (CPU)1001 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the apparatus 1000 are also stored. The CPU1001, ROM 1002, and RAM 1003 are connected to each other via a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output portion 1007 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. The driver 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is mounted into the storage section 1008 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 1009 and/or installed from the removable medium 1011. The above-described functions defined in the system of the present disclosure are executed when the computer program is executed by a Central Processing Unit (CPU) 1001.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprises an image acquisition module, a background acquisition module, a difference acquisition module, a binarization module, an image restoration module and a target identification module. The names of these modules do not limit the module itself in some cases, and for example, the image capturing module may also be described as a "module for capturing multiple frames of video images".
As another aspect, the present disclosure also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being incorporated into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to perform the following:
acquiring a frame image to be detected and images of a preset number of frames preceding the frame image to be detected; obtaining an initial background image according to the images of the preset number of frames preceding the frame image to be detected; differencing the frame image to be detected with the initial background image to obtain a difference image of the frame image to be detected; when it is determined, based on the difference image of the frame image to be detected, that the frame image to be detected contains a moving target, acquiring a binary image of the difference image of the frame image to be detected; performing expansion convolution processing on the binary image of the difference image of the frame image to be detected to obtain a repaired image of the difference image of the frame image to be detected; and identifying the moving target based on the repaired image of the difference image of the frame image to be detected.
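The six steps above can be sketched end to end. The following is a minimal NumPy sketch, not the patented implementation: the parameter names (`n_bg`, `thresh`, `min_pixels`) and the 3x3 structuring element are illustrative assumptions, and the moving-target presence test is reduced to a simple foreground-pixel count.

```python
import numpy as np

def dilate3x3(binary):
    """Binary dilation with a 3x3 all-ones kernel, implemented by
    shifting the image in each of the nine directions and taking the union."""
    padded = np.pad(binary, 1)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def detect_motion(frames, current, n_bg=5, thresh=30, min_pixels=50):
    """Sketch of the claimed pipeline: average-background model, frame
    differencing, binarization, dilation, and a foreground-size check."""
    # 1. Initial background: pixel-wise mean of the preceding n_bg frames.
    background = np.mean(frames[-n_bg:], axis=0)
    # 2. Difference image between the frame under test and the background.
    diff = np.abs(current.astype(np.float64) - background)
    # 3. Binarize the difference image against a global threshold.
    binary = (diff > thresh).astype(np.uint8)
    # 4. Dilate to repair broken contours in the binary image.
    dilated = dilate3x3(binary)
    # 5. Declare a moving target if enough foreground pixels remain.
    return dilated.sum() >= min_pixels, dilated
```

A frame with a changed region triggers detection, while a frame identical to the background does not.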
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that the present disclosure is not limited to the precise arrangements or instrumentalities described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A moving object detection method, comprising:
acquiring a frame image to be detected and images of a preset number of frames preceding the frame image to be detected;
obtaining an initial background image according to the images of the preset number of frames preceding the frame image to be detected;
differencing the frame image to be detected with the initial background image to obtain a difference image of the frame image to be detected;
when it is determined, based on the difference image of the frame image to be detected, that the frame image to be detected contains a moving target, acquiring a binary image of the difference image of the frame image to be detected;
performing expansion convolution processing on the binary image of the difference image of the frame image to be detected to obtain a repaired image of the difference image of the frame image to be detected;
and identifying the moving target based on the repaired image of the difference image of the frame image to be detected.
2. The method according to claim 1, wherein obtaining the initial background image according to the images of the preset number of frames preceding the frame image to be detected comprises:
respectively acquiring the pixel values of each frame image among the images of the preset number of frames preceding the frame image to be detected;
and averaging the pixel values of the images of the preset number of frames preceding the frame image to be detected to obtain the initial background image.
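The averaging in claim 2 amounts to a pixel-wise mean over the preceding frames. A minimal sketch, assuming the frames are supplied as equal-sized NumPy arrays:

```python
import numpy as np

def initial_background(prev_frames):
    """Pixel-wise average of the frames preceding the frame under test.
    Accumulation is done in float64 so uint8 inputs do not overflow."""
    stack = np.stack([f.astype(np.float64) for f in prev_frames])
    return stack.mean(axis=0)
```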
3. The method according to claim 1, wherein the difference image of the frame image to be detected comprises a grayscale difference image of the frame image to be detected;
the step of obtaining the difference image of the frame image to be detected by differencing the frame image to be detected with the initial background image comprises:
acquiring a grayscale image of the frame image to be detected and a grayscale image of the initial background image;
differencing the grayscale image of the frame image to be detected with the grayscale image of the initial background image to obtain the grayscale difference image of the frame image to be detected;
and the step of acquiring the binary image of the difference image of the frame image to be detected, when the frame image to be detected is determined to contain the moving target based on the difference image of the frame image to be detected, comprises:
obtaining the number of pixels with a zero grayscale value in the grayscale difference image of the frame image to be detected;
when it is determined, according to the number of pixels with a zero grayscale value in the grayscale difference image of the frame image to be detected, that the frame image to be detected contains a moving target, acquiring a grayscale threshold matrix of the grayscale difference image of the frame image to be detected;
and binarizing the grayscale difference image of the frame image to be detected according to the grayscale threshold matrix to obtain the binary image of the difference image of the frame image to be detected.
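Claim 3 decides presence from the count of zero-valued difference pixels and then binarizes against a threshold matrix. The patent does not specify how the threshold matrix is built, so the per-block mean with a small noise floor below is an illustrative assumption, as is the zero-pixel ratio test (`zero_ratio_limit`).

```python
import numpy as np

def binarize_difference(gray_diff, zero_ratio_limit=0.99, block=8):
    """Sketch of claim 3: if almost every difference pixel is zero, the
    scene is treated as unchanged (no moving target); otherwise build a
    per-pixel threshold matrix and binarize against it."""
    zero_count = np.count_nonzero(gray_diff == 0)
    if zero_count / gray_diff.size >= zero_ratio_limit:
        return None  # scene essentially unchanged: no moving target
    # Threshold matrix: each pixel's threshold is its block's mean
    # difference, floored at a small constant to suppress noise.
    h, w = gray_diff.shape
    thresh = np.empty_like(gray_diff, dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = gray_diff[y:y + block, x:x + block]
            thresh[y:y + block, x:x + block] = max(tile.mean(), 10.0)
    return (gray_diff > thresh).astype(np.uint8)
```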
4. The method of claim 1, further comprising:
when it is determined, based on the difference image of the frame image to be detected, that the frame image to be detected does not contain a moving target, updating the initial background image according to the images of the preset number of frames preceding the frame image to be detected and the frame image to be detected;
and performing moving target detection on the next frame image after the frame image to be detected based on the updated initial background image.
5. The method according to claim 1, wherein performing expansion convolution processing on the binary image of the difference image of the frame image to be detected to obtain the repaired image of the difference image of the frame image to be detected comprises:
performing expansion convolution processing on the binary image of the difference image of the frame image to be detected to obtain an expanded convolved binary image;
and performing edge restoration on the expanded convolved binary image to obtain the repaired image of the difference image of the frame image to be detected.
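Claim 5 pairs dilation with an edge-repair pass but does not name the exact repair operator. One plausible reading, modelled here as a morphological closing (dilation followed by the matching erosion), fills small holes and broken contours without growing the region; this pairing is an assumption, not the patent's stated method.

```python
import numpy as np

def dilate3x3(b):
    """3x3 binary dilation via shift-and-union."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def erode3x3(b):
    """3x3 binary erosion via shift-and-intersection."""
    p = np.pad(b, 1)
    out = np.ones_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def close_binary(binary):
    """Dilate, then erode: small interior holes are filled while the
    outer boundary returns to (approximately) its original extent."""
    return erode3x3(dilate3x3(binary))
```

A 5x5 square with a one-pixel hole comes back solid after closing.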
6. The method according to claim 1, wherein identifying the moving target based on the repaired image of the difference image of the frame image to be detected comprises:
performing edge detection on the repaired image of the difference image of the frame image to be detected to obtain a region of interest;
and identifying the region of interest and determining the moving target.
7. The method of claim 6, wherein identifying the region of interest and determining the moving target comprises:
acquiring identification parameters of the region of interest, wherein the identification parameters comprise at least one of the number of pixel points in the region of interest and the area of the region of interest;
and determining, according to the identification parameters of the region of interest, whether the moving target is a person, an animal, or a vehicle.
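Claims 6-7 extract regions of interest and label them by pixel count or area. The sketch below substitutes connected-component extraction (iterative flood fill) for the unspecified edge-detection step, and the area thresholds separating person, animal, and vehicle (`person_range`, `vehicle_min`) are placeholder values the patent leaves to the implementation.

```python
import numpy as np

def classify_regions(mask, person_range=(50, 500), vehicle_min=500):
    """Label each 4-connected foreground region of a binary mask and
    classify it by its pixel count. Returns (label, area, kind) tuples."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    current = 0
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            if mask[y, x] and not labels[y, x]:
                current += 1
                stack = [(y, x)]
                while stack:  # flood fill, 4-connectivity
                    cy, cx = stack.pop()
                    if (0 <= cy < mask.shape[0] and 0 <= cx < mask.shape[1]
                            and mask[cy, cx] and not labels[cy, cx]):
                        labels[cy, cx] = current
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    results = []
    for lbl in range(1, current + 1):
        area = int((labels == lbl).sum())
        if area >= vehicle_min:
            kind = "vehicle"
        elif person_range[0] <= area < person_range[1]:
            kind = "person"
        else:
            kind = "animal"
        results.append((lbl, area, kind))
    return results
```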
8. A moving object detecting apparatus, comprising:
the image acquisition module is used for acquiring a frame image to be detected and images of a preset number of frames preceding the frame image to be detected;
the background acquisition module is used for obtaining an initial background image according to the images of the preset number of frames preceding the frame image to be detected;
the difference acquisition module is used for differencing the frame image to be detected with the initial background image to obtain a difference image of the frame image to be detected;
the binarization module is used for acquiring a binary image of the difference image of the frame image to be detected when the frame image to be detected is determined to contain a moving target based on the difference image of the frame image to be detected;
the image restoration module is used for performing expansion convolution processing on the binary image of the difference image of the frame image to be detected to obtain a repaired image of the difference image of the frame image to be detected;
and the target identification module is used for identifying the moving target based on the repaired image of the difference image of the frame image to be detected.
9. An apparatus, comprising: a memory, a processor, and executable instructions stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1-7 when executing the executable instructions.
10. A computer-readable storage medium having stored thereon computer-executable instructions, which when executed by a processor, implement the method of any one of claims 1-7.
CN202110268735.2A 2021-03-12 2021-03-12 Moving object detection method, device, equipment and storage medium Pending CN115083008A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110268735.2A CN115083008A (en) 2021-03-12 2021-03-12 Moving object detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110268735.2A CN115083008A (en) 2021-03-12 2021-03-12 Moving object detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115083008A true CN115083008A (en) 2022-09-20

Family

ID=83241575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110268735.2A Pending CN115083008A (en) 2021-03-12 2021-03-12 Moving object detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115083008A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115910274A (en) * 2022-11-17 2023-04-04 珠海迪尔生物工程股份有限公司 Liquid dropping control method and system, electronic equipment and storage medium thereof
CN117278865A (en) * 2023-11-16 2023-12-22 荣耀终端有限公司 Image processing method and related device


Similar Documents

Publication Publication Date Title
CN110705405B (en) Target labeling method and device
CN111369545B (en) Edge defect detection method, device, model, equipment and readable storage medium
CN107220962B (en) Image detection method and device for tunnel cracks
JPH07302328A (en) Method for extracting area of moving object based upon background difference
CN111242097A (en) Face recognition method and device, computer readable medium and electronic equipment
CN115083008A (en) Moving object detection method, device, equipment and storage medium
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
CN114926766A (en) Identification method and device, equipment and computer readable storage medium
CN117037103A (en) Road detection method and device
CN115049954A (en) Target identification method, device, electronic equipment and medium
CN115311623A (en) Equipment oil leakage detection method and system based on infrared thermal imaging
CN113158773B (en) Training method and training device for living body detection model
CN110765875B (en) Method, equipment and device for detecting boundary of traffic target
CN106778822B (en) Image straight line detection method based on funnel transformation
WO2024016632A1 (en) Bright spot location method, bright spot location apparatus, electronic device and storage medium
CN111754491A (en) Picture definition judging method and device
CN111667419A (en) Moving target ghost eliminating method and system based on Vibe algorithm
CN114821513B (en) Image processing method and device based on multilayer network and electronic equipment
CN116129195A (en) Image quality evaluation device, image quality evaluation method, electronic device, and storage medium
CN116002480A (en) Automatic detection method and system for accidental falling of passengers in elevator car
CN113033510B (en) Training and detecting method, device and storage medium for image change detection model
CN114359332A (en) Target tracking method, device, equipment and medium based on depth image
CN115861321B (en) Production environment detection method and system applied to industrial Internet
CN112085683A (en) Depth map reliability detection method in significance detection
CN111507324B (en) Card frame recognition method, device, equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination