CN113674322A - Motion state detection method and related device - Google Patents

Motion state detection method and related device

Info

Publication number
CN113674322A
Authority
CN
China
Prior art keywords
image
detected
pixel points
difference
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110981891.3A
Other languages
Chinese (zh)
Inventor
赵兴科
任馨怡
王枫
熊剑平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110981891.3A priority Critical patent/CN113674322A/en
Publication of CN113674322A publication Critical patent/CN113674322A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a motion state detection method and a related device. By comparing the pixel values of the pixel points at the same position in a preset number of images to be detected, the maximum and minimum pixel values of the images at each position are determined, and the difference value corresponding to each position is obtained by subtracting the minimum pixel value from the maximum pixel value. This difference is the maximum difference among the pixel points of all images to be detected at that position, which improves the discriminability of image differences and makes small changes of the detection target easier to capture. Finally, the difference value corresponding to each position is compared with a preset threshold, and the motion state of the detection target is determined according to the comparison result, thereby improving robustness in new scenes and the accuracy of the detection result.

Description

Motion state detection method and related device
Technical Field
The present invention relates to the field of image processing, and in particular, to a motion state detection method and related apparatus.
Background
With the development of science and technology, automation equipment in various industries is continually introduced and applied, greatly promoting innovation and progress in production. The conveyor belt is a common carrier for transporting materials, and detecting its motion state is an important step toward automated management. The difficulty is that the surface of an unloaded conveyor belt has few image features, so the difference between a moving belt and a stationary belt is not obvious.
In the prior art, the motion state of a conveyor belt is detected by a deep learning model. This approach lacks robustness in new scenes and places high demands on sample data: the choice of samples directly affects the accuracy of the detection result.
Disclosure of Invention
The embodiment of the application provides a motion state detection method and a related device. The difference value between the maximum pixel value and the minimum pixel value of the images to be detected at each position is obtained, and the motion state of the detection target is determined from the comparison of the difference value corresponding to each position with a preset threshold, thereby improving robustness in new scenes and the accuracy of the detection result.
In a first aspect, an embodiment of the present application provides a motion state detection method, where the method includes:
intercepting a preset number of images to be detected from a video stream containing a detection target;
comparing pixel values of pixel points at the same position in each image to be detected, and determining the difference value between the maximum pixel value and the minimum pixel value at the same position;
and comparing the difference value corresponding to each position with a first threshold value, and determining the motion state of the detection target according to the comparison result.
In the embodiment of the application, a preset number of images to be detected are intercepted from a video stream containing the detection target, and the pixel values of the pixel points at the same position in each image to be detected are compared to determine the maximum and minimum pixel values of the images at that position. The difference value corresponding to each position is obtained by subtracting the minimum pixel value from the maximum pixel value; it is the maximum difference among the pixel points of all images to be detected at that position. This improves the discriminability of image differences and makes small changes of the detection target easier to capture. Finally, the difference value corresponding to each position is compared with a preset threshold, and the motion state of the detection target is determined according to the comparison result.
In some possible embodiments, the intercepting a preset number of images to be detected from a video stream containing a detection target includes:
and intercepting a preset number of images to be detected from the video stream containing the detection target based on a preset interception interval.
In the embodiment of the application, the preset number of images to be detected are intercepted from the video stream at a preset interception interval, so that the image-feature differences contained in the images to be detected are sufficient for the subsequent computation, improving the accuracy of the detection result.
In some possible embodiments, before comparing the pixel values of the pixel points at the same position in each image to be detected, the method further includes:
adopting a trained neural network model to identify the target area where the detection target is located in each image to be detected;
and for each image to be detected, cropping the target area to a preset size, so that each cropped image to be detected has the same size and contains the target area.
In the embodiment of the application, a pre-trained neural network model is adopted to identify the target area where the detection target is located, and the target area is cropped. This preserves the complete detection target in each image to be detected while removing large regions that do not contain the detection target, reducing the amount of computation.
In some possible embodiments, the comparing the difference value corresponding to each position with a first threshold value and determining the motion state of the detection target according to the comparison result includes:
determining the number of first-class pixel points and second-class pixel points based on the comparison result of the difference value corresponding to each position with a first threshold, and determining the motion state of the detection target according to the number; wherein first-class pixel points belong to a detection target in a motion state, and second-class pixel points belong to a detection target in a static state.
In the embodiment of the application, the difference value corresponding to each position is compared with the first threshold, and the class of the pixel point at each position is determined from the comparison result: a first-class pixel point indicates that the detection target is in a motion state, and a second-class pixel point indicates that it is in a static state. Determining the motion state of the detection target from the classes of the pixel points at all positions improves the accuracy of the detection result.
In some possible embodiments, the determining the number of first-class pixel points and second-class pixel points based on the comparison result of the difference value corresponding to each position with the first threshold includes:
for each position, if the difference value corresponding to the position is not smaller than the first threshold, determining the pixel point corresponding to the position as a first-class pixel point;
and if the difference value corresponding to the position is smaller than the first threshold, determining the pixel point corresponding to the position as a second-class pixel point.
In the embodiment of the application, pixel points whose difference value is not smaller than the first threshold are classified as first-class pixel points, and those whose difference value is smaller than the first threshold as second-class pixel points. All pixel points are thus divided into two classes, and the motion state of the detection target can be determined based on the numbers of the two classes.
In some possible embodiments, the determining the motion state of the detection target according to the number includes:
if the ratio of the number of first-class pixel points to the total number is larger than a second threshold, determining that the detection target is in a motion state; wherein the total number is the sum of the numbers of first-class and second-class pixel points;
and if the ratio of the number of first-class pixel points to the total number is smaller than or equal to the second threshold, determining that the detection target is in a static state.
In the embodiment of the application, the motion state of the detection target is determined by comparing the proportion of first-class pixel points among all pixel points with the second threshold, improving the accuracy of the detection result.
In some possible embodiments, before comparing the pixel values of the pixel points at the same position in each image to be detected, the method further includes:
and converting the image to be detected into a gray image, and filtering the converted image to be detected.
In the embodiment of the application, the image to be detected is converted into a gray image for the difference operation, and the converted image may be filtered to improve the accuracy of the result.
In a second aspect, an embodiment of the present application provides a motion state detection apparatus, including:
the image acquisition module is configured to intercept a preset number of images to be detected from a video stream containing a detection target;
the difference value determining module is configured to compare pixel values of pixel points at the same position in each image to be detected, and determine a difference value between a maximum pixel value and a minimum pixel value at the same position;
and the state confirmation module is configured to compare the difference value corresponding to each position with a first threshold value and determine the motion state of the detection target according to the comparison result.
In some possible embodiments, in performing the interception of the preset number of images to be detected from the video stream containing the detection target, the image acquisition module is configured to:
and intercepting a preset number of images to be detected from the video stream containing the detection target based on a preset interception interval.
In some possible embodiments, before performing the comparison of the pixel values of the pixel points at the same position in each image to be detected, the difference determination module is further configured to:
adopting a trained neural network model to identify the target area where the detection target is located in each image to be detected;
and for each image to be detected, cropping the target area to a preset size, so that each cropped image to be detected has the same size and contains the target area.
In some possible embodiments, in performing the comparison of the difference value corresponding to each position with a first threshold and determining the motion state of the detection target according to the comparison result, the state confirmation module is configured to:
determining the number of first-class pixel points and second-class pixel points based on the comparison result of the difference value corresponding to each position with a first threshold, and determining the motion state of the detection target according to the number; wherein first-class pixel points belong to a detection target in a motion state, and second-class pixel points belong to a detection target in a static state.
In some possible embodiments, in performing the determination of the number of first-class pixel points and second-class pixel points based on the comparison result of the difference value corresponding to each position with the first threshold, the state confirmation module is configured to:
for each position, if the difference value corresponding to the position is not smaller than the first threshold, determining the pixel point corresponding to the position as a first-class pixel point;
and if the difference value corresponding to the position is smaller than the first threshold, determining the pixel point corresponding to the position as a second-class pixel point.
In some possible embodiments, in performing the determination of the motion state of the detection target according to the number, the state confirmation module is configured to:
if the ratio of the number of first-class pixel points to the total number is larger than a second threshold, determining that the detection target is in a motion state; wherein the total number is the sum of the numbers of first-class and second-class pixel points;
and if the ratio of the number of first-class pixel points to the total number is smaller than or equal to the second threshold, determining that the detection target is in a static state.
In some possible embodiments, before comparing the pixel values of the pixel points at the same position in each image to be detected, the difference determination module is further configured to:
and converting the image to be detected into a gray image, and filtering the converted image to be detected.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement any of the methods as provided in the first aspect of the application.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium, where instructions, when executed by a processor of an electronic device, enable the electronic device to perform any one of the methods as provided in the first aspect of the present application.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1a is a schematic diagram of an adjacent frame image according to an embodiment of the present disclosure;
fig. 1b is a schematic diagram of a binarized image obtained based on a difference result according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an application environment according to an embodiment of the present application;
fig. 3a is a flowchart illustrating an overall motion state detection method according to an embodiment of the present application;
FIG. 3b is a schematic diagram of 5 images to be detected according to an embodiment of the present application;
fig. 3c is a schematic diagram illustrating the image to be detected in fig. 3b after being cropped according to an embodiment of the application;
FIG. 3d is a schematic diagram of a first image and a second image shown in an embodiment of the present application;
fig. 3e is a schematic diagram of a binarized image according to an embodiment of the present disclosure;
fig. 4 is a block diagram of a motion state detection apparatus 400 according to an embodiment of the present application;
fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and in detail below with reference to the accompanying drawings. In the description of the embodiments of the present application, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the description of the embodiments of the present application, the term "plurality" means two or more unless otherwise specified, and other similar terms should be understood accordingly. The preferred embodiments described herein are only for illustrating and explaining the present application and are not intended to limit it; features in the embodiments and examples of the present application may be combined with each other without conflict.
To further illustrate the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings and specific embodiments. Although the embodiments provide the method steps shown below or in the figures, the method may include more or fewer steps based on conventional or non-inventive effort. For steps with no logically necessary causal relationship, the order of execution is not limited to that provided by the embodiments. In an actual process or control device, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or the drawings.
A deep learning model requires training on massive samples. Moreover, once the model is trained, a change in the detection scene makes feature extraction inaccurate, reducing the accuracy of the detection result. Such a detection approach therefore lacks robustness in new scenes and places high demands on sample data, whose selection directly affects the accuracy of the detection result. In view of this, the motion state of a target object may instead be detected with a method for detecting a target object in a video stream.
In the prior art, methods for detecting a target object in a video stream generally fall into three categories: background modeling, the optical flow method, and the inter-frame difference method. Background modeling segments the target object by comparing the input current frame with a background image using changes in statistical information such as gray scale; it requires estimating in advance a background model free of the target object and is strongly affected by the choice of that model. The optical flow method detects and segments the target object based on an estimate of the optical flow field, which carries both the motion of the target object and structural information about the scene; its computation is complex and time-consuming, making real-time detection difficult.
Considering both detection quality and real-time performance, the present application adopts the inter-frame difference method to detect the motion state of an unloaded conveyor belt. The inter-frame difference method obtains the moving-target region by a difference operation over adjacent frames of a video image sequence. When the motion state of the detection target changes, a relatively obvious difference appears between two adjacent frames: subtracting the pixel values at the same positions of the two frames gives the absolute value of the pixel difference at each position, and the motion state of the detection target is determined by comparing this absolute value with a preset threshold. The inter-frame difference method is expressed by the following formula (1):
D(x, y) = 1, if |I_t(x, y) - I_{t-1}(x, y)| > T
D(x, y) = 0, otherwise        (1)
wherein D(x, y) is the image obtained by differencing adjacent frames; I_t and I_{t-1} are the adjacent frames at times t and t-1, respectively; and T is the threshold selected when binarizing the difference image. D(x, y) = 1 marks the positions where the pixel gray value changes between adjacent frames, i.e., where the detection target is in a motion state; D(x, y) = 0 marks the positions where the pixel gray value does not change, i.e., where the detection target is in a static state.
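As a rough, non-authoritative sketch (the patent itself contains no code), formula (1) can be written in Python with NumPy as follows; the frame arrays and the concrete threshold value are assumed inputs:

```python
import numpy as np

def frame_difference(frame_t, frame_t_prev, T=10):
    """Inter-frame difference of formula (1).

    frame_t, frame_t_prev: adjacent grayscale frames as 2-D uint8 arrays.
    T: binarization threshold (illustrative value; formula (1) leaves it open).
    Returns D, a binary image: 1 where the gray value changed, 0 elsewhere.
    """
    # Cast to a signed type so the subtraction cannot wrap around in uint8.
    diff = np.abs(frame_t.astype(np.int16) - frame_t_prev.astype(np.int16))
    return (diff > T).astype(np.uint8)
```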
The traditional inter-frame difference method fails to effectively identify the motion state of an unloaded conveyor belt in actual detection. Specifically, as shown in fig. 1a and 1b, fig. 1a shows two adjacent gray frames of an unloaded conveyor belt in motion; fig. 1b is the binarized image of their inter-frame difference, in which black pixel points indicate that the belt is stationary and white pixel points indicate that it is moving. The unloaded belt in fig. 1a is actually moving, yet white pixel points make up only a tiny share of fig. 1b, meaning the detected motion state does not match reality, i.e., the detection result is inaccurate. The main reasons are the low contrast of the images containing the unloaded belt and the insignificant difference in image characteristics between a moving and a stationary belt. Consequently, the inter-frame difference between adjacent frames yields few difference features, and the motion state of the belt cannot be accurately detected.
To solve the above technical problems, the inventive concept of the application is as follows: the maximum and minimum pixel values of a preset number of images to be detected at each position are determined by comparing the pixel values of the pixel points at the same position across the images, and the difference value corresponding to each position is obtained by subtracting the minimum from the maximum. This difference is the maximum difference among the pixel points of all images at that position, which improves the discriminability of image differences and makes small changes of the detection target easier to capture. Finally, the difference value corresponding to each position is compared with a preset threshold, and the motion state of the detection target is determined according to the comparison result, improving robustness in new scenes and the accuracy of the detection result.
A motion state detection method and a related apparatus provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 2, a schematic diagram of an application environment according to an embodiment of the present application is shown. As shown in fig. 2, the application environment may include, for example, a network 10, a server 20, a conveyor belt 30, and an image capturing device 40.
In scenes such as construction sites and steel plants, the conveyor belt 30 is a common carrier in material transport, and its motion state during transport needs to be detected in real time. During detection, the video stream of the conveyor belt 30 may be captured in real time by the image capture device 40 and transmitted to the server 20 via the network 10. The server 20 intercepts adjacent frame images from the video stream and calculates their image difference using an inter-frame difference method.
In some possible embodiments, when the server 20 calculates the image difference between the adjacent frame images by using the inter-frame difference method, the difference image is obtained by subtracting the pixel values of the adjacent frame images at the same position. And comparing each pixel value in the difference image with a preset threshold value, and determining a binary image according to the comparison result. The white pixel points in the binarized image represent that the conveyor belt 30 is moving, and the black pixel points represent that the conveyor belt 30 is currently stationary. The motion state of the conveyor belt 30 is determined based on the proportional relationship of the number of black and white pixels.
It should be noted that the structure shown in fig. 2 is only an example, and the structure is not limited in the embodiment of the present application.
To facilitate understanding of the technical solution provided by the embodiment of the present application, the motion state detection method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings, taking the detection of the motion state of an unloaded conveyor belt as an example. As shown in fig. 3a, the method includes the following steps:
step 301: and intercepting a preset number of images to be detected from a video stream containing a detection target.
In the prior art, the difference is calculated over adjacent frames of the video stream; for a detection target with few image features, the feature difference between adjacent frames is small, which directly affects the accuracy of the detection result. More feature-difference information can therefore be acquired by enlarging the capture interval and increasing the number of captured images.
In practice, a preset number of images to be detected may be intercepted, at a preset interception interval, from a video stream containing an unloaded conveyor belt. Specifically, as shown in fig. 3b, with an interception interval of 10 frames, one image to be detected is intercepted from the video stream every 10 frames, yielding 5 images to be detected.
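A minimal sketch of this interception step, assuming an OpenCV-readable video source; the function name and defaults are illustrative, mirroring the 10-frame interval and 5 images of the example:

```python
import cv2

def intercept_images(video_source, interval=10, count=5):
    """Intercept `count` images to be detected from the stream, one every
    `interval` frames, as in the fig. 3b example."""
    cap = cv2.VideoCapture(video_source)
    images, frame_idx = [], 0
    while len(images) < count:
        ok, frame = cap.read()
        if not ok:
            break  # stream ended before enough frames were read
        if frame_idx % interval == 0:
            images.append(frame)
        frame_idx += 1
    cap.release()
    return images
```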
To reduce the memory consumption of the difference algorithm and increase computation speed, the embodiment of the application trains a neural network model in advance on a large number of sample images containing conveyor belts, so that the trained model can identify conveyor belts in various scenes. After the trained neural network model identifies the target area where the conveyor belt is located in each image to be detected, each image to be detected is cropped to a preset size around the target area, so that every cropped image has the same size and contains the target area.
Specifically, taking the 5 frames of images to be detected shown in fig. 3b as an example: all 5 frames are intercepted from the same video stream, so their size and pixel count are identical and the conveyor belt necessarily occupies the same position in each frame; each cropped image therefore has the same size and contains the target area. The 5 frames of fig. 3b may be cropped as shown in fig. 3c. In this way, large regions of the images that do not contain the conveyor belt are removed, reducing the amount of computation.
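Since the camera is fixed, a single bounding box fits every frame, so the cropping step can be sketched as below; the (x, y, w, h) box is assumed to come from the trained detection network, whose interface is hypothetical here:

```python
def crop_to_target(images, bbox):
    """Crop every image to the same preset-size target region.

    bbox: (x, y, w, h) of the conveyor-belt area, assumed to be produced
    by the pre-trained detection model (hypothetical interface).
    """
    x, y, w, h = bbox
    return [img[y:y + h, x:x + w] for img in images]
```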
Step 302: comparing the pixel values of the pixel points at the same position in each image to be detected, and determining the difference value between the maximum pixel value and the minimum pixel value at the same position.
since various noises (such as salt and pepper noises, gaussian noises, etc.) exist in the video image, these noises are easily recognized as the conveyor belt in a moving state when the conveyor belt is in a static state, and the accuracy of the detection result is directly affected. Considering that the image to be detected may be a multi-channel color image, in order to facilitate the difference calculation to be performed next, the image to be detected may be converted into a gray image before step 302 is performed, and the converted image to be detected may be subjected to filtering processing.
Step 303: comparing the difference value corresponding to each position with a first threshold, and determining the motion state of the detection target according to the comparison result.
In implementation, the numbers of first-class and second-class pixel points are determined based on the comparison of the difference value corresponding to each position with the first threshold, and the motion state of the detection target is determined from these numbers; first-class pixel points belong to a detection target in a motion state, and second-class pixel points belong to a detection target in a static state.
Specifically, taking the 5 cropped frames shown in fig. 3c as an example, the pixel values of the pixel points at the same position in the 5 frames are traversed from top to bottom and left to right, and the maximum and minimum of the 5 pixel values are selected for each position. A first image is constructed from the maximum pixel value at each position (left side of fig. 3d), and a second image from the minimum pixel value at each position (right side of fig. 3d). Subtracting the pixel values of the second image from those of the first image at each position yields the difference image of the 5 images to be detected, which represents the maximum image-feature difference produced by the change of the conveyor belt's motion state across the 5 frames.
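The construction of the first image, the second image, and the difference image can be sketched with NumPy as follows; a per-position max/min over the stack replaces the explicit traversal:

```python
import numpy as np

def max_min_difference(gray_images):
    """Build the difference image from N preprocessed images.

    first_image holds the maximum pixel value at each position,
    second_image the minimum; their difference is the maximum
    pixel-level change across the N frames.
    """
    stack = np.stack(gray_images).astype(np.int16)  # shape (N, H, W)
    first_image = stack.max(axis=0)
    second_image = stack.min(axis=0)
    return (first_image - second_image).astype(np.uint8)  # max >= min, so safe
```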
A first threshold is set for the difference image: if the value of a pixel point in the difference image is not smaller than the first threshold, the target to which the pixel point belongs is regarded as being in a motion state; otherwise it is regarded as being in a static state. Specifically, the first threshold may be set to 10: each pixel in the difference image whose value is not smaller than 10 is taken as a first-class pixel point, and each pixel whose value is smaller than 10 as a second-class pixel point.
The difference image can be converted into the binarized image shown in fig. 3e by setting the value of first-class pixel points to 255 and that of second-class pixel points to 0. The white pixel points in the image are thus first-class pixel points, and the black pixel points are second-class pixel points.
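A sketch of the binarization, with the example first threshold of 10 assumed:

```python
import numpy as np

def binarize(diff_image, first_threshold=10):
    """First-class pixels (difference not smaller than the threshold,
    i.e. a moving target) are set to 255; second-class pixels to 0."""
    first_class = diff_image >= first_threshold
    return first_class.astype(np.uint8) * 255
```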
The share of first-class pixel points in the binarized image reflects the motion state of the conveyor belt. In implementation, the numbers of first-class and second-class pixel points may be counted separately; if the ratio of the number of first-class pixel points to the total number is larger than a second threshold, the conveyor belt is determined to be in a motion state. Correspondingly, if that ratio is smaller than or equal to the second threshold, the conveyor belt is determined to be in a static state. The total number is the sum of the numbers of first-class and second-class pixel points.
It should be understood that the second threshold may be set higher, for example 20%, when the accuracy requirement on the detection of the motion state is low, and lower, for example 10%, when the accuracy requirement is high. The value of the second threshold is not limited in the embodiment of the present application.
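The final decision can be sketched as below, with the second threshold given as a ratio (0.1 and 0.2 match the 10% and 20% example values):

```python
import numpy as np

def is_moving(binary_image, second_threshold=0.1):
    """Return True if the share of first-class (255) pixels exceeds the
    second threshold, i.e. the conveyor belt is in a motion state."""
    first_class = np.count_nonzero(binary_image == 255)
    total = binary_image.size  # first-class plus second-class pixels
    return first_class / total > second_threshold
```

Chained together, these sketches mirror steps 301 to 303: intercept the images, crop them to the target area, preprocess, take the max-min difference, binarize, then decide with is_moving.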
Based on the same inventive concept, an embodiment of the present application further provides a motion state detection apparatus 400, specifically as shown in fig. 4, including:
an image acquisition module 401 configured to perform capturing a preset number of images to be detected from a video stream containing a detection target;
a difference determining module 402 configured to compare pixel values of pixel points at the same position in each to-be-detected image, and determine a difference between a maximum pixel value and a minimum pixel value at the same position;
a state confirmation module 403 configured to perform comparison between the difference value corresponding to each position and a first threshold, and determine a motion state of the detection target according to a comparison result.
In some possible embodiments, in performing the interception of the preset number of images to be detected from the video stream containing the detection target, the image acquisition module 401 is configured to:
and intercepting a preset number of images to be detected from the video stream containing the detection target based on a preset interception interval.
In some possible embodiments, before performing the comparison of the pixel values of the pixel points at the same position in each image to be detected, the difference determination module 402 is further configured to:
adopting a trained neural network model to identify the target area where the detection target is located in each image to be detected;
and for each image to be detected, cropping the target area to a preset size, so that each cropped image to be detected has the same size and contains the target area.
In some possible embodiments, in performing the comparison of the difference value corresponding to each position with the first threshold and determining the motion state of the detection target according to the comparison result, the state confirmation module 403 is configured to:
determining the number of first-class pixel points and second-class pixel points based on the comparison result of the difference value corresponding to each position with a first threshold, and determining the motion state of the detection target according to the number; wherein first-class pixel points belong to a detection target in a motion state, and second-class pixel points belong to a detection target in a static state.
In some possible embodiments, in performing the determination of the number of first-class pixel points and second-class pixel points based on the comparison result of the difference value corresponding to each position with the first threshold, the state confirmation module 403 is configured to:
for each position, if the difference value corresponding to the position is not smaller than the first threshold, determining the pixel point corresponding to the position as a first-class pixel point;
and if the difference value corresponding to the position is smaller than the first threshold, determining the pixel point corresponding to the position as a second-class pixel point.
In some possible embodiments, in performing the determination of the motion state of the detection target according to the number, the state confirmation module 403 is configured to:
if the ratio of the number of first-class pixel points to the total number is larger than a second threshold, determining that the detection target is in a motion state; wherein the total number is the sum of the numbers of first-class and second-class pixel points;
and if the ratio of the number of first-class pixel points to the total number is smaller than or equal to the second threshold, determining that the detection target is in a static state.
In some possible embodiments, before comparing the pixel values of the pixel points at the same position in each image to be detected, the difference determining module 402 is further configured to:
and converting the image to be detected into a gray image, and filtering the converted image to be detected.
The electronic device 130 according to this embodiment of the present application is described below with reference to fig. 5. The electronic device 130 shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 5, the electronic device 130 is represented in the form of a general electronic device. The components of the electronic device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 130 may also communicate with one or more external devices 134 (e.g., a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 130, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 130 to communicate with one or more other electronic devices. Such communication may occur via input/output (I/O) interfaces 135. Also, the electronic device 130 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 136. As shown, the network adapter 136 communicates with the other modules of the electronic device 130 over the bus 133. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 132 comprising instructions, executable by the processor 131 to perform the above-described method is also provided. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising computer programs/instructions which, when executed by the processor 131, implement the motion state detection method as provided herein.
In an exemplary embodiment, aspects of the motion state detection method provided by the present application may also be implemented in the form of a program product including program code; when the program product runs on a computer device, the program code causes the computer device to perform the steps of the motion state detection method according to the various exemplary embodiments of the present application described above in this specification.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of the embodiments of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on an electronic device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of a remote electronic device, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (e.g., through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in them, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed via the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A motion state detection method, characterized in that the method comprises:
intercepting a preset number of images to be detected from a video stream containing a detection target;
comparing pixel values of pixel points at the same position in each image to be detected, and determining the difference value between the maximum pixel value and the minimum pixel value at the same position;
and comparing the difference value corresponding to each position with a first threshold value, and determining the motion state of the detection target according to the comparison result.
2. The method according to claim 1, wherein said intercepting a preset number of images to be detected from a video stream containing a detection target comprises:
and intercepting a preset number of images to be detected from the video stream containing the detection target based on a preset interception interval.
3. The method according to claim 1, wherein before comparing the pixel values of the pixel points at the same position in each image to be detected, the method further comprises:
adopting a trained neural network model to identify the target area where the detection target is located in each image to be detected;
and for each image to be detected, cropping the target area to a preset size, so that each cropped image to be detected has the same size and contains the target area.
4. The method according to claim 1, wherein the comparing the difference value corresponding to each position with a first threshold value and determining the motion state of the detection target according to the comparison result comprises:
determining the number of first-class pixel points and second-class pixel points based on the comparison result of the difference value corresponding to each position with a first threshold, and determining the motion state of the detection target according to the number; wherein first-class pixel points belong to a detection target in a motion state, and second-class pixel points belong to a detection target in a static state.
5. The method of claim 4, wherein the determining the number of first-class pixel points and second-class pixel points based on the comparison result of the difference value corresponding to each position with the first threshold comprises:
for each position, if the difference value corresponding to the position is not smaller than the first threshold, determining the pixel point corresponding to the position as a first-class pixel point;
and if the difference value corresponding to the position is smaller than the first threshold, determining the pixel point corresponding to the position as a second-class pixel point.
6. The method according to claim 4 or 5, wherein the determining the motion state of the detection target according to the number comprises:
if the ratio of the number of first-class pixel points to the total number is larger than a second threshold, determining that the detection target is in a motion state; wherein the total number is the sum of the numbers of first-class and second-class pixel points;
and if the ratio of the number of first-class pixel points to the total number is smaller than or equal to the second threshold, determining that the detection target is in a static state.
7. The method according to any one of claims 1 to 3, wherein before comparing the pixel values of the pixel points at the same position in each image to be detected, the method further comprises:
and converting the image to be detected into a gray image, and filtering the converted image to be detected.
8. A motion state detection apparatus, comprising:
an image acquisition module configured to capture a preset number of images to be detected from a video stream containing a detection target;
a difference determination module configured to compare pixel values of pixel points at the same position across the images to be detected and determine the difference value between the maximum pixel value and the minimum pixel value at each position;
and a state confirmation module configured to compare the difference value corresponding to each position with a first threshold and determine the motion state of the detection target according to the comparison result.
9. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the motion state detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the motion state detection method according to any one of claims 1 to 7.
CN202110981891.3A 2021-08-25 2021-08-25 Motion state detection method and related device Pending CN113674322A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110981891.3A CN113674322A (en) 2021-08-25 2021-08-25 Motion state detection method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110981891.3A CN113674322A (en) 2021-08-25 2021-08-25 Motion state detection method and related device

Publications (1)

Publication Number Publication Date
CN113674322A (en) 2021-11-19

Family

ID=78546155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110981891.3A Pending CN113674322A (en) 2021-08-25 2021-08-25 Motion state detection method and related device

Country Status (1)

Country Link
CN (1) CN113674322A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295016A (en) * 2013-06-26 2013-09-11 Tianjin University of Technology Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics
CN103473530A (en) * 2013-08-30 2013-12-25 Tianjin University of Technology Adaptive action recognition method based on multi-view and multi-mode characteristics
CN103905816A (en) * 2014-03-31 2014-07-02 South China University of Technology Surveillance video tampering blind detection method based on ENF correlation coefficients
CN107944431A (en) * 2017-12-19 2018-04-20 Chen Mingguang A kind of intelligent identification method based on motion change
CN110751678A (en) * 2018-12-12 2020-02-04 Beijing Didi Infinity Technology and Development Co., Ltd. Moving object detection method and device and electronic equipment
CN111723634A (en) * 2019-12-17 2020-09-29 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Image detection method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TANG, CHAO: "Human behavior recognition based on depth images", Journal of System Simulation, 8 May 2018 (2018-05-08), pages 3 *
LI, QINGYAO; ZOU, HAO; ZHAO, QUN; WANG, JIANYING; LIU, ZHICHAO; YANG, JINHUA: "Vehicle-dropped object detection based on an adaptive inter-frame difference method", Journal of Changchun University of Science and Technology (Natural Science Edition), no. 04, 15 August 2018 (2018-08-15) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140362A (en) * 2022-01-29 2022-03-04 Hangzhou Microimage Software Co., Ltd. Thermal imaging image correction method and device
CN114140362B (en) * 2022-01-29 2022-07-05 Hangzhou Microimage Software Co., Ltd. Thermal imaging image correction method and device
CN117326290A (en) * 2023-11-27 2024-01-02 China Coal Research Institute Co., Ltd. Belt state monitoring method and device based on dense optical flow
CN117326290B (en) * 2023-11-27 2024-02-02 China Coal Research Institute Co., Ltd. Belt state monitoring method and device based on dense optical flow

Similar Documents

Publication Publication Date Title
US11380232B2 (en) Display screen quality detection method, apparatus, electronic device and storage medium
CN111415106A (en) Truck loading rate identification method, device, equipment and storage medium
CN110598620B (en) Deep neural network model-based recommendation method and device
CN106327488B (en) Self-adaptive foreground detection method and detection device thereof
CN113674322A (en) Motion state detection method and related device
EP3343376B1 (en) Disk capacity prediction method, device and apparatus
CN111160469A (en) Active learning method of target detection system
CN110059761A (en) A kind of human body behavior prediction method and device
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
CN114772208B (en) Non-contact belt tearing detection system and method based on image segmentation
CN113610772B (en) Method, system, device and storage medium for detecting spraying code defect at bottom of pop can bottle
CN113469025A (en) Target detection method and device applied to vehicle-road cooperation, road side equipment and vehicle
CN115294332B (en) Image processing method, device, equipment and storage medium
CN113780287A (en) Optimal selection method and system for multi-depth learning model
CN112561885B (en) YOLOv 4-tiny-based gate valve opening detection method
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN116977334B (en) Optical cable surface flaw detection method and device
CN116843831B (en) Agricultural product storage fresh-keeping warehouse twin data management method and system
CN112560863A (en) Method, system, device and medium for detecting ground cleanliness of garbage recycling station
CN111259926A (en) Meat freshness detection method and device, computing equipment and storage medium
CN110580706A (en) Method and device for extracting video background model
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN115546539A (en) Magnetic flip plate liquid level reading method and device based on machine vision and readable medium
CN114445751A (en) Method and device for extracting video key frame image contour features
CN107316313A (en) Scene Segmentation and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination