CN114399539A - Method, apparatus and storage medium for detecting moving object - Google Patents


Info

Publication number
CN114399539A
Authority
CN
China
Prior art keywords
optical flow
image
pixels
gradient
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210043527.7A
Other languages
Chinese (zh)
Inventor
王浩
王帅
戚栋栋
桑贤超
刘仲印
薛庆鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infiray Technologies Co Ltd
Original Assignee
Infiray Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infiray Technologies Co Ltd filed Critical Infiray Technologies Co Ltd
Priority to CN202210043527.7A
Publication of CN114399539A
Priority to PCT/CN2022/098898 (WO2023134114A1)
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a moving object detection method, device and storage medium. Gradient calculation is performed on the pixels in the corresponding first detection windows of a current image and a previous image, and the original optical flow matrix of the current image is determined from the gradient calculation result, so the optical flow calculation is relatively simple and occupies few computational resources. In addition, the original optical flow matrix is merged according to a set merging step to obtain a stable representation of the optical flow field, so that locating the moving object from the optical flow field change region determined from the merged optical flow matrix has higher accuracy.

Description

Method, apparatus and storage medium for detecting moving object
Technical Field
The present application relates to the field of image processing and computing, and in particular, to a method and an apparatus for detecting a moving object, and a storage medium.
Background
Traditional dynamic target tracking methods are mainly aimed at detecting moving targets against a static background, for example the background difference method, the frame difference method and the optical flow method. The background difference method detects the motion region by storing a background image in advance and performing a difference operation with the observed image. Although it can obtain complete moving target information, the background image must be continuously updated as external conditions such as illumination change, and acquiring and updating the background model is troublesome. The frame difference method extracts the motion region in a sequence of images by directly comparing the gray-value differences of corresponding pixels in two adjacent frames and then applying a threshold. It updates quickly, is simple to implement, highly adaptive and does not require a background image, but a certain degree of gray difference must exist between the background and the moving target; otherwise holes may appear in the target and the moving target cannot be completely extracted.
Traditional moving target detection methods aimed at static backgrounds are severely limited for dynamic target tracking against a dynamic background. On handheld mobile devices the requirements are even higher: to realize dynamic target tracking against a dynamic background, both the tracking effect and the limited performance of the machine must be considered, which is a great challenge for traditional methods.
At present, one method for detecting a dynamic target against a dynamic background uses motion estimation with a feature point matching algorithm: a unique matching point is found in the image for each feature point of a reference image, the displacement vector of each matching point is obtained from the change in position between the two frames, all of the motion information is fed into a motion model to obtain the global motion parameters of the background, the previous and next frame images are stabilized with the global motion parameters, and a frame difference is then applied to find the dynamic target. Another existing implementation of dynamic target detection against a dynamic background is based on deep learning, which mainly uses an SVM (Support Vector Machine) or a neural network to label and train on dynamic targets in images so as to track them. However, calculating the global motion parameters with a matching point algorithm has high complexity and occupies a large share of computing resources, and the application range of the deep learning method is limited because not all devices support deep learning acceleration. In addition, based on the characteristics of infrared imaging, the existing way of identifying a dynamic target on infrared equipment is to assume by default that the dynamic target has a higher temperature than its surroundings and to track the target using the point of highest temperature. Although tracking the temperature peak in an infrared scene is simple to implement, some objects in certain scenes (e.g., rocks illuminated by sunlight) tend to have a higher temperature than the target. Therefore, for dynamic targets on infrared devices, detecting only the point of highest temperature has great limitations.
Disclosure of Invention
In order to solve the existing technical problems, the application provides a method, equipment and a storage medium for detecting a moving target, which can reduce the occupation of computational resources.
A moving object detection method, comprising:
acquiring a current image and a previous image associated with the current image;
performing gradient calculation according to pixels in corresponding first detection windows in the current image and the previous image, and determining an original optical flow matrix of the current image according to a gradient calculation result;
merging the original optical flow matrixes according to a set merging step length to obtain merged optical flow matrixes;
and determining an optical flow field change area according to the merged optical flow matrix, and determining the position of the moving object in the current image according to the optical flow field change area.
A detection device of a moving object comprises a memory and a processor;
the processor, when executing the computer program instructions stored in the memory, performs the steps of the detection method.
A computer readable storage medium having stored thereon computer program instructions;
the computer program instructions, when executed by a processor, implement the steps of the detection method.
As can be seen from the above, in the moving object detection method provided by the present application, gradient calculation is performed on the pixels in the corresponding first detection windows of the current image and the previous image, and the original optical flow matrix of the current image is determined from the gradient calculation result, so the optical flow calculation is relatively simple and occupies few computational resources. In addition, the original optical flow matrix is merged according to the set merging step to obtain a stable representation of the optical flow field, so that locating the moving object from the optical flow field change region determined from the merged optical flow matrix has higher accuracy.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram of moving object detection based on an optical flow field;
FIG. 2 is a flow chart illustrating a method for detecting a moving object according to some embodiments of the present disclosure;
FIG. 3 is a schematic diagram of a first detection window at a corresponding sliding step according to some embodiments of the present disclosure;
FIG. 4 is a flow chart illustrating a method for detecting a moving object according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a moving object detection apparatus according to some embodiments of the present disclosure;
fig. 6 is a schematic structural diagram of a device for detecting a moving object according to some embodiments of the present disclosure.
Detailed Description
The technical solution of the present application is further described in detail with reference to the drawings and specific embodiments of the specification.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of implementations of the present application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In the description of the present application, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description; they do not indicate or imply that the referred device or element must have a particular orientation or be constructed and operated in a particular orientation, and thus should not be construed as limiting the present application. In the description of the present application, "a plurality" means two or more unless otherwise specified.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
In order to solve the problems that existing moving target detection algorithms are highly complex and are limited when applied to infrared image detection, the present application provides a new moving target detection method. The new method provided by some embodiments of the application can solve the problem of infrared moving target detection in a dynamic scene: it can detect the position of a moving target even when a static object with a similar or higher temperature exists in the scene, and mark the position so that the user can observe the moving target more easily. The new moving object detection method provided by some embodiments of the present application can be applied to the detection of moving objects in infrared images, and also to the detection of moving objects with obvious motion characteristics in visible light images.
The moving object detection method provided by some embodiments of the present application is mainly implemented based on the optical flow method, which uses the change of pixels of an image sequence in the time domain and the correlation between adjacent frames to find the correspondence between the previous image and the current image, and thereby calculates the motion information of objects between the two images. In general, optical flow is caused by the movement of foreground objects in the scene, by motion of the camera, or by both. The optical flow on an image can be regarded as the motion track of an object or the scene; at a given moment it has a magnitude and a direction, i.e., it is a two-dimensional vector. Optical flow is defined per pixel, and the set of all optical flows is called the optical flow field. The optical flow field is a two-dimensional vector field that reflects the gray-scale change trend of every point on the image; it can be regarded as the instantaneous velocity field generated by the movement of gray-valued pixels on the image plane, and the information it contains is the instantaneous motion velocity vector of each image point.
The Horn-Schunck optical flow calculation algorithm (hereinafter referred to as the HS optical flow algorithm) is an optical flow calculation method based on two assumptions about the optical characteristics of a moving object. The two assumptions are:
1. the gray scale of the moving object is kept unchanged in a short interval time;
2. the velocity vector field within a given neighborhood varies slowly.
Based on the above assumptions, as shown in fig. 1, in a scene without dynamic objects, if the scene moves as a whole, the magnitude and direction of the optical flow across the image should be consistent; if there are objects in the scene that are displaced relative to the scene, their optical flow vectors are inconsistent with the optical flow vectors of the scene as a whole, so the positions of the moving objects can be found by locating the inconsistent optical flow vector regions (the change regions of the optical flow field).
The following introduces the derivation of the HS optical flow calculation algorithm:
from the first assumption above, the following equation (1) can be derived:
I(x,y,t)=I(x+δx,y+δy,t+δt) (1)
In equation (1), I(x, y, t) is the gray scale function of the moving object, and δx and δy are the displacements of a pixel in the x and y directions within time δt. Performing a Taylor expansion of the right side of equation (1) about (x, y, t) gives the following equation (2):
I(x+δx, y+δy, t+δt) = I(x, y, t) + (∂I/∂x)·δx + (∂I/∂y)·δy + (∂I/∂t)·δt + ε (2)
In equation (2), ε denotes the second- and higher-order terms in δx, δy and δt. Subtracting I(x, y, t) from both sides of equation (2) and dividing by δt gives the following equation (3):
(∂I/∂x)·(δx/δt) + (∂I/∂y)·(δy/δt) + ∂I/∂t + O(δt) = 0 (3)
In equation (3), O(δt) is a small quantity of order δt. Taking the limit δt → 0, equation (3) becomes the following equation (4):
(∂I/∂x)·(dx/dt) + (∂I/∂y)·(dy/dt) + ∂I/∂t = 0 (4)
Further, letting
u = dx/dt, v = dy/dt
and
Ix = ∂I/∂x, Iy = ∂I/∂y, It = ∂I/∂t,
equation (4) becomes the following equation (5):
Ixu+Iyv+It=0 (5)
Equation (5) is the optical flow constraint equation, which reflects the correspondence between gray scale and velocity. Since equation (5) contains two unknowns u and v, the velocity obviously cannot be solved from it alone, so the second assumption above, i.e. the global smoothness constraint on the optical flow, is introduced as the smoothing term shown in the following equation (6):
ζc² = (∂u/∂x)² + (∂u/∂y)² + (∂v/∂x)² + (∂v/∂y)² (6)
The weighted sum of the terms in equations (5) and (6) should be as small as possible over all pixel points, so the following equation (7) can be established:
ζ² = ∫∫ [α²·ζc² + (Ix·u + Iy·v + It)²] dx dy (7)
α in equation (7) is a smoothing weight coefficient representing the weight of the velocity smoothing term. Since the weighted sum of the smoothing term (6) and the optical flow constraint (5) needs to be minimized, the following expression (8) needs to be minimized:
F(u, v) = α²·ζc² + (Ix·u + Iy·v + It)² (8)
Applying the calculus of variations to equation (8), the Euler equations yield the following equations (9) and (10):
Ix·(Ix·u + Iy·v + It) = α²·∇²u (9)
Iy·(Ix·u + Iy·v + It) = α²·∇²v (10)
Expanding and rearranging equations (9) and (10) gives the following equations (11) and (12):
Ix²·u + Ix·Iy·v = α²·∇²u − Ix·It (11)
Ix·Iy·u + Iy²·v = α²·∇²v − Iy·It (12)
the laplacian in equations (11) and (12) is approximated by the difference between the velocity at a certain point and the average value of the velocities around it, and the following equations (13) and (14) are obtained:
(α² + Ix²)·u + Ix·Iy·v = α²·ū − Ix·It (13)
Ix·Iy·u + (α² + Iy²)·v = α²·v̄ − Iy·It (14)
by simplifying equations (13) and (14), the following equations (15) and (16) can be obtained:
u = ū − Ix·(Ix·ū + Iy·v̄ + It)/(α² + Ix² + Iy²) (15)
v = v̄ − Iy·(Ix·ū + Iy·v̄ + It)/(α² + Ix² + Iy²) (16)
The corresponding u and v are then computed from equations (15) and (16) with an iterative algorithm, yielding the optical flow field of the image.
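By way of illustration only, the classical iteration of equations (15) and (16) can be sketched as follows; Python with NumPy and SciPy is assumed, the neighbourhood averages ū and v̄ are approximated with a simple 3 × 3 mean filter, and the gradients Ix, Iy, It are assumed to be precomputed arrays. The sketch is not part of the application itself.

```python
import numpy as np
from scipy.signal import convolve2d  # assumed available for the local averaging

def horn_schunck(Ix, Iy, It, alpha=1.0, n_iter=100):
    """Classical HS iteration of equations (15)-(16)."""
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Iy, dtype=float)
    avg = np.ones((3, 3)) / 9.0                  # illustrative neighbourhood-average kernel
    for _ in range(n_iter):
        u_bar = convolve2d(u, avg, mode="same")  # ū: local average of u
        v_bar = convolve2d(v, avg, mode="same")  # v̄: local average of v
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den               # equation (15)
        v = v_bar - Iy * num / den               # equation (16)
    return u, v
```

The iteration loop in this sketch is exactly the part that the simplified algorithm described below removes.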
During the research, the inventors of the present application found that detecting a moving object with the existing HS optical flow algorithm requires iterative calculation, which occupies a large amount of computational resources; the method is therefore unsuitable for mobile devices with limited computing and storage resources, such as an FPGA (Field Programmable Gate Array).
Based on the above problem, a method flow diagram of the moving object detection method provided in some embodiments of the present application is shown in fig. 2, and the moving object detection method provided in this embodiment includes S2, S4, S6, and S8. The moving object detection method provided by the present embodiment is implemented on the moving object detection apparatus shown in fig. 5 or the moving object detection device shown in fig. 6.
S2: a current image and a prior image associated with the current image are acquired.
Specifically, S2 may be implemented by the image acquisition module 11 in the moving object detection apparatus shown in fig. 5, or by the memory 21 in the moving object detection device shown in fig. 6 storing an image acquisition program, and then by the processor 22 in the moving object detection device shown in fig. 6 executing the image acquisition program stored in the memory 21.
In some embodiments, the current image and the previous image of the current image may be obtained from a video image sequence as the two images to be detected. In other embodiments, the previous image and the current image may also be images captured in sequence by an image capturing device (e.g., a camera). The previous image may be the image immediately preceding the current image, or an earlier image separated from the current image by a certain time interval.
The optical flow method calculates the optical flow field from the motion information between two images at different positions on the time axis, so both the current image and the previous image need to be acquired. Therefore, in some embodiments, before a current frame image and a previous frame image are obtained from a video image sequence, a buffer frame needs to be defined for storing the previous frame image; specifically, the previous frame image is the frame immediately preceding the current frame image. At the very beginning of the execution of the moving object detection method the buffer frame is blank, but since the data of the buffer frame is required for calculating the optical flow, the buffer frame needs to be initialized when the current frame is detected to be the first frame of the video image sequence. The buffer frame is initialized by copying the first current frame image as its initial value. In each subsequent cycle the content of the buffer frame is updated: the current frame image is written into the buffer frame, so that when the next frame starts, the previous current frame serves as the previous frame image and the new frame serves as the current frame image.
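A minimal sketch of this buffer-frame bookkeeping is given below; the class and method names are illustrative assumptions, not taken from the application.

```python
import numpy as np

class FramePairSource:
    """Keeps a buffer frame holding the previous image across detection cycles."""

    def __init__(self):
        self.buffer_frame = None

    def next_pair(self, current: np.ndarray):
        if self.buffer_frame is None:           # first frame of the sequence:
            self.buffer_frame = current.copy()  # initialize the buffer with a copy of it
        previous = self.buffer_frame
        self.buffer_frame = current             # current frame becomes the next cycle's previous frame
        return current, previous
```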
S4: and performing gradient calculation according to pixels in the corresponding first detection windows in the current image and the previous image, and determining an original optical flow matrix of the current image according to a gradient calculation result.
S4 may be implemented by the raw optical flow calculation module 12 in the moving object detection apparatus shown in fig. 5, or by the memory 21 in the moving object detection device shown in fig. 6 storing a raw optical flow calculation program, and then by the processor 22 in the moving object detection device shown in fig. 6 when executing the raw optical flow calculation program stored in the memory 21.
Specifically, in S4, the first detection window is used to acquire the sub-images (the areas framed by the first detection window) at corresponding positions of the current image and the previous image, the gradient between the sub-images at corresponding positions is then calculated, and the original optical flow matrix of the current image is determined from the gradient calculation result; that is, one optical flow vector of the original optical flow matrix can be calculated from the gradient values between each pair of sub-images (the corresponding sub-images of the current image and the previous image).
Specifically, S4 further includes:
s41: and respectively and sequentially traversing the current image and the previous image by adopting a first detection window with a set size in a sliding manner.
S42: and performing gradient calculation according to pixels contained in the first detection window in the current image and the previous image corresponding to each sliding step of the first detection window.
S43: and determining an optical flow vector corresponding to each sliding step of the first detection window according to the gradient calculation result, and further determining an original optical flow matrix of the current image.
In some embodiments, the first detection window is equal in length and width, and traversing the current image fr2 and the prior image fr1 using the first detection window may be simultaneously and synchronously sequentially traversing the current image fr2 and the prior image fr1 using the first detection window. The sequential sliding traversal means that the traversal paths of the first detection window in the current image fr2 and the previous image fr1 are the same, for example, the traversal can be performed from left to right and from top to bottom from the first pixel point of the current image fr2 and the previous image fr1, or from right to left and from bottom to top from the last pixel point of the current image fr2 and the previous image fr 1.
In other embodiments, the time for the first detection window to traverse the current image fr2 is not synchronized with the time for traversing the previous image fr1, which may be a sequential relationship, and the corresponding position of the first detection window in the previous image needs to be determined according to the position of the first detection window in the current image fr2 before performing the gradient calculation according to the pixels in the corresponding first detection window. I.e. the first and second sets of pixels are corresponding pixels in the first detection window in the same sliding position in the current image fr2 and the previous image fr 1.
The calculation of the gradient from the first and second groups of pixels means that the gradient of pixels between the first detection window in the current image fr2 and the first detection window in the previous image fr1 is calculated from the first and second groups of pixels. At each sliding step of the first detection window, the first detection window in the previous image fr1 then corresponds to a first sub-image fr11 of the previous image fr1, the first detection window in the current image fr2 corresponds to a second sub-image fr21 in the current image fr2, the position of the second sub-image fr21 in the current image fr2 is the same as the position of the first sub-image fr11 in the previous image fr 1. The calculation of the gradient from the first and second sets of pixels is the calculation of the image gradient between the second sub-image fr21 and the first sub-image fr 11. After the image gradients corresponding to the first detection window in each sliding step are obtained, the corresponding optical flow vectors can be calculated based on a gradient method. In calculating an optical flow vector based on an image gradient, as in the HS optical flow calculation method, it is necessary to set two constraint conditions as follows:
1. the gray scale of the moving object is kept unchanged in a short interval time;
2. the velocity vector field within a given neighborhood varies slowly.
A corresponding optical flow vector is calculated based on the image gradient calculation result and the above two constraint conditions. Determining the original optical flow matrix of the current image according to the optical flow vectors corresponding to the respective sliding steps of the first detection window means that each element of the original optical flow matrix is such an optical flow vector, and the position of each optical flow vector in the original optical flow matrix is determined by the position of the corresponding first detection window in the current image. The first detection window may be chosen as a matrix with the same number of rows and columns, and may be set as a 2 × 2 minimum detection unit matrix in order to reduce the computational load of the moving object detection process.
In addition, it should be noted that the optical flow vector calculated above includes a first component u in the x-axis direction and a second component v in the y-axis direction. In some embodiments, each original optical flow vector in the original optical flow matrix may be stored as (u, v), i.e., the original optical flow matrix is a matrix whose elements contain both the first and the second component. In other embodiments, the original optical flow matrix includes a first component original matrix and a second component original matrix: each element of the first component original matrix is the corresponding first component, and each element of the second component original matrix is the corresponding second component. Therefore, in some embodiments, before determining the original optical flow matrix, the motion detection method further needs to define a first component original matrix U and a second component original matrix V, and then fill the calculated first and second components into U and V respectively to determine the original optical flow matrix.
In some embodiments provided by the present application, the current image fr2 and the previous image fr1 are respectively traversed by sliding a first detection window of a preset size, each of the first detection window at each sliding step of the current image fr2 and the first detection window at a corresponding position in the previous image fr1 can respectively intercept the second sub-image fr21 of the current image fr2 and the first sub-image fr11 of the previous image fr1, then calculate an image gradient according to the second sub-image fr21 and the first sub-image fr11, and calculate an optical flow vector corresponding to each sliding step according to the image gradient. Because the calculation of the image gradient of each sub-image is relatively simple, the calculation process of the optical flow vector based on the image gradient between each sub-image occupies less calculation resources, and can be realized on mobile equipment (such as FPGA) with less calculation and storage resources.
S6: and merging the original optical flow matrixes according to the original optical flow matrixes and the set merging step length to obtain merged optical flow matrixes.
Specifically, S6 may be implemented by the merged optical flow calculation module 13 in the moving object detection apparatus shown in fig. 5, or by the memory 21 in the moving object detection device shown in fig. 6 storing the merged optical flow calculation program, and by the processor 22 in the moving object detection device shown in fig. 6 executing the merged optical flow calculation program stored in the memory 21.
In order to prevent the original optical flow matrix, which is highly sensitive and easily affected by image changes, from causing a large error in the subsequent calculation of the optical flow field change region, the original optical flow matrix needs to be merged to obtain a stable optical flow matrix representation. Before merging, a merging step needs to be defined. The step here includes a first step step1 for merging in the row direction (x-axis direction) and a second step step2 for merging in the column direction (y-axis direction). The original optical flow matrix is then merged according to the preset steps: the number of rows of the merged optical flow matrix is the number of rows of the original optical flow matrix divided by step1, and its number of columns is the number of columns of the original optical flow matrix divided by step2. In addition, if each element of the original optical flow matrix is a pair (u, v), each element of the merged optical flow matrix is also a pair (u, v). If the original optical flow matrix is composed of the first component original matrix U and the second component original matrix V, the merged optical flow matrix likewise includes a first component merged matrix Us and a second component merged matrix Vs, where the number of rows of Us (or Vs) is the number of rows of U (or V) divided by step1, and the number of columns of Us (or Vs) is the number of columns of U (or V) divided by step2. Each element of the merged optical flow matrix may be the average of the elements of the original optical flow matrix within the corresponding merging block.
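A possible sketch of this merging step is given below, assuming the original optical flow matrix is stored as the two component matrices U and V and that their sizes are integer multiples of the merging steps; NumPy is used only for illustration.

```python
import numpy as np

def merge_flow(U: np.ndarray, V: np.ndarray, step1: int, step2: int):
    """Average non-overlapping step1 x step2 blocks of U and V to obtain Us and Vs."""
    rows, cols = U.shape
    r, c = rows // step1, cols // step2
    Us = U[:r * step1, :c * step2].reshape(r, step1, c, step2).mean(axis=(1, 3))
    Vs = V[:r * step1, :c * step2].reshape(r, step1, c, step2).mean(axis=(1, 3))
    return Us, Vs
```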
The original optical flow field is merged to obtain a more stable optical flow vector, so that the detection accuracy is improved, and meanwhile, the calculation amount of a detection algorithm can be reduced.
S8: and determining an optical flow field change area according to the merged optical flow matrix, and determining the position of the moving object in the current image according to the optical flow field change area.
S8 may be implemented by the moving object position determining module 14 in the moving object detecting apparatus shown in fig. 5, or by the memory 21 in the moving object detecting device shown in fig. 6 storing a moving object determining program, and then by the processor 22 in the moving object detecting device shown in fig. 6 executing the original moving object determining program stored in the memory 21.
Based on the schematic diagram of optical-flow-based moving object detection shown in fig. 1, once the representation of the optical flow field corresponding to the current image (the merged optical flow matrix) has been determined, the optical flow field change region can be determined, and the position of the moving object is then determined from that change region. A moving target detection method that determines the position of the moving target from changes in the optical flow field is suitable for detecting moving targets in infrared images, and overcomes the limitation of existing methods that locate the moving target only by detecting the temperature peak in the infrared image.
As can be seen from the above, in the moving object detection method provided by the present application, gradient calculation is performed on the pixels in the corresponding first detection windows of the current image and the previous image, and the original optical flow matrix of the current image is determined from the gradient calculation result, so the optical flow calculation is relatively simple and occupies few computational resources. In addition, the original optical flow matrix is merged according to the set merging step to obtain a stable representation of the optical flow field, so that locating the moving object from the optical flow field change region determined from the merged optical flow matrix has higher accuracy.
In some embodiments, in order to accelerate the traversal of the current image fr2 and the previous image fr1 in S4 and to reduce the amount of data processed in moving object detection, the current image fr2 and the previous image fr1 are not the source images captured by the image capturing apparatus but thumbnails of the corresponding source images. S2 then specifically includes: acquiring the source images captured in sequence by the image capturing device, and then performing thumbnail processing on each source image to obtain the reduced current image fr2 and the previous image fr1 associated with the current image. For example, the corresponding source images are reduced to 1/2 of their size to obtain the corresponding current image fr2 and previous image fr1. In some embodiments, the source image corresponding to the previous image is the source image immediately preceding the source image corresponding to the current image.
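One possible way to produce the 1/2-size thumbnails is simple 2 × 2 block averaging, as sketched below; the application does not prescribe a particular scaling method, so this is only an assumed implementation for a grayscale (e.g., infrared) image.

```python
import numpy as np

def half_size(src: np.ndarray) -> np.ndarray:
    """Reduce a grayscale (2-D) image to half its width and height by averaging 2x2 blocks."""
    h, w = src.shape[:2]
    h, w = h - h % 2, w - w % 2  # crop to even dimensions
    blocks = src[:h, :w].astype(float).reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))
```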
In some embodiments, S42 further includes:
s421: and acquiring a first group of pixels contained in the first detection window in the current image and a second group of pixels contained in the first detection window in the previous image corresponding to each sliding step in the synchronous sliding process of the first detection window in the current image and the previous image.
In the present embodiment, the current image fr2 and the previous image fr1 are traversed simultaneously by the first detection window, i.e. the first detection window slides synchronously in the current image fr2 and the previous image fr1, i.e. the first detection window in the current image fr2 slides simultaneously with the first detection window in the previous image fr1, with the same frequency, and the first detection window in the current image fr2 and the first detection window in the previous image fr1 are located at the same position in the respective images every sliding step.
The obtaining of the first group of pixels and the second group of pixels corresponding to each sliding step means that the pixels in the first detection window in the current image fr2 and the first detection window in the previous image fr1 corresponding to each sliding step are taken out, and the pixels are numbered according to the pixel coordinates of the taken out pixels in the image to which the pixels belong. The first group of pixels corresponds to the respective pixels in the second sub-image fr21, and the second group of pixels corresponds to the respective pixels in the first sub-image fr 11.
In order to describe S421 and the subsequent related steps more clearly, the present application exemplifies each step by taking the first detection window as a 2 × 2 matrix, but the size of the first detection window is not limited to 2 × 2 in other embodiments.
As shown in fig. 3, for a given sliding step the left window is the first detection window in the previous image fr1, i.e. the first sub-image fr11, and the right window is the first detection window at the corresponding position in the current image fr2, i.e. the second sub-image fr21. The pixels of the first sub-image fr11 lie at the intersections of rows i and i+1 with columns j and j+1 of the previous image fr1, and the pixels of the first and second groups are numbered according to their positions in the image to which they belong. For example, the second group of pixels comprises the pixel t1 of row i, column j, the pixel t3 of row i, column j+1, the pixel t5 of row i+1, column j, and the pixel t7 of row i+1, column j+1 of the previous image fr1; the first group of pixels comprises the pixel t2 of row i, column j, the pixel t4 of row i, column j+1, the pixel t6 of row i+1, column j, and the pixel t8 of row i+1, column j+1 of the current image fr2.
S422: and respectively carrying out image gradient calculation, line gradient calculation and column gradient calculation according to the first group of pixels and the second group of pixels so as to respectively determine corresponding time direction gradient, line direction gradient and column direction gradient.
After the first group of pixels corresponding to the second sub-image fr21 and the second group of pixels corresponding to the first sub-image fr11 have been obtained, the inter-image gradient, the inter-row gradient and the inter-column gradient can be calculated from the corresponding pixels, so as to obtain the image gradients corresponding to each pair of sub-images: a time-direction gradient gi corresponding to the inter-image gradient between the second sub-image fr21 and the first sub-image fr11, a row-direction gradient gh corresponding to the inter-row gradient between the rows of the second sub-image fr21 and the first sub-image fr11, and a column-direction gradient gw corresponding to the inter-column gradient between the columns of the second sub-image fr21 and the first sub-image fr11.
In some embodiments, the specific step of S422 is further described based on the specific example of the first detection window provided above, which further includes S4221, S4222, and S4223. However, the order of S4221, S4222 and S4223 is not particularly limited in this application, and the order of the other steps in this application is not particularly limited unless otherwise specified.
S4221: calculating an average pixel difference value between corresponding respective pixels in the first set of pixels and the second set of pixels to obtain the temporal directional gradient.
As can be seen from the above description of the example, the position of the pixel t2 of the first group of pixels within the current image fr2 is the same as the position of the pixel t1 of the second group of pixels within the previous image fr1, so the pixel t2 of the first group and the pixel t1 of the second group are corresponding pixels. Similarly, t3 and t4 are corresponding pixels, t5 and t6 are corresponding pixels, and t7 and t8 are corresponding pixels. The time-direction gradient gi is calculated according to the following equation (17):
gi=[(t2-t1)+(t4-t3)+(t6-t5)+(t8-t7)]/4 (17)
s4222: and calculating the average pixel difference value between each pixel of the next row and each pixel corresponding to the previous row in the first group of pixels and the second group of pixels to obtain the row direction gradient.
As can be seen from the above example, the pixels t6 and t8 of the first group of pixels and the pixels t5 and t7 of the second group of pixels are all pixels of row i+1 of the image to which they belong, while the pixels t2 and t4 of the first group and the pixels t1 and t3 of the second group are all pixels of row i of the image to which they belong. In addition, since the pixels t1 and t5 belong to the same column, the pixel of the previous row corresponding to t5 is t1; similarly, the pixel of the previous row corresponding to t7 is t3, the one corresponding to t6 is t2, and the one corresponding to t8 is t4. The row-direction gradient gh is calculated according to the following equation (18):
gh=[(t5-t1)+(t7-t3)+(t6-t2)+(t8-t4)]/4 (18)
s4223: and calculating the average pixel difference value between each pixel of the latter column and each pixel corresponding to the former column in the first group of pixels and the second group of pixels to obtain the column direction gradient.
As can be seen from the above description of the example, the pixels t4 and t8 of the first group of pixels and the pixels t3 and t7 of the second group of pixels are all pixels of column j+1 of the image to which they belong, while the pixels t2 and t6 of the first group and the pixels t1 and t5 of the second group are all pixels of column j of the image to which they belong. In addition, since the pixels t1 and t3 belong to the same row, the pixel of the previous column corresponding to t3 is t1; similarly, the pixel of the previous column corresponding to t7 is t5, the one corresponding to t4 is t2, and the one corresponding to t8 is t6. The column-direction gradient gw is calculated according to the following equation (19):
gw=[(t3-t1)+(t7-t5)+(t4-t2)+(t8-t6)]/4 (19)
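Under the pixel numbering of fig. 3 (t1, t3, t5, t7 from the previous image fr1 and t2, t4, t6, t8 from the current image fr2), equations (17) to (19) for a single window position can be sketched as follows; the images are assumed to be floating-point arrays so that the differences do not overflow, and the function name is illustrative.

```python
def window_gradients(fr1, fr2, i, j):
    """Time, row and column gradients for the 2x2 window with top-left corner (i, j)."""
    t1, t3 = fr1[i, j], fr1[i, j + 1]          # previous image, row i
    t5, t7 = fr1[i + 1, j], fr1[i + 1, j + 1]  # previous image, row i+1
    t2, t4 = fr2[i, j], fr2[i, j + 1]          # current image, row i
    t6, t8 = fr2[i + 1, j], fr2[i + 1, j + 1]  # current image, row i+1
    gi = ((t2 - t1) + (t4 - t3) + (t6 - t5) + (t8 - t7)) / 4.0  # equation (17)
    gh = ((t5 - t1) + (t7 - t3) + (t6 - t2) + (t8 - t4)) / 4.0  # equation (18)
    gw = ((t3 - t1) + (t7 - t5) + (t4 - t2) + (t8 - t6)) / 4.0  # equation (19)
    return gi, gh, gw
```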
as can be obtained from the above description of the HS optical flow calculation method, the HS optical flow algorithm needs to perform complex iterative calculations according to equations (15) and (16) to obtain the optical flow vector, and such calculation method occupies a large computational resource. On the other hand, the inventors of the present application have proposed a simplified HS optical flow algorithm based on that (I) is eliminated when calculating optical flow vectors based on the temporal gradient, the row gradient, and the column gradientx*u+Iy*v+It) The inner Ix u + Iy v, only I is retainedtAs parameters, and the iterative process is eliminated, i.e. the first term and the second term of equations (15), (16) are omitted, so that equations (15), (16) are converted into the following equations (20), (21), respectively:
u = −Ix·It/(α² + Ix² + Iy²) (20)
v = −Iy·It/(α² + Ix² + Iy²) (21)
i in equations (20), (21)tTime direction gradient g equivalent to the above descriptioni,IxA gradient g in the row direction equivalent to that described aboveh,IyColumn direction gradient g equivalent to that described abovew
Thus, in some embodiments, the simplified HS optical flow calculation method provided herein is used: the step of calculating the optical flow vector corresponding to each sliding step from the time-direction gradient gi, the row-direction gradient gh and the column-direction gradient gw and determining the original optical flow matrix, i.e. S43, further includes S431, S432 and S433.
S431: and calculating according to the time direction gradient, the row direction gradient and the column direction gradient to obtain a first intermediate parameter, wherein the size of the first intermediate parameter is in direct proportion with the time direction gradient and in inverse proportion with a second intermediate parameter, and the second intermediate parameter is the sum of a preset scaling coefficient, the square of the row direction gradient and the square of the column direction gradient.
Specifically, the first intermediate parameter δ may be obtained by the following equation (22):
δ = gi/(α + gh² + gw²) (22)
where α is a scaling coefficient used to keep the subsequently obtained optical flow vectors within a suitable range; it is determined empirically according to the actual situation. In equation (22), α + gh² + gw² is the second intermediate parameter.
S432: and calculating according to the first intermediate parameter and the row direction gradient to obtain a first component of the optical flow vector in the row direction, wherein the magnitude of the first component is proportional to the product of the first intermediate parameter and the row direction gradient.
In some embodiments, the first component u (i, j) corresponding to each sliding step is obtained according to equation (20) above, which is calculated as shown in equation (23) below:
u(i, j) = −gh·gi/(α + gh² + gw²) = −gh·δ (23)
obviously, the above equation (23) corresponds to equation (20) obtained by the simplified HS calculation method.
S433: and calculating according to the first intermediate parameter and the column direction gradient to obtain a second component of the optical flow vector in the column direction, wherein the magnitude of the second component is proportional to the product of the first intermediate parameter and the column direction gradient.
In some embodiments, the second component v (i, j) corresponding to each sliding step is obtained according to equation (21) above, which is calculated as shown in equation (24) below:
v(i, j) = −gw·gi/(α + gh² + gw²) = −gw·δ (24)
obviously, the above equation (24) corresponds to equation (21) obtained by the simplified HS calculation method.
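Combining S431 to S433, the optical flow vector of one window under the simplified algorithm reduces to a handful of arithmetic operations, as sketched below; the sign convention follows equations (20) and (21), and only the magnitude of the resulting vectors matters for the later variance-based detection.

```python
def flow_vector(gi: float, gh: float, gw: float, alpha: float = 1.0):
    """Simplified HS flow vector per equations (22)-(24); alpha is the empirical scaling coefficient."""
    delta = gi / (alpha + gh ** 2 + gw ** 2)  # first intermediate parameter, equation (22)
    u = -gh * delta                           # first component, equation (23)
    v = -gw * delta                           # second component, equation (24)
    return u, v
```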
Because the optical flow vectors obtained by the simplified HS optical flow algorithm in some embodiments of the present application are highly sensitive and easily affected by image changes, using them directly to determine the optical flow field change region would produce a large error and make the finally detected position of the moving target inaccurate. Therefore, the original optical flow matrix obtained in S4 needs to be merged to obtain a stable optical flow field representation of the current image, i.e., the merged optical flow matrix obtained in S6.
In some embodiments, S6 further includes: traversing the original optical flow matrix by using a second detection window with a set size, determining a merged optical flow vector corresponding to each sliding step of the second detection window according to the average value of each optical flow vector in the second detection window corresponding to each sliding step of the second detection window, and determining a merged optical flow matrix according to the merged optical flow vector.
Specifically, before merging the original optical flow matrices, a merging step is defined, that is, a second detection window with a predetermined size is defined. The number of rows of the second detection window is step1, the number of columns is step2, and the second preset window is a matrix of step1 × step 2. In some embodiments, step1 and step2 may be set equal, and in other embodiments step1 and step2 may not be equal.
That is, at each sliding step of the second detection window, the optical flow vectors inside the window are summed and then averaged to obtain a merged optical flow vector, and the calculated average value is stored at the corresponding position in a new matrix to obtain the merged optical flow matrix.
If the original optical flow matrix is composed of the first component original matrix U and the second component original matrix V, the second detection window needs to traverse U and V separately, and the first component merged matrix Us and the second component merged matrix Vs are determined from the average of the elements inside the second detection window at each sliding step, in U and in V respectively. The number of rows of Us (or Vs) is the number of rows of U (or V) divided by step1, and the number of columns of Us (or Vs) is the number of columns of U (or V) divided by step2. Each element of the merged optical flow matrix may be the average of the elements of the original optical flow matrix within the corresponding merging block.
As shown in the schematic diagram of fig. 1 for determining the position of a moving object from the optical flow field change region, if the entire scene moves, the optical flow vectors of the static objects in the scene differ somewhat, but in theory their approximate directions and magnitudes do not differ greatly, whereas a dynamic object causes a large disturbance to the surrounding optical flow vectors due to its own movement and its deformation on the image. Therefore, in some embodiments, a simple variance method can be used to determine where the optical flow field changes most severely, i.e., determining the optical flow field change region according to the merged optical flow matrix in S8 (S81) further comprises: traversing the merged optical flow matrix with a third detection window of a set size, and determining the optical flow field change region from the variance between the central merged optical flow vector in the third detection window at each sliding step and the adjacent merged optical flow vectors.
Further, determining the optical flow field change region according to the magnitude of the variance (S811) in S81 includes: judging whether the variance is greater than or equal to a preset threshold value; if so, marking the position point corresponding to the central merged optical flow vector in a blank matrix of the same size as the merged optical flow matrix to obtain a marking matrix, and determining the optical flow field change region from the region where the marked points in the marking matrix are located.
Taking as an example a merged optical flow matrix comprising the first component merged matrix Us and the second component merged matrix Vs, with the third detection window being a 3 × 3 matrix, the process of determining the optical flow field change region by the variance method in S81 is further described as follows:
Us and Vs are each traversed using a 3 × 3 third detection window. Taking the traversal of Us as an example, each time the third detection window slides one step, the variance of the central point (i, j) of the third detection window and the 8 vector components around it is calculated and compared with a preset threshold; if it exceeds the threshold, the current central point is considered a likely candidate point of the dynamic target. The current central point is marked in a blank matrix tmp of the same size as Us to form a first component marking matrix corresponding to the first component merged matrix. For the traversal of Vs, the corresponding central points serving as candidate points are likewise marked in a blank matrix of the same size as Vs to form a second component marking matrix corresponding to the second component merged matrix. The region determined by the marked points in the marking matrices is the optical flow field change region.
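A compact sketch of this variance test over one component matrix is given below (the same routine would be run over Vs); the threshold value and the loop structure are illustrative assumptions.

```python
import numpy as np

def mark_disturbance(Us: np.ndarray, threshold: float) -> np.ndarray:
    """Mark points whose 3x3 neighbourhood variance reaches the threshold."""
    rows, cols = Us.shape
    tmp = np.zeros((rows, cols), dtype=np.uint8)  # blank marking matrix, same size as Us
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            patch = Us[i - 1:i + 2, j - 1:j + 2]  # central point (i, j) and its 8 neighbours
            if patch.var() >= threshold:          # large variance = strong optical flow disturbance
                tmp[i, j] = 1                     # candidate dynamic-target point
    return tmp
```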
In some embodiments, the determining the position of the moving object in the current image according to the optical flow field change region in S8 (S82) specifically includes S821 and S822.
S821: and communicating the optical flow field change area by using an image expansion algorithm.
After traversing all the points in the combined matrix, the areas where the corresponding mark points in the mark matrix are located are most likely to be the areas where the dynamic objects are located, and the blanks of the areas are filled up through an image expansion algorithm, so that the discontinuous areas can be basically connected to obtain a communicated optical flow field change area.
S822: and searching the boundary of the communicated optical flow field change area by using a space communication domain searching algorithm, drawing a corresponding boundary frame, and determining the position of the moving target in the current image according to the position of the boundary frame.
The position of the bounding box of the optical flow field change region is the final output representing the position of the moving target. With the moving object detection algorithm provided by the present application, the finally obtained bounding box representing the position of the moving object is not necessarily rectangular; if the shape of the moving object is complex, the boundary finally drawn around the marked region will be an irregular, jagged shape.
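A possible sketch of S821 and S822 using OpenCV 4's dilation and contour routines is given below (OpenCV is assumed to be available only for illustration; on an FPGA the same morphological dilation and connected-domain search would be implemented directly). Rectangular boxes are used here for simplicity, whereas, as noted above, the boundary actually drawn around the marked region may be irregular.

```python
import numpy as np
import cv2  # assumed available for dilation and contour search

def locate_targets(mark: np.ndarray, scale: int = 1):
    """Dilate the marking matrix to connect the change region, then return bounding boxes."""
    kernel = np.ones((3, 3), np.uint8)
    connected = cv2.dilate(mark, kernel, iterations=2)  # fill gaps between marked points
    contours, _ = cv2.findContours(connected, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]     # (x, y, w, h) in merged-matrix coordinates
    # scale maps the boxes back to coordinates in the current image
    return [(x * scale, y * scale, w * scale, h * scale) for (x, y, w, h) in boxes]
```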
The order of the above-described steps of the moving object detection method provided by the present application is not limited. In order to make the method more clearly understood, a specific embodiment is provided below.
As shown in fig. 4, the moving object detection method described in this embodiment successively includes the following steps:
S01: Scale the images. A current source image and its previous source image, captured in sequence by the image capturing device, are acquired, and both source images are reduced to 1/2 of their size to obtain the current image fr2 and the previous image fr1 associated with it; that is, the current image fr2 is 1/2 the size of the current source image, and the previous image fr1 is 1/2 the size of the previous source image. Before S01 is performed, a buffer frame storing the previous source image also needs to be defined. The image capturing device may be an infrared camera or a visible light camera.
S02: the original optical flow field is calculated. Traversing the current image and the previous image obtained after the thumbnail by using a 2 × 2 first detection window simultaneously, sliding the second detection window once each time, taking out four adjacent pixels in the image for performing subsequent gradient calculation, and taking out the pixels in the first detection window at the corresponding position in the current image and the previous image to form 8 pixels, and numbering the 8 pixels respectively, as shown in fig. 3 as t1, t2, t3, t4, t5, t6, t7, and t 8. Since the above has been described in detail with respect to fig. 3, it will not be described in detail here. Then, the corresponding time direction gradient (inter-image gradient), row direction gradient and column direction gradient are calculated according to the following formula group based on each extracted pixel.
[Formula group (1): time-direction, row-direction and column-direction gradients computed from the pixels t1–t8 — published as an image and not reproduced here.]
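A plausible reconstruction of this formula group, offered only as a sketch, follows the averaged-difference definitions of claims 3 and 4 and assumes that t1–t4 are the 2 × 2 window pixels from the previous image and t5–t8 the corresponding pixels from the current image, each group ordered row by row (this assignment of t1–t8 is an assumption):

$$
\begin{aligned}
I_t &= \tfrac{1}{4}\bigl[(t_5 + t_6 + t_7 + t_8) - (t_1 + t_2 + t_3 + t_4)\bigr],\\
I_x &= \tfrac{1}{4}\bigl[(t_3 + t_4 + t_7 + t_8) - (t_1 + t_2 + t_5 + t_6)\bigr],\\
I_y &= \tfrac{1}{4}\bigl[(t_2 + t_4 + t_6 + t_8) - (t_1 + t_3 + t_5 + t_7)\bigr],
\end{aligned}
$$

where $I_t$, $I_x$ and $I_y$ denote the time-direction, row-direction and column-direction gradients respectively.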
The calculated gradients are then substituted into the following formula group to obtain the corresponding original optical flow vectors, which are stored in a predefined original optical flow matrix. The original optical flow matrix comprises a first-component original optical flow matrix U storing the first component u of each original optical flow vector and a second-component original optical flow matrix V storing the second component v.
[Formula group (2): original optical flow vector components u and v computed from the gradients — published as an image and not reproduced here.]
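A hedged reconstruction of this formula group, consistent with claim 5 and with the simplified Horn–Schunck update for a zero initial flow field (the sign convention and any constant factors are assumptions), is:

$$
P = \frac{I_t}{\alpha + I_x^2 + I_y^2},\qquad u = -P\,I_x,\qquad v = -P\,I_y,
$$

where $\alpha$ is the preset scaling coefficient, $P$ is the first intermediate parameter, and $u$ and $v$ are the row- and column-direction flow components stored in U and V.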
S03: the original optical flow fields are merged. This step combines (sums and averages) the vectors of the original optical flow field to obtain a stable vector representation. Before merging the original optical flow field, a step1 × step2 matrix is defined as a second detection window, i.e., the side length of the merged detection window is defined. This Step requires traversing the U and V matrices separately, and the values under the second detection window of Step1 Step2 each are summed and averaged and stored in a predefined new Us, Vs matrix, where Us is the length of U divided by Step1 and Us is the width of U divided by Step 2. Similarly, the length of Vs is the length of V divided by step1, the width of Vs is the width of V divided by step2, step1 is step 2.
S04: and traversing the optical flow field and judging the disturbance degree of the optical flow. The step is to judge the position where the optical flow field changes most severely by using a simple variance method. The Us and the Vs are traversed by using 3 x 3 third detection windows respectively, the variances of the middle points (i and j) in the third detection windows and 8 corresponding components around the middle points are calculated every step of sliding the third detection windows, so that the optical flow disturbance degree is judged according to the variances, and the greater the variance is, the greater the optical flow disturbance is.
S05: marking areas of the optical flow field where the perturbations are large. If the variance of the middle point (i, j) and its surrounding 8 vector components is greater than a threshold value threshold, indicating that the optical flow disturbance degree of the point is large, it is considered that the current point is likely to be a candidate point of the dynamic target, and it needs to be marked in a blank matrix tmp with the same size as Us.
S06: and drawing a boundary frame. After all the points in the merged optical flow matrix are traversed, the areas where the mark points in the mark matrix are located are most likely to be the areas where the dynamic objects are located, then the blanks of the areas are further filled through an image expansion algorithm, and discontinuous areas can be basically connected to obtain a connected optical flow field change area. And then, carrying out a space connected domain searching algorithm on the regions, finding the boundaries of the regions, and drawing out corresponding boundary frames, wherein the positions of the boundary frames are the detection positions of the moving targets.
As can be seen from the above description, the moving object detection method provided in the embodiments of the present application skillfully exploits the relative motion between the scene and the moving object and detects the moving object with a simplified version of the classical HS optical flow method, which effectively reduces the computational resources required by the detection algorithm. Furthermore, merging the original optical flow field yields more stable optical flow vectors, which improves detection accuracy while further reducing the amount of computation. In addition, because the optical flow vectors around a moving object are irregular, the irregularity of a detection area is measured with the variance to locate the dynamic target, so the method is suited to outdoor scenes with complex environmental information and irregularly shaped moving objects. Moreover, the computations are performed on the scaled-down images, which minimizes the amount of computation while maintaining detection accuracy. The method can therefore be implemented on an FPGA with relatively limited computational resources, providing real-time marking of dynamic targets at reduced cost.
In some embodiments, the present application further provides a moving object detection device, shown in fig. 5, which includes an image acquisition module 11, an original optical flow calculation module 12, a merged optical flow calculation module 13 and a moving object position determination module 14. The image acquisition module 11 is mainly configured to acquire a current image and a previous image associated with the current image. The original optical flow calculation module 12 is mainly configured to traverse the current image and the previous image in sequence with a first detection window of a set size; for each sliding step of the first detection window, it performs gradient calculation according to a first group of pixels contained in the first detection window in the current image and a second group of pixels contained in the first detection window in the previous image, determines the optical flow vector corresponding to that sliding step from the gradient calculation result, and determines the original optical flow matrix of the current image from these optical flow vectors. The merged optical flow calculation module 13 is configured to merge the original optical flow matrix according to a set merging step length to obtain a merged optical flow matrix. The moving object position determination module 14 is mainly configured to determine an optical flow field change area according to the merged optical flow matrix and to determine the position of the moving object in the current image according to the optical flow field change area.
As shown in fig. 6, in some embodiments, the present application further provides a moving object detection device, which includes a memory 21 and a processor 22. When executing the computer program instructions stored in the memory 21, the processor 22 performs the steps of the detection method provided by any embodiment of the present application. Because the moving object detection method provided herein requires relatively little computation, the processor 22 may be implemented as an FPGA.
In addition, the present application also provides a computer readable storage medium storing computer program instructions; the computer program instructions, when executed by a processor, implement the steps of the detection method according to any one of the embodiments provided herein.
The processor may be a CPU (Central Processing Unit), an ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement embodiments of the present application. The moving object detection device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs together with one or more ASICs or FPGAs.
The memory may include a high-speed RAM (Random Access Memory) and may further include a non-volatile memory (NVM), such as at least one disk memory.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed herein shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A moving object detection method, comprising:
acquiring a current image and a previous image associated with the current image;
performing gradient calculation according to pixels in corresponding first detection windows in the current image and the previous image, and determining an original optical flow matrix of the current image according to a gradient calculation result;
merging the original optical flow matrixes according to a set merging step length to obtain merged optical flow matrixes;
and determining an optical flow field change area according to the merged optical flow matrix, and determining the position of the moving object in the current image according to the optical flow field change area.
2. The detection method according to claim 1, wherein performing gradient calculation according to pixels in corresponding first detection windows in the current image and the previous image, and determining the original optical flow matrix of the current image according to the gradient calculation result comprises:
sequentially sliding and traversing the current image and the previous image by adopting a first detection window;
performing gradient calculation according to pixels contained in the first detection window in the current image and the previous image corresponding to each sliding step of the first detection window;
and determining an optical flow vector corresponding to each sliding step of the first detection window according to the gradient calculation result, and further determining an original optical flow matrix of the current image.
3. The method according to claim 2, wherein the performing a gradient calculation according to the pixels included in the first detection window in the current image and the previous image corresponding to each sliding step of the first detection window comprises:
acquiring a first group of pixels contained in the first detection window in the current image and a second group of pixels contained in the first detection window in the previous image corresponding to each sliding step in the synchronous sliding process of the first detection window in the current image and the previous image;
and performing inter-image gradient calculation, row gradient calculation and column gradient calculation respectively according to the first group of pixels and the second group of pixels, so as to determine the corresponding time direction gradient, row direction gradient and column direction gradient respectively.
4. The detection method according to claim 3, wherein the first detection window is a 2 x 2 matrix, and the performing inter-image gradient calculation, row gradient calculation and column gradient calculation respectively according to the first group of pixels and the second group of pixels to determine the corresponding time direction gradient, row direction gradient and column direction gradient respectively comprises:
calculating an average pixel difference value between corresponding individual pixels in the first group of pixels and the second group of pixels to obtain the time direction gradient;
calculating an average pixel difference value between each pixel of a next row and each pixel corresponding to a previous row in the first group of pixels and the second group of pixels to obtain the row direction gradient;
and calculating the average pixel difference value between each pixel of the latter column and each pixel corresponding to the former column in the first group of pixels and the second group of pixels to obtain the column direction gradient.
5. The detection method according to claim 4, wherein said determining an optical flow vector for each sliding step of the first detection window according to the gradient calculation result comprises:
calculating a first intermediate parameter according to the time direction gradient, the row direction gradient and the column direction gradient, wherein the magnitude of the first intermediate parameter is directly proportional to the time direction gradient and inversely proportional to a second intermediate parameter, and the second intermediate parameter is the sum of a preset scaling coefficient, the square of the row direction gradient and the square of the column direction gradient;
calculating according to the first intermediate parameter and the row direction gradient to obtain a first component of the optical flow vector in the row direction, wherein the magnitude of the first component is proportional to the product of the first intermediate parameter and the row direction gradient;
and calculating according to the first intermediate parameter and the column direction gradient to obtain a second component of the optical flow vector in the column direction, wherein the magnitude of the second component is proportional to the product of the first intermediate parameter and the column direction gradient.
6. The detection method according to claim 1, wherein said merging the original optical flow matrices according to a set merging step to obtain a merged optical flow matrix comprises:
traversing the original optical flow matrix by using a second detection window with a set size, determining a merged optical flow vector corresponding to each sliding step of the second detection window according to the average value of each optical flow vector in the second detection window corresponding to each sliding step of the second detection window, and determining a merged optical flow matrix according to the merged optical flow vector.
7. The detection method according to claim 1, wherein said determining an optical flow field change area from said merged optical flow matrix comprises:
traversing the merged optical flow matrix by using a third detection window with a set size, calculating the variance between the intermediate merged optical flow vector in the third detection window corresponding to each sliding step of the third detection window and each adjacent merged optical flow vector, and determining an optical flow field change area according to the variance.
8. The detection method according to claim 7, wherein the determining an optical flow field change area according to the variance comprises:
judging whether the variance is larger than or equal to a preset threshold value, if so, marking a position point corresponding to the intermediate merged optical flow vector in a blank matrix with the same size as the merged optical flow matrix to obtain a marked matrix;
and determining the optical flow field change area according to the area where the mark point in the mark matrix is located.
9. The detection method according to claim 8, wherein the determining the position of the moving object in the current image according to the optical flow field change region comprises:
connecting the optical flow field change area by using an image dilation algorithm;
and searching the boundary of the connected optical flow field change area by using a spatial connected-component search algorithm, drawing a corresponding bounding box, and determining the position of the moving object in the current image according to the position of the bounding box.
10. A moving object detection apparatus comprising a memory and a processor;
the processor, when executing computer program instructions stored in the memory, performs the steps of the detection method of any one of claims 1 to 9.
11. A computer-readable storage medium having stored thereon computer program instructions;
the computer program instructions, when executed by a processor, implement the steps of the detection method of any one of claims 1 to 8.
CN202210043527.7A 2022-01-14 2022-01-14 Method, apparatus and storage medium for detecting moving object Pending CN114399539A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210043527.7A CN114399539A (en) 2022-01-14 2022-01-14 Method, apparatus and storage medium for detecting moving object
PCT/CN2022/098898 WO2023134114A1 (en) 2022-01-14 2022-06-15 Moving target detection method and detection device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210043527.7A CN114399539A (en) 2022-01-14 2022-01-14 Method, apparatus and storage medium for detecting moving object

Publications (1)

Publication Number Publication Date
CN114399539A true CN114399539A (en) 2022-04-26

Family

ID=81230976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210043527.7A Pending CN114399539A (en) 2022-01-14 2022-01-14 Method, apparatus and storage medium for detecting moving object

Country Status (2)

Country Link
CN (1) CN114399539A (en)
WO (1) WO2023134114A1 (en)


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5912815A (en) * 1997-03-14 1999-06-15 The Regents Of The University Of California Local relaxation method for estimating optical flow
JP4552553B2 (en) * 2004-07-23 2010-09-29 カシオ計算機株式会社 Image processing apparatus and image processing program
SG11202010898TA (en) * 2018-05-22 2020-12-30 Celepixel Tech Co Ltd Optical flow calculation method and computing device
CN110889833B (en) * 2019-11-18 2022-04-19 山东大学 Deep sea plankton detection method and system based on gradient optical flow method
CN113327269A (en) * 2021-05-21 2021-08-31 哈尔滨理工大学 Unmarked cervical vertebra movement detection method
CN113901268A (en) * 2021-10-26 2022-01-07 深研人工智能技术(深圳)有限公司 Video image background acquisition method
CN114399539A (en) * 2022-01-14 2022-04-26 合肥英睿系统技术有限公司 Method, apparatus and storage medium for detecting moving object

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009116730A (en) * 2007-11-08 2009-05-28 Handotai Rikougaku Kenkyu Center:Kk Image processing apparatus and method
CN105046195A (en) * 2015-06-09 2015-11-11 浙江理工大学 Human behavior identification method based on asymmetric generalized Gaussian distribution model (AGGD)
US20210392367A1 (en) * 2019-03-17 2021-12-16 Beijing Bytedance Network Technology Co., Ltd. Calculation of predication refinement based on optical flow
CN112307854A (en) * 2019-08-02 2021-02-02 中移(苏州)软件技术有限公司 Human body action recognition method, device, equipment and storage medium
CN113487646A (en) * 2021-07-22 2021-10-08 合肥英睿系统技术有限公司 Moving target detection method, device, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU, Guohua: "Machine Vision Technology", 30 November 2021, Wuhan: Huazhong University of Science and Technology Press, pages 273-280 *
WANG, Shuai: "Research on a MeanShift-Based Target Tracking Algorithm", China Master's Theses Full-text Database (Information Science and Technology), no. 04, 15 April 2012 (2012-04-15), pages 138-1980 *
CHENG, Shilei: "Research on Feature Extraction and Recognition Methods for Human Behavior in Video Sequences", China Doctoral Dissertations Full-text Database (Information Science and Technology), no. 07, 15 July 2020 (2020-07-15), pages 138-16 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023134114A1 (en) * 2022-01-14 2023-07-20 合肥英睿系统技术有限公司 Moving target detection method and detection device, and storage medium
WO2024060447A1 (en) * 2022-09-21 2024-03-28 深圳创维-Rgb电子有限公司 Dynamic picture detection method and apparatus, display, and storage medium
CN116672836A (en) * 2023-06-08 2023-09-01 南京林业大学 Automatic control spraying device for building site fence
CN116672836B (en) * 2023-06-08 2024-01-16 南京林业大学 Automatic control spraying device for building site fence

Also Published As

Publication number Publication date
WO2023134114A1 (en) 2023-07-20

Similar Documents

Publication Publication Date Title
CN110702111B (en) Simultaneous localization and map creation (SLAM) using dual event cameras
CN114399539A (en) Method, apparatus and storage medium for detecting moving object
Vedula et al. Three-dimensional scene flow
EP0606385B1 (en) Method for determining sensor motion and scene structure and image processing system therefor
US10395383B2 (en) Method, device and apparatus to estimate an ego-motion of a video apparatus in a SLAM type algorithm
JP6734940B2 (en) Three-dimensional measuring device
KR100544677B1 (en) Apparatus and method for the 3D object tracking using multi-view and depth cameras
Jin et al. A semi-direct approach to structure from motion
KR101776620B1 (en) Apparatus for recognizing location mobile robot using search based correlative matching and method thereof
US8290212B2 (en) Super-resolving moving vehicles in an unregistered set of video frames
KR20150144731A (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
CN108090921A (en) Monocular vision and the adaptive indoor orientation method of IMU fusions
US20060050788A1 (en) Method and device for computer-aided motion estimation
Zhu Image Gradient-based Joint Direct Visual Odometry for Stereo Camera.
US10268929B2 (en) Method and device for generating binary descriptors in video frames
CN110390685B (en) Feature point tracking method based on event camera
JP2017117386A (en) Self-motion estimation system, control method and program of self-motion estimation system
JP6061770B2 (en) Camera posture estimation apparatus and program thereof
CN105809664B (en) Method and device for generating three-dimensional image
TWI509568B (en) Method of detecting multiple moving objects
Greene et al. Metrically-scaled monocular slam using learned scale factors
CN112233148A (en) Method and apparatus for estimating motion of object, and computer storage medium
JP6410231B2 (en) Alignment apparatus, alignment method, and computer program for alignment
CN110689554A (en) Background motion estimation method and device for infrared image sequence and storage medium
CN116883897A (en) Low-resolution target identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination