CN109919126B - Method and device for detecting moving object and storage medium - Google Patents
Method and device for detecting moving object and storage medium Download PDFInfo
- Publication number
- CN109919126B CN109919126B CN201910209887.8A CN201910209887A CN109919126B CN 109919126 B CN109919126 B CN 109919126B CN 201910209887 A CN201910209887 A CN 201910209887A CN 109919126 B CN109919126 B CN 109919126B
- Authority
- CN
- China
- Prior art keywords
- vector
- determining
- speed vector
- speed
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 31
- 239000013598 vector Substances 0.000 claims abstract description 325
- 238000001514 detection method Methods 0.000 claims abstract description 50
- 238000012163 sequencing technique Methods 0.000 claims abstract description 35
- 238000003780 insertion Methods 0.000 claims description 21
- 230000037431 insertion Effects 0.000 claims description 21
- 238000004590 computer program Methods 0.000 claims description 9
- 238000012545 processing Methods 0.000 description 17
- 238000010586 diagram Methods 0.000 description 16
- 238000004891 communication Methods 0.000 description 10
- 238000005516 engineering process Methods 0.000 description 7
- 230000006870 function Effects 0.000 description 7
- 230000003287 optical effect Effects 0.000 description 7
- 230000008569 process Effects 0.000 description 6
- 206010039203 Road traffic accident Diseases 0.000 description 4
- 230000005540 biological transmission Effects 0.000 description 4
- 230000005236 sound signal Effects 0.000 description 4
- 230000009466 transformation Effects 0.000 description 4
- 238000003491 array Methods 0.000 description 3
- 238000003384 imaging method Methods 0.000 description 3
- 230000001629 suppression Effects 0.000 description 3
- 230000001133 acceleration Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 239000000835 fiber Substances 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 230000001902 propagating effect Effects 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 230000001131 transforming effect Effects 0.000 description 2
- RYGMFSIKBFXOCR-UHFFFAOYSA-N Copper Chemical compound [Cu] RYGMFSIKBFXOCR-UHFFFAOYSA-N 0.000 description 1
- 230000009471 action Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 229910052802 copper Inorganic materials 0.000 description 1
- 239000010949 copper Substances 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000011022 operating instruction Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The present disclosure provides a moving object detection method, apparatus, and storage medium. Wherein the method comprises the following steps: acquiring an image frame of a scene around a target vehicle; determining a speed vector corresponding to at least one feature point of the image frame according to pixel values of adjacent image frames; and determining a sequencing queue of the speed vectors according to the speed vectors, and determining moving objects around the target vehicle according to comparison of head and tail vectors in the sequencing queue. By adopting the embodiment of the invention, the moving objects around the target vehicle can be detected simply, conveniently and accurately.
Description
Technical Field
The disclosure relates to the field of information technology, and in particular, to a method and device for detecting a moving object, and a storage medium.
Background
Along with the development of economy, the living standard of people is continuously improved, and diversified vehicles provide convenience for the travel of people. Among various vehicles, automobiles are becoming a main vehicle for people due to their convenience and rapidness, and the average holding capacity is increasing year by year.
When the number of automobiles is increased, accidents frequently occur on the road, so that the daily life and trip efficiency of people are seriously influenced, and even the life safety of people is endangered. Therefore, the problem of driving safety is gradually focused on. How to detect moving objects around an automobile is a technical problem to be solved at present.
Disclosure of Invention
In view of this, the present disclosure proposes a moving object detection method, apparatus, and storage medium to detect a moving object around a target vehicle simply and accurately.
According to an aspect of the present disclosure, there is provided a moving object detection method including:
acquiring an image frame of a scene around a target vehicle;
determining a speed vector corresponding to at least one feature point of the image frame according to pixel values of adjacent image frames;
and determining a sequencing queue of the speed vectors according to the speed vectors, and determining moving objects around the target vehicle according to comparison of head and tail vectors in the sequencing queue.
In one possible implementation manner, the acquiring an image frame of a scene around the target vehicle includes:
acquiring a first image frame acquired by an image pickup device in a first projection mode;
and if the first projection mode is different from the preset projection mode, converting the first image frame according to the preset projection mode to obtain a second image frame.
In one possible implementation manner, the determining a velocity vector corresponding to at least one feature point of the image frame according to pixel values of adjacent image frames includes:
Determining at least one characteristic point in pixel points of the image frames according to pixel values of the pixel points adjacent to the image frames;
and determining a speed vector corresponding to the characteristic point according to the pixel coordinates of the characteristic point adjacent to the image frame.
In one possible implementation manner, the determining, according to pixel coordinates of the feature points in the adjacent image frames, a velocity vector corresponding to the feature points includes:
acquiring pixel coordinates and brightness of a first pixel point corresponding to a feature point in the adjacent image frames;
acquiring a second pixel point which is distant from the first pixel point by a preset pixel point distance in the adjacent image frame according to the pixel coordinates of the first pixel point;
and determining a speed vector corresponding to the characteristic point according to the brightness of the first pixel point and the brightness of the second pixel point.
In one possible implementation, determining an ordering queue of velocity vectors from the velocity vectors includes:
dividing the image frame into at least one image region;
acquiring a speed vector of a feature point in each image area, and sequencing the speed vector according to at least one vector parameter corresponding to the speed vector to obtain a sequencing result;
And determining a sequencing queue of the speed vector corresponding to each image area according to the sequencing result.
In one possible implementation manner, the obtaining the velocity vector of the feature point in each image area, sorting the velocity vector according to at least one vector parameter corresponding to the velocity vector, and obtaining a sorting result includes:
obtaining a parameter value of a vector parameter according to the speed vector of the first characteristic point in each image area;
determining the insertion position of the speed vector of the first characteristic point in a speed vector ordered list according to the parameter value of the vector parameter, wherein the speed vector ordered list stores the speed vector of a second characteristic point inserted before the speed vector of the first characteristic point;
and inserting the speed vector of the first characteristic point into the insertion position in the speed vector ordered list to obtain an ordered result.
In one possible implementation, the vector parameters include: die length and direction angle.
In one possible implementation, determining moving objects around the target vehicle according to the comparison of the head-to-tail vectors in the ordering queue includes:
Determining a head vector and a tail vector in an ordering queue of the speed vector corresponding to each image area;
in the case where the difference in the parameter values of the vector parameters between the head vector and the tail vector is greater than a parameter threshold, it is determined that a moving object exists around the target vehicle.
According to another aspect of the present disclosure, there is provided a moving object detecting apparatus including:
the acquisition module is used for acquiring image frames of scenes around the target vehicle;
a first determining module, configured to determine a velocity vector corresponding to at least one feature point of the image frame according to pixel values adjacent to the image frame;
and the second determining module is used for determining a sequencing queue of the speed vectors according to the speed vectors and determining moving objects around the target vehicle according to comparison of head and tail vectors in the sequencing queue.
In one possible implementation manner, the acquiring module includes:
the first acquisition sub-module is used for acquiring a first image frame acquired by the camera device in a first projection mode;
and the transformation submodule is used for transforming the first image frame according to the preset projection mode to obtain a second image frame if the first projection mode is different from the preset projection mode.
In one possible implementation manner, the first determining module includes:
a first determining sub-module, configured to determine at least one feature point in pixel points adjacent to the image frame according to pixel values of the pixel points;
and the second determining submodule is used for determining a speed vector corresponding to the characteristic point according to the pixel coordinates of the characteristic point adjacent to the image frame.
In one possible implementation, the second determining submodule includes:
the first acquisition unit is used for acquiring pixel coordinates and brightness of the first pixel points corresponding to the feature points in the adjacent image frames;
the second acquisition unit is used for acquiring a second pixel point which is distant from the first pixel point by a preset pixel point distance in the adjacent image frames according to the pixel coordinates of the first pixel point;
and the determining unit is used for determining the speed vector corresponding to the characteristic point according to the brightness of the first pixel point and the brightness of the second pixel point.
In one possible implementation manner, the second determining module includes:
a dividing sub-module for dividing the image frame into at least one image area;
the sorting sub-module is used for obtaining the speed vector of the feature points in each image area, sorting the speed vector according to at least one vector parameter corresponding to the speed vector, and obtaining a sorting result;
And the third determining submodule is used for determining the sequencing queue of the speed vector corresponding to each image area according to the sequencing result.
In one possible implementation, the sorting submodule includes:
a parameter value obtaining unit, configured to obtain a parameter value of a vector parameter according to a velocity vector of a first feature point in each image area;
a position determining unit configured to determine an insertion position of a velocity vector of the first feature point in a velocity vector ordered list according to a parameter value of the vector parameter, wherein the velocity vector ordered list stores a velocity vector of a second feature point inserted before the velocity vector of the first feature point;
and the inserting unit is used for inserting the speed vector of the first characteristic point into the inserting position in the speed vector sorting list to obtain a sorting result.
In one possible implementation, the vector parameters include: die length and direction angle.
In one possible implementation manner, the second determining module further includes:
the vector determining unit is used for determining a head vector and a tail vector in the sorting queue of the speed vector corresponding to each image area;
A moving object determining unit configured to determine that a moving object exists around the target vehicle in a case where a difference in parameter values of vector parameters between the head vector and the tail vector is greater than a parameter threshold.
According to another aspect of the present disclosure, there is provided a moving object detecting apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the above-described method.
According to the method and the device for determining the moving object around the target vehicle, through acquiring the image frames of scenes around the target vehicle, the speed vector corresponding to at least one feature point of the image frames can be determined according to the pixel values of the adjacent image frames, the sorting queue of the speed vectors is determined according to the speed vector, and therefore the moving object around the target vehicle can be determined according to the comparison of the head vector and the tail vector in the sorting queue. The mobile object detection scheme provided by the embodiment of the disclosure can simply, conveniently and accurately detect the mobile objects existing around the target vehicle, for example, can detect the mobile objects in the blind area range of the user, provides convenience for the user driving the target vehicle, and reduces traffic accidents.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a moving object detection method according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of a process of determining a velocity vector corresponding to a feature point according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a gaussian pyramid according to an embodiment of the present disclosure.
FIG. 4 illustrates a flow chart of a process of determining a rank queue of speed vectors according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a moving object detection device according to an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of a moving object detecting device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
According to the moving object detection scheme provided by the embodiment of the disclosure, the image frames of scenes around the target vehicle can be acquired, at least one characteristic point of the image frames is determined according to the pixel values of the adjacent image frames, and the speed vector corresponding to the characteristic point is determined. By the pixel values of the image frames, feature points in the image frames can be quickly identified, reducing the time for image frame processing. Then, a sorting queue of the speed vectors can be determined according to the speed vectors corresponding to the determined feature points, and moving objects around the target vehicle can be determined according to the comparison of the head and tail vectors in the sorting queue, so that the moving objects around the target vehicle can be rapidly and accurately judged, the implementation mode is simple, effective reference is provided for safe driving of users, and traffic accidents are reduced.
The mobile object detection scheme provided by the embodiment of the disclosure can be applied to any scene requiring detection of a mobile object, for example, to a mobile object detection device, a safe driving system of a vehicle and the like. The present disclosure is not limited to specific application scenarios, and any specific example implemented using the mobile object detection scheme provided by the present disclosure is within the scope of protection of the present disclosure.
The following describes in detail a moving object detection scheme provided by the present disclosure with reference to specific embodiments.
Fig. 1 illustrates a flowchart of a moving object detection method according to an embodiment of the present disclosure. The method may be applied to terminal devices, for example, vehicle terminals, mobile object detection (MOD, moving Object Detection) devices, and may also be applied to network devices, for example, to safe driving platforms, and the like. As shown in fig. 1, the moving object detection method includes:
step 11, acquiring image frames of scenes around the target vehicle.
In this embodiment, the image of the scene around the target vehicle may be acquired by the image capturing device, so as to obtain an image frame of the scene around the target vehicle. For example, a plurality of imaging devices may be provided on the body of the target vehicle, and the imaging devices may capture a scene around the target vehicle in real time. The vehicle-mounted terminal can be arranged in the target vehicle, and the image frames of scenes around the target vehicle transmitted by the camera device are acquired in real time.
In one possible implementation, the image frames of the scene surrounding the target vehicle may include a first image frame and a second image frame. When the vehicle-mounted terminal acquires the image frames of scenes around the target vehicle, the vehicle-mounted terminal can acquire the first image frames acquired by the camera device in the first projection mode, and if the first projection mode is different from the preset projection mode, the first image frames can be converted according to the preset projection mode to obtain the second image frames. For example, the image pickup device may collect a first image frame by a first projection manner of a fisheye image, and the collected first image frame is the fisheye image. If the vehicle-mounted terminal judges that the first projection mode of the image pickup device is different from the preset projection mode, the vehicle-mounted terminal can convert the first image frame into a second image frame in the preset projection mode when acquiring the first image frame. The preset projection mode may be a linear projection mode.
Here, when the first image frame acquired in the first projection mode is converted into two image frames in the preset projection mode, some objects in the first image frame may be distorted after the conversion due to different projection modes, so that the objects may be identified. For example, the fish-eye image may be distorted by higher objects such as trees and railings after projective transformation, so that the higher objects in the image may be identified.
And step 12, determining a speed vector corresponding to at least one characteristic point of the image frame according to the pixel values of the adjacent image frames.
In this embodiment, after the image frames of the subject surrounding the target vehicle are acquired, at least one feature point may be determined in the pixel points according to the pixel values of the pixel points of the adjacent image frames, and then the velocity vector corresponding to the feature point may be determined according to the pixel coordinates of the feature point of the adjacent image frame. Here, each feature point may have one velocity vector.
In one possible implementation manner, when at least one feature point is determined in the pixel points according to the pixel values of the pixel points of the adjacent image frames, any pixel point can be used as a detection point, and then the pixel values of the detection point can be compared with the pixel values of the pixel points around the detection point, so that a comparison result is obtained. It can then be determined whether the detection point is a feature point based on the comparison result. For example, if the pixel value of the detection point is greater than or less than the pixel threshold value than the pixel values of the consecutive number of surrounding pixel points, it may be determined that the detection point is a feature point. Otherwise, the detection point is not a feature point.
In one possible implementation manner, when determining the velocity vector corresponding to the feature point according to the pixel coordinates of the feature point of the adjacent image frames, the velocity vector corresponding to the feature point may be determined according to the pixel coordinates of the same feature point in the adjacent image frames and the acquisition time between the adjacent image frames. For example, if the pixel coordinates of the feature point a at the t-th image frame are (x 1, y 1) and the pixel coordinates at the t+1th image frame are (x 2, y 2), the velocity vector of the feature point a (u, v) = (x 2, y 2) - (x 1, y 1).
And step 13, determining a sequencing queue of the speed vectors according to the speed vectors, and determining moving objects around the target vehicle according to comparison of head and tail vectors in the sequencing queue.
In this embodiment, after determining the velocity vector corresponding to the feature point of the image frame, the velocity vector may be ordered according to the vector parameter of the velocity vector of the feature point of the image frame to obtain an ordering queue, and then the head vector and the tail vector in the ordering queue are compared, and the moving object around the target vehicle is determined according to the compared result.
In one possible implementation manner, when determining the sorting queue of the velocity vectors according to the velocity vectors, the image frame may be divided into at least one image area, then the velocity vectors of the feature points in each image area are obtained, the velocity vectors are sorted according to at least one vector parameter corresponding to the velocity vectors, a sorting result is obtained, and then the sorting queue of the velocity vectors corresponding to each image area is determined according to the sorting result. Here, when dividing the image frames into at least one image area, each image frame may be divided into a plurality of image areas according to a preset image area size. In this way, each image region may have a corresponding ordered queue of velocity vectors, so that the image region in which the moving object is present may be determined. The image area size can be reasonably set according to the application scene, if the image area size is set too small, the feature points included in each image area are too small, comparison is carried out with the speed vectors of the feature points, and if the image area size is set too large, the image position of the moving object is difficult to judge. Therefore, the appropriate setting of the image area size can improve the efficiency of moving object detection.
In one possible implementation manner, when determining the moving object around the target vehicle according to the comparison of the head vector and the tail vector in the sorting queue, the head vector and the tail vector in the sorting queue of the speed vector corresponding to each image area may be determined first, then the head vector and the tail vector may be compared, and in the case that the difference between the parameter values of the vector parameters between the head vector and the tail vector is greater than the parameter threshold, it may be determined that the moving object exists in the image area, that is, that the moving object exists around the target vehicle. If the difference in the parameter values of the vector parameters between the head vector and the tail vector is less than or equal to the parameter threshold, it may be determined that no moving object is present in the image area.
Here, the vector parameters of the velocity vector may include a modulo length and a direction angle. When the speed vectors are ordered according to at least one vector parameter corresponding to the speed vectors, the speed vectors can be ordered by selecting a module length or a direction angle, and the speed vectors can be ordered according to the module length and the direction angle respectively. If the velocity vectors are sorted according to the modulo length and the direction angle, respectively, each image region may correspond to a sorting queue sorted by the modulo length and a sorting queue sorted by the direction angle, and then the moving object in the image region may be determined together according to the comparison of the head and tail vectors in the sorting queue of the modulo length and the comparison of the head and tail vectors in the sorting queue of the direction angle. For example, it may be determined that a moving object is present in the image area in which the difference in modulo length of the head and tail vectors in the sorting queue sorted by modulo length is greater than a modulo length threshold and the difference in modulo length of the head and tail vectors in the sorting queue sorted by direction angle is greater than a direction angle threshold. By the method, the judgment errors of the moving object in the image area can be reduced, and the detection accuracy of the moving object is improved.
By the moving object detection method, at least one characteristic point of the acquired image frame can be determined, a sorting queue of the speed vectors is determined according to the speed vectors corresponding to the characteristic point, and moving objects around the target vehicle are determined according to the comparison of the head vectors and the tail vectors in the sorting queue, so that the moving objects around the target vehicle can be rapidly and accurately judged, the implementation mode is simple, effective reference is provided for safe driving of a user, and traffic accidents are reduced.
In the above step 12, a velocity vector corresponding to at least one feature point of the image frame may be determined according to the pixel values of the adjacent image frames, so that the moving object in the image frame may be determined according to the velocity vector. In the following, a description is given of determining a velocity vector corresponding to at least one feature point of an image frame in connection with one possible implementation.
Fig. 2 illustrates a flowchart of a process of determining a velocity vector corresponding to at least one feature point of an image frame, according to an embodiment of the present disclosure, including:
and step 121, determining at least one characteristic point in the pixel points according to the pixel values of the pixel points adjacent to the image frame.
Here, the in-vehicle terminal may determine whether or not the detection point p is a feature point by using any one of the pixel points in the image frame as the detection point p. For the detection point p, a pixel point at a preset pixel point distance from the detection point p may be determined first, then a pixel value of the pixel point is obtained, for example, a circle may be determined by taking the detection point p as a circle center and a radius of 3 pixel point distances, and then 16 pixel points are determined on the circle. Then, the pixel value img [ p ] of the detection point p can be compared with the pixel value img [ i ] of the determined pixel point, and whether the differences between the pixel values of the continuous N pixel points i and the detection point p are larger than or smaller than a pixel threshold value threshold or not is judged, namely whether the following pixel conditions are satisfied or not:
img [ i ] < img [ p ] -threshold, or img [ i ] > img [ p ] +threshold. Wherein img [ i ] is the pixel value of pixel point i, i is a positive integer less than or equal to N, and N is a positive integer.
For example, the judgment condition is satisfied if there are any consecutive 10 pixel points x among the 16 pixel points. If present, the detection point p is a feature point. Otherwise, the detection point p is not a feature point.
In one possible implementation manner, when determining at least one feature point of the image frame, after determining that the detection point p meets the pixel condition, it may further determine whether the detection point p meets the suppression condition, if the detection point p also meets the suppression condition, it may be determined that the detection point p is a feature point, otherwise, the detection point p is not a feature point. Here, the detection point satisfying the pixel condition may be referred to as a candidate feature point, and accordingly, the suppression condition may be that a maximum score of the candidate feature point is maximum in a preset pixel area region of the image frame, wherein the maximum score may be calculated according to a difference between a pixel point at a preset pixel point distance from the candidate feature point and a pixel value of the candidate feature point. For example, candidate feature points satisfying the pixel condition may be determined within a 3×3 preset pixel area, then 16 pixel points distant from the candidate feature points by 3 pixel points may be determined for any candidate feature point, then differences between the candidate feature points and the pixel values of the 16 pixel points are calculated respectively, the absolute values of the obtained differences between the 16 pixel values are summed up to obtain a maximum score of the candidate feature points, and the candidate feature point with the maximum score is used as the feature point of the image frame.
And step 122, determining a speed vector corresponding to the feature point according to the pixel coordinates of the feature point adjacent to the image frame.
In one possible implementation manner, when determining the velocity vector corresponding to the feature point, the pixel coordinates and the brightness of the feature point corresponding to the first pixel point in the adjacent image frame may be obtained, then the second pixel point, which is distant from the first pixel point by a preset pixel point distance, in the adjacent image frame may be obtained according to the pixel coordinates of the first pixel point, and then the velocity vector corresponding to the feature point is determined according to the brightness of the first pixel point and the brightness of the second pixel point.
For example, the pixel coordinates of the first pixel corresponding to the feature point in the image frame are (x, y), and if the three-dimensional space is expanded, the pixel coordinates of the first pixel are (x, y, z). If the luminance of the first pixel point in the t-th image frame with time t is I (x, y, z, t), the luminance of the first pixel point in the t+1-th image frame with time t+δt is I (x+δx, y+δy, z+δz, t+δt), where I (x+δx, y+δy, z+δz, t+δt) satisfies the following formula:
where h.o.t. is the higher partial derivative, which is negligible in case the movement of the first image frame is small enough. Since the luminance of the first pixel point in the adjacent image frame may be considered as unchanged, then it may be considered that: i (x, y, z, t) =i (x+δx, y+δy, z+δz, t+δt). And then can be obtained by:
From the above equation transformation, it can be obtained:
wherein V is x ,V y ,V z The optical flow components of x, y, z in I (x, y, z, t) can be represented, respectively.
Assuming that the optical flow (Vx, vy, vz) is a constant in a pixel window of size m x m (m > 1), then from pixel 1 to pixel n, one can get the following set of equations:
wherein,, n=m×m×ix n Is the luminance component of pixel n in x direction, iy n Is the luminance component of pixel n in y direction, iz n Is the luminance component of pixel n in the z direction.
The above equation set may be expressed as:
it can be noted that:according to the least squares method: />Furthermore, the velocity vector of the feature point can be obtained>
Further, if the feature point is in a case where the relative displacement of the first image frame and the second image frame is large, a gaussian pyramid may be established. The gaussian pyramid may comprise multiple layers, the top layer may represent a first image frame and the bottom layer may represent a second image frame. Then, the pixel positions of the feature points in the next image frame are estimated from the top layer of the Gaussian pyramid, the pixel positions of the feature points in each layer are used as the initial pixel positions of the feature points of the next layer, and the image frame is searched downwards along each layer of the Gaussian pyramid until reaching the bottom layer of the Gaussian pyramid. Fig. 3 shows a block diagram of a gaussian pyramid, which may include 3 layers, h1 may represent velocity vectors of feature points in image frames corresponding to the first layer and the second layer, and h2 may represent velocity vectors of feature points in image frames corresponding to the second layer and the third layer.
In the above step 13, the sorting queue of the velocity vectors may be determined according to the velocity vectors, so that the moving object in the image frame may be determined according to the comparison of the head and tail vectors in the sorting queue. The process of determining the rank queue of velocity vectors is described below in connection with one possible implementation.
FIG. 4 illustrates a flow chart of a process of determining an ordered queue of speed vectors, according to one embodiment of the disclosure, including:
step 131, dividing the image frame into at least one image area.
Here, in determining the sorting queue of the velocity vectors from the velocity vectors, each image frame may be divided into a plurality of image areas according to a preset image area size. In this way, each image region may have a corresponding ordered queue of velocity vectors, so that the image region in which the moving object is present may be determined.
Step 132, obtaining vector parameters according to the velocity vector of the first feature point in each image area.
In one possible implementation, when sorting the velocity vectors of the feature points according to any vector parameter of the velocity vectors, sorting of the velocity vectors may be achieved by inserting the velocity vector of each feature point into a sorted list of velocity vectors. The image region may include a first feature point and a second feature point, wherein a velocity vector of the first feature point may be a current velocity vector to be ordered, and a velocity vector of the second feature point may be a velocity vector to be ordered in order in a velocity vector ordered class table. When the speed vectors of the first feature points are ordered, the parameter values of the vector parameters can be obtained according to the speed vectors of the first feature points. Here, the vector parameters of the velocity vector may include a modulo length and a direction angle.
And step 133, determining the insertion position of the velocity vector of the first feature point in a velocity vector ordered list according to the parameter value of the vector parameter, wherein the velocity vector ordered list stores the velocity vector of the second feature point inserted before the velocity vector of the first feature point.
Here, after obtaining the parameter values of the velocity vector of the first feature point, the insertion position of the velocity vector of the first feature point may be determined in the velocity vector sorted list sorted by the corresponding vector parameters, based on the parameter values of the velocity vector. The speed vector ordered list may store speed vectors of second feature points inserted before the speed vector of the first feature point, and the speed vectors of the second feature points may be arranged from large to small or from small to large according to parameter values of vector parameters. For example, the velocity vectors of the second feature points may be ordered in terms of magnitude of the modulo length or direction angle. When determining the insertion position of the velocity vector of the first feature point, the insertion position of the velocity vector of the first feature point may be determined in a half-sorted manner. For example, if the velocity vectors are sorted from small to large in the velocity vector sorting list by the parameter values of vector parameters, the head vector and the tail vector of the velocity vector sorting list may be denoted as v [ low ] and v [ high ], the parameter value of the velocity vector of the first feature point may be compared with the parameter value of the reference vector v [ m ] at the middle position in the velocity vector sorting list, if the velocity vector of the first feature point is smaller than the parameter value of the reference vector v [ m ], the insertion position of the velocity vector of the first feature point is determined to be in the position interval of the velocity vectors v [ low ] to v [ m-1], otherwise, the insertion position of the velocity vector of the first feature point is determined to be in the position interval of the velocity vectors v [ m+1] to v [ high ].
The head and tail vectors of the determined location area may then be denoted as v low and v high, the reference vector of the intermediate position of the location area may be denoted as v m, the parameter value of the velocity vector of the first feature point may then be compared with the parameter value of the velocity vector v m, the location area of the first feature point insertion location may be redetermined, and this is repeated until the parameter value of v low is greater than the parameter value of v high, and the insertion location of the velocity vector of the first feature point is determined as the storage location where the velocity vector v high+1 is located.
And step 134, inserting the speed vector of the first feature point into the insertion position in the speed vector ordered list to obtain an ordered result.
Here, after determining the insertion position of the velocity vector of the first feature point, the velocity vector of the first feature point may be inserted into the insertion position determined in the velocity vector ordered list. And moving the speed vector of the insertion position and the speed vector after the insertion position by one storage unit backwards in the speed vector ordered list. After inserting the feature points in the image area into the velocity vector ordered list, the ordering result of the feature points may be determined according to the arrangement positions of the velocity vectors in the velocity vector ordered list.
By the method for determining the speed vector sorting queue, the speed vectors of the feature points can be sorted rapidly, so that the speed of moving objects around the target vehicle can be determined by using the sorting queue of the vectors, and the moving objects around the target vehicle can be detected rapidly.
In accordance with the above description of the moving object detection method, fig. 5 shows a block diagram of a moving object detection apparatus 50 provided in an embodiment of the present disclosure. The apparatus 50 of the present embodiment may be used to implement the operations of the steps in the moving object detection method, and various specific examples and advantageous effects thereof may be referred to the above description of the moving object detection method, and the description thereof will not be repeated here for the sake of brevity.
As shown in fig. 5, a moving object detection device 50 provided in an embodiment of the present disclosure includes: an acquisition module 51 for acquiring an image frame of a subject surrounding the target vehicle; a first determining module 52, configured to determine a velocity vector corresponding to at least one feature point of the image frame according to pixel values of adjacent image frames; a second determining module 53, configured to determine an ordering queue of velocity vectors according to the velocity vectors, and determine moving objects around the target vehicle according to a comparison of head and tail vectors in the ordering queue.
In one example, the acquisition module 51 includes: the first acquisition sub-module is used for acquiring a first image frame acquired by the camera device in a first projection mode; and the transformation submodule is used for transforming the first image frame according to the preset projection mode to obtain a second image frame if the first projection mode is different from the preset projection mode.
In one example, the first determining module 52 includes a first determining sub-module configured to determine at least one feature point in pixel points adjacent to the image frame according to pixel values of the pixel points; and the second determining submodule is used for determining a speed vector corresponding to the characteristic point according to the pixel coordinates of the characteristic point adjacent to the image frame.
In one example, the second determination submodule includes: the first acquisition unit is used for acquiring pixel coordinates and brightness of the first pixel points corresponding to the feature points in the adjacent image frames; the second acquisition unit is used for acquiring a second pixel point which is distant from the first pixel point by a preset pixel point distance in the adjacent image frames according to the pixel coordinates of the first pixel point; and the determining unit is used for determining the speed vector corresponding to the characteristic point according to the brightness of the first pixel point and the brightness of the second pixel point.
In one example, the second determining module 53 includes: a dividing sub-module for dividing the image frame into at least one image area; the sorting sub-module is used for obtaining the speed vector of the feature points in each image area, sorting the speed vector according to at least one vector parameter corresponding to the speed vector, and obtaining a sorting result; and the third determining submodule is used for determining the sequencing queue of the speed vector corresponding to each image area according to the sequencing result.
In one example, the ordering submodule includes: a parameter value obtaining unit, configured to obtain a parameter value of a vector parameter according to a velocity vector of a first feature point in each image area; a position determining unit configured to determine an insertion position of a velocity vector of the first feature point in a velocity vector ordered list according to a parameter value of the vector parameter, wherein the velocity vector ordered list stores a velocity vector of a second feature point inserted before the velocity vector of the first feature point; and the inserting unit is used for inserting the speed vector of the first characteristic point into the inserting position in the speed vector sorting list to obtain a sorting result.
In one example, the vector parameters include: die length and direction angle.
In one example, the second determining module 53 further includes: the vector determining unit is used for determining a head vector and a tail vector in the sorting queue of the speed vector corresponding to each image area; a moving object determining unit configured to determine that a moving object exists around the target vehicle in a case where a difference in parameter values of vector parameters between the head vector and the tail vector is greater than a parameter threshold.
According to the moving object detection device, the image frames of scenes around the target vehicle can be obtained, the speed vector corresponding to at least one feature point of the image frames can be determined according to the pixel values of the adjacent image frames, and the sorting queue of the speed vectors is determined according to the speed vector, so that moving objects around the target vehicle can be determined according to the comparison of the head vector and the tail vector in the sorting queue, driving convenience is provided for a user driving the target vehicle, and traffic accidents are reduced.
Fig. 6 is a block diagram illustrating an apparatus 600 for detecting a moving object according to an exemplary embodiment. For example, the apparatus 600 may be a vehicle terminal, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 6, apparatus 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the apparatus 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the apparatus 600. Examples of such data include instructions for any application or method operating on the apparatus 600, contact data, phonebook data, messages, pictures, videos, and the like. The memory 604 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 606 provides power to the various components of the device 600. The power supply components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 600.
The multimedia component 608 includes a screen between the device 600 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 600 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 614 includes one or more sensors for providing status assessment of various aspects of the apparatus 600. For example, the sensor assembly 614 may detect the on/off state of the device 600, the relative positioning of the components, such as the display and keypad of the device 600, the sensor assembly 614 may also detect a change in position of the device 600 or one of the components of the device 600, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a change in temperature of the device 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communication between the apparatus 600 and other devices in a wired or wireless manner. The device 600 may access a wireless network based on a communication standard, such as WiFi,2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In one possible implementation, the program may be program code comprising computer operating instructions. The program is particularly useful for implementing the above-described moving object detection method.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure. The computer program instructions, when executed by the processor, implement the moving object detection method described above.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static Random Access Memory (SRAM), portable compact disk read-only memory (CD-ROM), digital Versatile Disks (DVD), memory sticks, floppy disks, mechanical coding devices, punch cards or in-groove structures such as punch cards or grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, c++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement of the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (9)
1. A moving object detection method, characterized by comprising:
acquiring an image frame of a scene around a target vehicle;
determining a speed vector corresponding to at least one feature point of the image frame according to pixel values of adjacent image frames;
determining a sequencing queue of the speed vectors according to the speed vectors, and determining moving objects around the target vehicle according to comparison of head and tail vectors in the sequencing queue, wherein the sequencing queue is sequenced according to at least one vector parameter corresponding to the speed vectors;
wherein the determining the sorting queue of the speed vector according to the speed vector comprises: dividing the image frame into at least one image region; acquiring a speed vector of a feature point in each image area, and sequencing the speed vector according to at least one vector parameter corresponding to the speed vector to obtain a sequencing result; determining a sequencing queue of the speed vector corresponding to each image area according to the sequencing result;
The step of obtaining the speed vector of the feature point in each image area, and sequencing the speed vector according to at least one vector parameter corresponding to the speed vector to obtain a sequencing result comprises the following steps: obtaining a parameter value of a vector parameter according to the speed vector of the first characteristic point in each image area; determining the insertion position of the speed vector of the first characteristic point in a speed vector ordered list according to the parameter value of the vector parameter, wherein the speed vector ordered list stores the speed vector of a second characteristic point inserted before the speed vector of the first characteristic point; and inserting the speed vector of the first characteristic point into the insertion position in the speed vector ordered list to obtain an ordered result.
2. The method of claim 1, wherein the acquiring an image frame of a scene surrounding the target vehicle comprises:
acquiring a first image frame acquired by an image pickup device in a first projection mode;
and if the first projection mode is different from the preset projection mode, converting the first image frame according to the preset projection mode to obtain a second image frame.
3. The method of claim 1, wherein determining a velocity vector corresponding to at least one feature point of the image frame based on pixel values of adjacent image frames comprises:
Determining at least one characteristic point in pixel points of the image frames according to pixel values of the pixel points adjacent to the image frames;
and determining a speed vector corresponding to the characteristic point according to the pixel coordinates of the characteristic point adjacent to the image frame.
4. A method according to claim 3, wherein said determining a velocity vector corresponding to said feature point based on pixel coordinates of said feature point in adjacent said image frames comprises:
acquiring pixel coordinates and brightness of a first pixel point corresponding to a feature point in the adjacent image frames;
acquiring a second pixel point which is distant from the first pixel point by a preset pixel point distance in the adjacent image frame according to the pixel coordinates of the first pixel point;
and determining a speed vector corresponding to the characteristic point according to the brightness of the first pixel point and the brightness of the second pixel point.
5. The method of claim 1, wherein the vector parameters comprise: die length and direction angle.
6. The method of claim 1, wherein determining moving objects around the target vehicle based on a comparison of head-to-tail vectors in the ordering queue comprises:
determining a head vector and a tail vector in an ordering queue of the speed vector corresponding to each image area;
In the case where the difference in the parameter values of the vector parameters between the head vector and the tail vector is greater than a parameter threshold, it is determined that a moving object exists around the target vehicle.
7. A moving object detecting device, characterized by comprising:
the acquisition module is used for acquiring image frames of scenes around the target vehicle;
a first determining module, configured to determine a velocity vector corresponding to at least one feature point of the image frame according to pixel values adjacent to the image frame;
the second determining module is used for determining a sequencing queue of the speed vectors according to the speed vectors, and determining moving objects around the target vehicle according to comparison of head vectors and tail vectors in the sequencing queue, wherein the sequencing queue is used for sequencing according to at least one vector parameter corresponding to the speed vectors;
wherein the determining the sorting queue of the speed vector according to the speed vector comprises: dividing the image frame into at least one image region; acquiring a speed vector of a feature point in each image area, and sequencing the speed vector according to at least one vector parameter corresponding to the speed vector to obtain a sequencing result; determining a sequencing queue of the speed vector corresponding to each image area according to the sequencing result;
The step of obtaining the speed vector of the feature point in each image area, and sequencing the speed vector according to at least one vector parameter corresponding to the speed vector to obtain a sequencing result comprises the following steps: obtaining a parameter value of a vector parameter according to the speed vector of the first characteristic point in each image area; determining the insertion position of the speed vector of the first characteristic point in a speed vector ordered list according to the parameter value of the vector parameter, wherein the speed vector ordered list stores the speed vector of a second characteristic point inserted before the speed vector of the first characteristic point; and inserting the speed vector of the first characteristic point into the insertion position in the speed vector ordered list to obtain an ordered result.
8. A moving object detecting device, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 6.
9. A non-transitory computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910209887.8A CN109919126B (en) | 2019-03-19 | 2019-03-19 | Method and device for detecting moving object and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910209887.8A CN109919126B (en) | 2019-03-19 | 2019-03-19 | Method and device for detecting moving object and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109919126A CN109919126A (en) | 2019-06-21 |
CN109919126B true CN109919126B (en) | 2023-07-25 |
Family
ID=66965710
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910209887.8A Active CN109919126B (en) | 2019-03-19 | 2019-03-19 | Method and device for detecting moving object and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109919126B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111068323B (en) * | 2019-12-20 | 2023-08-22 | 腾讯科技(深圳)有限公司 | Intelligent speed detection method, intelligent speed detection device, computer equipment and storage medium |
CN114018589B (en) * | 2021-10-25 | 2024-03-15 | 中汽研汽车检验中心(天津)有限公司 | Method and device for determining airbag ejection speed, electronic equipment and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2628864A1 (en) * | 1988-03-21 | 1989-09-22 | France Etat | Segmentation of point velocity vectors in image sequence - segmenting velocity data based on movement prediction to generate representative vector for processing |
JPH0795592A (en) * | 1993-03-15 | 1995-04-07 | Massachusetts Inst Of Technol <Mit> | System for encoding of image data and for changing of said data into plurality of layers expressing coherent motion region and into motion parameter accompanying said layers |
JP2013076615A (en) * | 2011-09-30 | 2013-04-25 | Mitsubishi Space Software Kk | Mobile object detection apparatus, mobile object detection program, mobile object detection method, and flying object |
WO2014064690A1 (en) * | 2012-10-23 | 2014-05-01 | Sivan Ishay | Real time assessment of picture quality |
CN107292266A (en) * | 2017-06-21 | 2017-10-24 | 吉林大学 | A kind of vehicle-mounted pedestrian area estimation method clustered based on light stream |
CN107845104A (en) * | 2016-09-20 | 2018-03-27 | 意法半导体股份有限公司 | A kind of method, associated processing system, passing vehicle detecting system and vehicle for detecting passing vehicle |
-
2019
- 2019-03-19 CN CN201910209887.8A patent/CN109919126B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2628864A1 (en) * | 1988-03-21 | 1989-09-22 | France Etat | Segmentation of point velocity vectors in image sequence - segmenting velocity data based on movement prediction to generate representative vector for processing |
JPH0795592A (en) * | 1993-03-15 | 1995-04-07 | Massachusetts Inst Of Technol <Mit> | System for encoding of image data and for changing of said data into plurality of layers expressing coherent motion region and into motion parameter accompanying said layers |
JP2013076615A (en) * | 2011-09-30 | 2013-04-25 | Mitsubishi Space Software Kk | Mobile object detection apparatus, mobile object detection program, mobile object detection method, and flying object |
WO2014064690A1 (en) * | 2012-10-23 | 2014-05-01 | Sivan Ishay | Real time assessment of picture quality |
CN107845104A (en) * | 2016-09-20 | 2018-03-27 | 意法半导体股份有限公司 | A kind of method, associated processing system, passing vehicle detecting system and vehicle for detecting passing vehicle |
CN107292266A (en) * | 2017-06-21 | 2017-10-24 | 吉林大学 | A kind of vehicle-mounted pedestrian area estimation method clustered based on light stream |
Also Published As
Publication number | Publication date |
---|---|
CN109919126A (en) | 2019-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109829501B (en) | Image processing method and device, electronic equipment and storage medium | |
CN110348537B (en) | Image processing method and device, electronic equipment and storage medium | |
CN110378976B (en) | Image processing method and device, electronic equipment and storage medium | |
CN109801270B (en) | Anchor point determining method and device, electronic equipment and storage medium | |
CN110287874B (en) | Target tracking method and device, electronic equipment and storage medium | |
CN109829863B (en) | Image processing method and device, electronic equipment and storage medium | |
CN109948494B (en) | Image processing method and device, electronic equipment and storage medium | |
CN108010060B (en) | Target detection method and device | |
CN106778773B (en) | Method and device for positioning target object in picture | |
CN111104920B (en) | Video processing method and device, electronic equipment and storage medium | |
CN113841179A (en) | Image generation method and device, electronic device and storage medium | |
CN111680646B (en) | Action detection method and device, electronic equipment and storage medium | |
CN112270288A (en) | Living body identification method, access control device control method, living body identification device, access control device and electronic device | |
CN112572281A (en) | Light intensity adjusting method and device, electronic equipment and storage medium | |
CN113486759B (en) | Dangerous action recognition method and device, electronic equipment and storage medium | |
CN112598676B (en) | Image segmentation method and device, electronic equipment and storage medium | |
CN113261011B (en) | Image processing method and device, electronic equipment and storage medium | |
CN110543849A (en) | detector configuration method and device, electronic equipment and storage medium | |
CN109919126B (en) | Method and device for detecting moving object and storage medium | |
CN114187498A (en) | Occlusion detection method and device, electronic equipment and storage medium | |
CN112330717B (en) | Target tracking method and device, electronic equipment and storage medium | |
CN109829393B (en) | Moving object detection method and device and storage medium | |
CN109889693B (en) | Video processing method and device, electronic equipment and storage medium | |
CN112860061A (en) | Scene image display method and device, electronic equipment and storage medium | |
CN111832338A (en) | Object detection method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |