CN105096338A - Moving object extraction method and device - Google Patents

Moving object extraction method and device

Info

Publication number
CN105096338A
CN105096338A
Authority
CN
China
Prior art keywords
image
moving target
displacement vector
region
bounding box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410842471.7A
Other languages
Chinese (zh)
Other versions
CN105096338B (en)
Inventor
张增
秦凡
伍小洁
杨鹤猛
赵恩伟
王森
张巍
吴新桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China South Power Grid International Co ltd
Tianjin Aerospace Zhongwei Date Systems Technology Co Ltd
Original Assignee
China South Power Grid International Co ltd
Tianjin Aerospace Zhongwei Date Systems Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China South Power Grid International Co ltd, Tianjin Aerospace Zhongwei Date Systems Technology Co Ltd
Priority to CN201410842471.7A
Publication of CN105096338A
Application granted
Publication of CN105096338B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a moving target extraction method and a device, wherein the method comprises the following steps: reading two frames of images from continuous multi-frame images according to a preset frame interval, wherein the two frames of images are a first image and a second image; dividing the first image and the second image into a plurality of partitions, projecting the partitions to determine displacement vectors of the partitions, and determining a global displacement vector of the first image relative to the second image according to the displacement vectors of the partitions; determining ORB characteristic points of the first image and the second image to obtain matching point pairs of the two frames of images; eliminating background points of the first image and the second image according to the matching point pairs and the global displacement vector to obtain foreground points of the first image and the second image; and determining an initial segmentation bounding box of a watershed algorithm according to the distribution of the foreground points and extracting a moving object image according to the initial segmentation bounding box. By the method and the device, the problem of incomplete segmentation of the moving target under the dynamic background is solved, and the characteristic points on the target are distinguished from the characteristic points on the background.

Description

Moving target extraction method and device
Technical field
The present invention relates to the field of image processing, and in particular to a moving target extraction method and device.
Background technology
Moving target detection and extraction techniques are active research topics in digital image processing and have important applications in security and traffic. Moving target extraction means segmenting the moving objects present in a video or image sequence. In the related art, methods that can realize moving target extraction include the background subtraction method, the frame difference method and the optical flow method; conventional moving target speed measurement systems can be realized based on video cameras, induction coils, UWB SAR, or pulse echo signals.
The paper "Passive optical speed measurement technique using sequence images" describes a method in which a camera mounted on an unmanned aerial vehicle and pointing vertically at the ground directly below collects sequence images, and the speed of the unmanned aerial vehicle is measured by matching SIFT feature points between the sequence images. That paper only considers the case in which the camera is placed vertically, the speed measured is the speed of the unmanned aerial vehicle itself, the case of a tilted camera is not considered, and the measurement of the speed of moving targets within the sequence images is not addressed.
In summary, the processing methods in the related art have at least the following problems: 1) they are developed for the case of a fixed camera position, so their range of application is limited; 2) for moving target extraction under a dynamic background, the common background subtraction method cannot be used, the frame difference method easily splits a single moving target into several moving targets, and the optical flow method has difficulty distinguishing the optical flow points of the target from those of the background.
Summary of the invention
The present invention provides a moving target extraction method and device, so as to at least solve the problem of inaccurate moving target extraction in the prior art.
According to one aspect of the present invention, a moving target extraction method is provided, comprising:
reading two frames of images from a continuous multi-frame image sequence according to a predetermined frame interval, wherein the two frames of images are a first image and a second image;
dividing the first image and the second image into a plurality of partitions, projecting the plurality of partitions to determine displacement vectors of the plurality of partitions, and determining a global displacement vector of the first image relative to the second image according to the displacement vectors of the plurality of partitions;
determining ORB feature points of the first image and the second image to obtain matched point pairs of the two frames of images;
eliminating background points of the first image and the second image according to the matched point pairs and the global displacement vector to obtain foreground points of the first image and the second image;
determining an initial segmentation bounding box of a watershed algorithm according to the distribution of the foreground points;
extracting a moving target image with the watershed algorithm according to the initial segmentation bounding box.
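For illustration only, the following Python sketch shows one way the first step above, reading two frames separated by a predetermined frame interval, might be realized with OpenCV; the video file name, the default interval of 1 and the function name are assumptions made for the example, not part of the claimed method.

```python
import cv2

def read_frame_pair(video_path, frame_interval=1):
    """Read two frames separated by `frame_interval` from a video file.

    Returns (first_image, second_image) as BGR arrays, or None if the
    video is too short. An interval of 1 mirrors the optional choice
    described later in the embodiment.
    """
    cap = cv2.VideoCapture(video_path)
    ok, first_image = cap.read()
    if not ok:
        cap.release()
        return None
    second_image = None
    for _ in range(frame_interval):
        ok, second_image = cap.read()
        if not ok:
            cap.release()
            return None
    cap.release()
    return first_image, second_image

# Example usage (the file name is a placeholder):
# pair = read_frame_pair("aerial_sequence.avi", frame_interval=1)
```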
Further, the plurality of partitions comprises an overall partition consisting of the whole of the first image and the whole of the second image.
Further, determining the initial segmentation bounding box of the watershed algorithm according to the distribution of the foreground points comprises:
selecting, among the foreground points, the points nearest to the image boundaries in the four directions, wherein the points nearest to the left and right boundaries are A and B respectively, and the points nearest to the upper and lower boundaries are C and D respectively;
drawing through points A and B lines perpendicular to the upper and lower image boundaries, and drawing through points C and D lines perpendicular to the left and right image boundaries, the four lines intersecting to form a rectangular area, wherein the left-right width of the rectangular area is a and the up-down height is b;
expanding the rectangular area by a/2 on each of the left and right sides and by b/2 on each of the top and bottom, so as to obtain the initial segmentation bounding box.
Further, extracting the moving target image with the watershed algorithm according to the initial segmentation bounding box comprises:
segmenting the area inside the initial segmentation bounding box into a plurality of regions with the watershed algorithm;
determining, among the plurality of regions, the regions satisfying a first condition or a second condition as regions on the moving target, wherein the first condition is that a region contains a foreground point and does not touch the initial segmentation bounding box, and the second condition is that a region touches a region on the moving target and does not touch the initial segmentation bounding box;
merging the regions on the moving target to obtain the moving target image.
Further, the above method further comprises:
determining a first bounding rectangle of the moving target image of the first image and a second bounding rectangle of the moving target image of the second image;
determining foreground matched point pairs of the first image inside the first bounding rectangle and of the second image inside the second bounding rectangle;
determining a displacement vector of the moving target according to the displacement vectors between the foreground matched point pairs;
according to the displacement vectors of the moving target determined from the continuous multi-frame images, determining a third image for which the absolute value of the difference between the displacement vector of the moving target and the global displacement vector is maximal among the continuous multi-frame images;
determining the speed of the moving target according to the third image and the states of the video camera and the laser range finder.
According to another aspect of the present invention, a moving target extraction device is provided, comprising:
a reading module, configured to read two frames of images from a continuous multi-frame image sequence according to a predetermined frame interval, wherein the two frames of images are a first image and a second image;
a global displacement vector determination module, configured to divide the first image and the second image into a plurality of partitions, project the plurality of partitions to determine displacement vectors of the plurality of partitions, and determine a global displacement vector of the first image relative to the second image according to the displacement vectors of the plurality of partitions;
a matched point determination module, configured to determine ORB feature points of the first image and the second image and obtain matched point pairs of the two frames of images;
a foreground point determination module, configured to eliminate background points of the first image and the second image according to the matched point pairs and the global displacement vector, and obtain foreground points of the first image and the second image;
an initialization module, configured to determine an initial segmentation bounding box of a watershed algorithm according to the distribution of the foreground points;
a moving target extraction module, configured to extract a moving target image with the watershed algorithm according to the initial segmentation bounding box.
Further, the plurality of partitions comprises an overall partition consisting of the whole of the first image and the whole of the second image.
Further, the initialization module comprises:
a selection unit, configured to select, among the foreground points, the points nearest to the image boundaries in the four directions, wherein the points nearest to the left and right boundaries are A and B respectively, and the points nearest to the upper and lower boundaries are C and D respectively;
a processing unit, configured to draw through points A and B lines perpendicular to the upper and lower image boundaries and draw through points C and D lines perpendicular to the left and right image boundaries, the four lines intersecting to form a rectangular area, wherein the left-right width of the rectangular area is a and the up-down height is b;
an expansion unit, configured to expand the rectangular area by a/2 on each of the left and right sides and by b/2 on each of the top and bottom, so as to obtain the initial segmentation bounding box.
Further, the moving target extraction module comprises:
a segmentation unit, configured to segment the area inside the initial segmentation bounding box into a plurality of regions with the watershed algorithm;
a determination unit, configured to determine, among the plurality of regions, the regions satisfying a first condition or a second condition as regions on the moving target, wherein the first condition is that a region contains a foreground point and does not touch the initial segmentation bounding box, and the second condition is that a region touches a region on the moving target and does not touch the initial segmentation bounding box;
a merging unit, configured to merge the regions on the moving target to obtain the moving target image.
Further, the above device further comprises a movement speed determination module, wherein the movement speed determination module comprises:
a first determination unit, configured to determine a first bounding rectangle of the moving target image of the first image and a second bounding rectangle of the moving target image of the second image;
a second determination unit, configured to determine foreground matched point pairs of the first image inside the first bounding rectangle and of the second image inside the second bounding rectangle;
a third determination unit, configured to determine a displacement vector of the moving target according to the displacement vectors between the foreground matched point pairs;
a fourth determination unit, configured to determine, according to the displacement vectors of the moving target determined from the continuous multi-frame images, a third image for which the absolute value of the difference between the displacement vector of the moving target and the global displacement vector is maximal among the continuous multi-frame images;
a fifth determination unit, configured to determine the speed of the moving target according to the third image and the states of the video camera and the laser range finder.
By means of the present invention, long-range moving target extraction and speed measurement under a dynamic background are achieved, foreground feature points are separated from background feature points, the moving target is segmented completely and automatically, and the global displacement vector is little affected by the local displacement of the target.
Brief description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and form a part of the present application. The schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flowchart of a moving target extraction method according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of a moving target extraction device according to an embodiment of the present invention;
Fig. 3 is a flowchart of an optional moving target speed determination method according to an embodiment of the present invention;
Fig. 4 is a first schematic diagram of an optional image partition according to an embodiment of the present invention;
Fig. 5 is a second schematic diagram of an optional image partition according to an embodiment of the present invention.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, as long as there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.
A moving target extraction method is provided in this embodiment. Fig. 1 is a flowchart of the moving target extraction method according to an embodiment of the present invention. As shown in Fig. 1, the flow comprises the following steps:
Step S102: two frames of images are read from a continuous multi-frame image sequence according to a predetermined frame interval, wherein the two frames of images are a first image and a second image.
Optionally, the predetermined frame interval is 1, and the first image is the image before the second image.
Step S104: the first image and the second image are divided into a plurality of partitions, the plurality of partitions are projected to determine the displacement vectors of the plurality of partitions, and the global displacement vector of the first image relative to the second image is determined according to the displacement vectors of the plurality of partitions.
Optionally, the partitions comprise partial partitions covering parts of the image and an overall partition consisting of the whole image as one partition, so that a combination of local and global information can be realized.
Step S106: the ORB feature points of the first image and the second image are determined, and the matched point pairs of the two frames of images are obtained.
Step S108: the background points of the first image and the second image are eliminated according to the matched point pairs and the global displacement vector, and the foreground points of the first image and the second image are obtained.
Step S110: the initial segmentation bounding box of the watershed algorithm is determined according to the distribution of the foreground points.
Step S112: the moving target image is extracted with the watershed algorithm according to the initial segmentation bounding box.
In an optional implementation of the embodiment of the present invention, the above step S110 comprises:
A. selecting, among the foreground points, the points nearest to the image boundaries in the four directions, wherein the points nearest to the left and right boundaries are A and B respectively, and the points nearest to the upper and lower boundaries are C and D respectively;
B. drawing through points A and B lines perpendicular to the upper and lower image boundaries, and drawing through points C and D lines perpendicular to the left and right image boundaries, the four lines intersecting to form a rectangular area, wherein the left-right width of the rectangular area is a and the up-down height is b;
C. expanding the rectangular area by a/2 on each of the left and right sides and by b/2 on each of the top and bottom, so as to obtain the initial segmentation bounding box.
In an optional implementation of the embodiment of the present invention, the above step S112 of extracting the moving target image with the watershed algorithm according to the initial segmentation bounding box comprises:
1. segmenting the area inside the initial segmentation bounding box into a plurality of regions with the watershed algorithm;
2. determining, among the plurality of regions, the regions satisfying a first condition or a second condition as regions on the moving target, wherein the first condition is that a region contains a foreground point and does not touch the initial segmentation bounding box, and the second condition is that a region touches a region on the moving target and does not touch the initial segmentation bounding box;
3. merging the regions on the moving target to obtain the moving target image.
In an optional implementation of the embodiment of the present invention, the speed of the moving target can also be determined on the basis of the above method; to this end, the above method further comprises:
determining a first bounding rectangle of the moving target image of the first image and a second bounding rectangle of the moving target image of the second image;
determining foreground matched point pairs of the first image inside the first bounding rectangle and of the second image inside the second bounding rectangle;
determining a displacement vector of the moving target according to the displacement vectors between the foreground matched point pairs;
according to the displacement vectors of the moving target determined from the continuous multi-frame images, determining a third image for which the absolute value of the difference between the displacement vector of the moving target and the global displacement vector is maximal among the continuous multi-frame images;
determining the speed of the moving target according to the third image and the states of the video camera and the laser range finder.
A moving target extraction device is also provided in this embodiment. The device is used to realize the above embodiment and its preferred implementations, and what has already been explained is not repeated. As used below, the term "module" may be a combination of software and/or hardware that realizes a predetermined function. Although the devices described in the following embodiments are preferably realized in software, a realization in hardware, or in a combination of software and hardware, is also possible and conceivable.
Fig. 2 is a structural block diagram of the moving target extraction device according to an embodiment of the present invention. As shown in Fig. 2, the device comprises:
a reading module 10, configured to read two frames of images from a continuous multi-frame image sequence according to a predetermined frame interval, wherein the two frames of images are a first image and a second image;
a global displacement vector determination module 20, connected to the reading module 10 and configured to divide the first image and the second image into a plurality of partitions, project the plurality of partitions to determine the displacement vectors of the plurality of partitions, and determine the global displacement vector of the first image relative to the second image according to the displacement vectors of the plurality of partitions;
a matched point determination module 30, connected to the global displacement vector determination module 20 and configured to determine the ORB feature points of the first image and the second image and obtain the matched point pairs of the two frames of images;
a foreground point determination module 40, connected to the matched point determination module 30 and configured to eliminate the background points of the first image and the second image according to the matched point pairs and the global displacement vector, and obtain the foreground points of the first image and the second image;
an initialization module 50, connected to the foreground point determination module 40 and configured to determine the initial segmentation bounding box of the watershed algorithm according to the distribution of the foreground points;
a moving target extraction module 60, connected to the initialization module 50 and configured to extract the moving target image with the watershed algorithm according to the initial segmentation bounding box.
In an optional implementation of the embodiment of the present invention, the above plurality of partitions comprises an overall partition consisting of the whole of the first image and the whole of the second image.
In an optional implementation of the embodiment of the present invention, the initialization module 50 comprises:
a selection unit, configured to select, among the foreground points, the points nearest to the image boundaries in the four directions, wherein the points nearest to the left and right boundaries are A and B respectively, and the points nearest to the upper and lower boundaries are C and D respectively;
a processing unit, configured to draw through points A and B lines perpendicular to the upper and lower image boundaries and draw through points C and D lines perpendicular to the left and right image boundaries, the four lines intersecting to form a rectangular area, wherein the left-right width of the rectangular area is a and the up-down height is b;
an expansion unit, configured to expand the rectangular area by a/2 on each of the left and right sides and by b/2 on each of the top and bottom, so as to obtain the initial segmentation bounding box.
In an optional implementation of the embodiment of the present invention, the moving target extraction module 60 comprises:
a segmentation unit, configured to segment the area inside the initial segmentation bounding box into a plurality of regions with the watershed algorithm;
a determination unit, configured to determine, among the plurality of regions, the regions satisfying a first condition or a second condition as regions on the moving target, wherein the first condition is that a region contains a foreground point and does not touch the initial segmentation bounding box, and the second condition is that a region touches a region on the moving target and does not touch the initial segmentation bounding box;
a merging unit, configured to merge the regions on the moving target to obtain the moving target image.
In an optional implementation of the embodiment of the present invention, the above device further comprises a movement speed determination module, wherein the movement speed determination module comprises:
a first determination unit, configured to determine a first bounding rectangle of the moving target image of the first image and a second bounding rectangle of the moving target image of the second image;
a second determination unit, configured to determine foreground matched point pairs of the first image inside the first bounding rectangle and of the second image inside the second bounding rectangle;
a third determination unit, configured to determine a displacement vector of the moving target according to the displacement vectors between the foreground matched point pairs;
a fourth determination unit, configured to determine, according to the displacement vectors of the moving target determined from the continuous multi-frame images, a third image for which the absolute value of the difference between the displacement vector of the moving target and the global displacement vector is maximal among the continuous multi-frame images;
a fifth determination unit, configured to determine the speed of the moving target according to the third image and the states of the video camera and the laser range finder.
Optional embodiment
In this optional embodiment, a method for moving target extraction and speed measurement under a dynamic background is provided. While an aerial platform (a manned helicopter, an unmanned helicopter, an airship, etc.) hovers, the moving target is extracted and its speed is calculated by processing sequential video frames or continuously captured images. As shown in Fig. 3, the method specifically comprises the following steps:
Step S302: image storage. The m frames of images continuously captured by the video camera or camera are successively stored in f_1(x, y), f_2(x, y), f_3(x, y), ..., f_m(x, y). When the current frame or image f_0(x, y) is input, f_{m-1}(x, y) is stored into f_m(x, y), f_{m-2}(x, y) is stored into f_{m-1}(x, y), ..., f_1(x, y) is stored into f_2(x, y), and f_0(x, y) is stored into f_1(x, y); f_1(x, y) and f_3(x, y) are stored into images PicA and PicB respectively. The recording period of the video frames or the shooting period of the camera is T.
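The frame storage of step S302 behaves like a fixed-length buffer whose newest element plays the role of f_1. The sketch below is one possible realization using Python's collections.deque; the class and method names are assumptions of the example, and the PicA/PicB selection with k = 3 mirrors the f_1/f_3 choice above.

```python
from collections import deque

class FrameStore:
    """Keep the m most recent frames; pushing the current frame f0 shifts the
    older frames back, mimicking f_{m-1} -> f_m, ..., f_0 -> f_1."""

    def __init__(self, m):
        self.buffer = deque(maxlen=m)   # buffer[0] plays the role of f_1

    def push(self, frame):
        self.buffer.appendleft(frame)   # newest frame becomes f_1

    def pic_a_pic_b(self, k=3):
        """Return (PicA, PicB) = (f_1, f_k); k = 3 matches the embodiment."""
        if len(self.buffer) < k:
            return None
        return self.buffer[0], self.buffer[k - 1]
```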
In this optional embodiment, a global displacement vector calculation method combining partition projection and whole-image projection is adopted, which reduces the influence of locally moving objects, so that the obtained global displacement vector is more accurate.
Step S304: ORB feature point extraction. The ORB feature points of PicA and PicB are calculated, and the matched point pairs of the two frames of images are obtained.
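A minimal sketch of step S304 using OpenCV's ORB implementation is given below; the detector parameters and the brute-force Hamming matcher with cross-checking are conventional choices assumed for the example, as the patent does not specify them.

```python
import cv2

def orb_matches(pic_a, pic_b, max_features=500):
    """Detect ORB keypoints in PicA/PicB and return matched point pairs
    as two lists of (x, y) coordinates, sorted by descriptor distance."""
    gray_a = cv2.cvtColor(pic_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(pic_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=max_features)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return [], []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    pts_a = [kp_a[m.queryIdx].pt for m in matches]
    pts_b = [kp_b[m.trainIdx].pt for m in matches]
    return pts_a, pts_b
```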
Step S306: global motion vector estimation. The projection method is used to calculate the global displacement vector of PicA relative to PicB. Suppose the image width is w and the height is h, the upper-left corner is the image origin, and the downward and rightward directions are positive. The position of each partition is expressed by the x coordinate of its upper-left starting point, the y coordinate of its upper-left starting point, the rectangle width and the rectangle height. As shown in Fig. 4 and Fig. 5, the partitions A1, A2, A3, A4, A5 and A6 can be expressed as:
A1: (w/55, h/55, 17w/55, 26h/55);
A2: (37w/55, h/55, 17w/55, 26h/55);
A3: (w/55, 28h/55, 17w/55, 26h/55);
A4: (37w/55, 28h/55, 17w/55, 26h/55);
A5: (19w/55, 14h/55, 17w/55, 26h/55);
A6: (w/55, h/55, 53w/55, 53h/55).
Correlation computation on the projections of each partition yields the displacement vector of that partition. These partition displacement vectors are combined, for indices a, b, c ∈ {1, 2, 3, 4, 5} with a ≠ b, b ≠ c and a ≠ c, into a quantity S according to formula (1).
The global displacement vector is the one obtained when S attains its minimum value S_min.
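The projection correlation of step S306 can be sketched as follows: each partition is reduced to row and column intensity projections, and the shift that maximises their correlation is taken as the partition displacement vector. Because formula (1) is not reproduced legibly in the published text, the correlation measure, the search range and the use of the median to combine the six partition vectors are assumptions made for the example and only approximate the S_min selection.

```python
import numpy as np

def projection_shift(region_a, region_b, max_shift=30):
    """Estimate the (dx, dy) shift between two equally sized grayscale
    partitions by correlating their column and row intensity projections."""
    col_a, col_b = region_a.mean(axis=0), region_b.mean(axis=0)  # projections onto x
    row_a, row_b = region_a.mean(axis=1), region_b.mean(axis=1)  # projections onto y

    def best_shift(p, q):
        best, best_score = 0, -np.inf
        for s in range(-max_shift, max_shift + 1):
            if s >= 0:
                a, b = p[s:], q[:len(q) - s]
            else:
                a, b = p[:s], q[-s:]
            if len(a) == 0:
                continue
            score = np.dot(a - a.mean(), b - b.mean()) / len(a)
            if score > best_score:
                best, best_score = s, score
        return best

    return best_shift(col_a, col_b), best_shift(row_a, row_b)

def global_displacement(gray_a, gray_b, partitions):
    """Apply projection_shift to every partition (x, y, w, h) of both images
    and combine the results; the median is used here as a stand-in for the
    S_min selection of formula (1)."""
    shifts = [projection_shift(gray_a[y:y + h, x:x + w].astype(float),
                               gray_b[y:y + h, x:x + w].astype(float))
              for (x, y, w, h) in partitions]
    dxs, dys = zip(*shifts)
    return float(np.median(dxs)), float(np.median(dys))
```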
Step S308: foreground point extraction.
The background points of images PicA and PicB are removed according to the ORB feature points matched in step S304 and the global displacement vector calculated in step S306, and the foreground points are obtained. Suppose PicA and PicB have n matched point pairs, denoted P_A1, P_A2, ..., P_An-1, P_An and P_B1, P_B2, ..., P_Bn-1, P_Bn respectively, with corresponding displacement vectors, and let
P = (P_min + P_max) / 2. (5)
A matched pair is taken as a foreground match point when its displacement vector satisfies the corresponding relation determined by P and the global displacement vector.
Determining the foreground feature points by ORB feature point matching combined with the global displacement vector makes the operation fast.
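A foreground/background split consistent with this step can be sketched as follows, under the assumption that a matched pair whose displacement differs markedly from the global displacement vector belongs to the foreground; the fixed pixel threshold is an assumption of the example, since the exact decision rule built on formula (5) is not fully legible in the published text.

```python
import numpy as np

def split_foreground(pts_a, pts_b, global_shift, threshold=3.0):
    """Classify matched point pairs: pairs whose displacement is close to the
    global displacement vector are treated as background and discarded,
    the remaining pairs are returned as foreground points of PicA and PicB."""
    gx, gy = global_shift
    foreground_a, foreground_b = [], []
    for (xa, ya), (xb, yb) in zip(pts_a, pts_b):
        dx, dy = xb - xa, yb - ya
        if np.hypot(dx - gx, dy - gy) > threshold:
            foreground_a.append((xa, ya))
            foreground_b.append((xb, yb))
    return foreground_a, foreground_b
```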
Step S310: initial segmentation bounding box.
According to the distribution of the foreground points (the points on the moving target) obtained in step S308, the initial segmentation bounding box of the watershed algorithm is obtained as follows:
the points among the foreground points nearest to the image boundaries in the four directions are selected; the points nearest to the left and right boundaries are A and B respectively, and the points nearest to the upper and lower boundaries are C and D respectively; lines perpendicular to the upper and lower image boundaries are drawn through points A and B, and lines perpendicular to the left and right image boundaries are drawn through points C and D; the rectangular area formed by the intersection of the four lines is denoted RectA, its left-right width is a and its up-down height is b; RectA is expanded by a/2 on each of the left and right sides and by b/2 on each of the top and bottom, giving the rectangular area RectB.
In this optional embodiment, the initial segmentation bounding box of the watershed algorithm is obtained in a completely automatic manner, so that tedious manual operation can be avoided.
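A direct transcription of the RectA/RectB construction of step S310 is sketched below; clipping to the image border is an added safeguard not stated in the patent, and the (x, y, w, h) rectangle convention is an assumption of the example.

```python
def initial_bounding_box(foreground_pts, img_w, img_h):
    """Build RectB from the foreground points: RectA is the tight rectangle
    through the extreme foreground points (A, B, C, D); RectB expands it by
    half its width on the left and right and half its height above and below,
    clipped to the image. Returns (x, y, w, h)."""
    xs = [p[0] for p in foreground_pts]
    ys = [p[1] for p in foreground_pts]
    left, right = min(xs), max(xs)      # points A and B define the x extent
    top, bottom = min(ys), max(ys)      # points C and D define the y extent
    a, b = right - left, bottom - top   # width and height of RectA

    x0 = max(0, int(left - a / 2))
    y0 = max(0, int(top - b / 2))
    x1 = min(img_w - 1, int(right + a / 2))
    y1 = min(img_h - 1, int(bottom + b / 2))
    return x0, y0, x1 - x0, y1 - y0     # RectB
```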
Step S312: moving target extraction.
The outer boundary of the target is sought. The watershed algorithm is used to divide RectB into several regions, and a region is first considered to be a region on the moving target if it satisfies condition a) or condition b):
a) the region contains a foreground point and does not touch the border of RectB;
b) the region touches a region on the moving target and does not touch the border of RectB.
The regions on the moving target are merged to obtain the segmented image, which is the detected moving target. The bounding rectangles of the moving target in PicA and PicB are denoted RectCA and RectCB respectively. The centre of RectCA is PointCA with coordinates (CAx, CAy), its width is WidthCA pixels and its height is HeightCA pixels; the centre of RectCB is PointCB with coordinates (CBx, CBy), its width is WidthCB pixels and its height is HeightCB pixels. The minimum size of a detectable moving target is RectS pixels. A moving target is considered detected when the following relation is satisfied:
WidthCA × HeightCA ≥ RectS,
|WidthCA × HeightCA − WidthCB × HeightCB| / (WidthCA × HeightCA) ≥ 0.15 × RectS. (7)
Through this step, the moving target can be segmented completely, avoiding the situation in which a single moving target is segmented into several targets.
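The sketch below illustrates step S312 in OpenCV terms: the interior of RectB is over-segmented, cv2.watershed labels the regions, and regions are accepted by conditions a) and b). The marker construction via Otsu thresholding and connected components is an assumption of the example (the patent does not state how the watershed is seeded), and only one growth pass of condition b) is shown.

```python
import cv2
import numpy as np

def extract_target(pic, rect_b, foreground_pts):
    """Return a binary mask of the moving target within RectB = (x, y, w, h)."""
    x, y, w, h = rect_b
    roi = pic[y:y + h, x:x + w].copy()
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

    # Over-segment the ROI: Otsu threshold + connected components as watershed seeds.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, markers = cv2.connectedComponents(binary)
    markers = markers.astype(np.int32)
    cv2.watershed(roi, markers)          # after the call, -1 marks watershed lines

    # Labels touching the RectB border are excluded by both conditions.
    border_labels = set(np.unique(np.concatenate(
        [markers[0, :], markers[-1, :], markers[:, 0], markers[:, -1]])))

    # Condition a): regions containing a foreground point, away from the border.
    target_labels = set()
    for (px, py) in foreground_pts:
        rx, ry = int(px) - x, int(py) - y
        if 0 <= rx < w and 0 <= ry < h:
            lbl = int(markers[ry, rx])
            if lbl > 0 and lbl not in border_labels:
                target_labels.add(lbl)

    # Condition b): one growth pass over neighbouring non-border regions.
    mask = np.isin(markers, list(target_labels)).astype(np.uint8)
    dilated = cv2.dilate(mask, np.ones((3, 3), np.uint8))
    for lbl in np.unique(markers[dilated.astype(bool)]):
        lbl = int(lbl)
        if lbl > 0 and lbl not in border_labels:
            target_labels.add(lbl)

    # Merge the accepted regions into the moving-target mask (within RectB).
    return np.isin(markers, list(target_labels)).astype(np.uint8) * 255
```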
Step S314: target displacement vector calculation.
The number of foreground matched point pairs of images PicA and PicB lying within the regions RectCA and RectCB is counted and denoted PointM; from the displacement vectors between these matched points, taken in turn, the displacement vector of the moving target is obtained.
In this step, the ORB feature points on the target are used to calculate the speed of the moving target, and the multi-point calculation reduces the calculation error.
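Step S314 can be sketched as follows; whether the patent combines the PointM displacement vectors by a plain mean is not stated explicitly, so the averaging below is an assumption of the example.

```python
import numpy as np

def target_displacement(fg_a, fg_b, rect_ca, rect_cb):
    """Average the displacement vectors of foreground match pairs whose
    endpoints lie inside the bounding rectangles RectCA and RectCB of the
    extracted target; rectangles are (x, y, w, h). Returns (dx, dy) or None."""
    def inside(p, rect):
        x, y, w, h = rect
        return x <= p[0] <= x + w and y <= p[1] <= y + h

    vectors = [(b[0] - a[0], b[1] - a[1])
               for a, b in zip(fg_a, fg_b)
               if inside(a, rect_ca) and inside(b, rect_cb)]  # PointM pairs
    if not vectors:
        return None
    return tuple(np.mean(vectors, axis=0))
```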
Step S316: moving target speed calculation.
f_3(x, y), f_4(x, y), ..., f_k(x, y) are successively stored into image PicB, and the displacement vector S_c (in pixels) is computed for each according to the processing of steps 2) to 7), i.e. steps S304 to S314.
When S_c attains its maximum value S_max, the image stored in PicB is f_z(x, y). The speed of the moving target is then calculated as follows: for f_1(x, y) the camera focal length is f_1 and the distance measured by the laser range finder is l_1; for f_z(x, y) the camera focal length is f_y and the distance measured by the laser range finder is l_y; the pixel size of the video camera is μ. The measured speed of the moving target is:
v = S_max × μ × (l_1/f_1 − l_y/f_y) / (2(z − 1)T). (10)
The speed of the moving target can thus be calculated from the states and measured values of the video camera and the laser range finder, which makes the method suitable for long-range aerial observation and speed measurement.
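The reconstructed formula (10) can be evaluated directly, as in the sketch below; the grouping of terms (and in particular the sign inside the bracket) follows the garbled published text, so this should be read as an approximation of the original relation rather than its definitive form.

```python
def target_speed(s_max, mu, f1, l1, fy, ly, z, t):
    """Evaluate the reconstructed speed formula (10):
        v = s_max * mu * (l1 / f1 - ly / fy) / (2 * (z - 1) * t)
    s_max : maximum of S_c over the sequence, in pixels
    mu    : camera pixel size (metres per pixel)
    f1,l1 : focal length and laser-measured distance for frame f_1
    fy,ly : focal length and laser-measured distance for frame f_z
    z     : index of the frame at which S_c is maximal
    t     : frame period T in seconds
    """
    return s_max * mu * (l1 / f1 - ly / fy) / (2.0 * (z - 1) * t)
```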
This optional embodiment solves the problem that a moving target is usually segmented incompletely under a dynamic background, and distinguishes the feature points on the target from the feature points on the background. It also solves the problem of measuring the speed of a moving target with a moving camera, and can be applied to aerial platforms to realize long-range speed measurement of moving targets.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be realized with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by a plurality of computing devices; optionally, they can be realized with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that herein, or they can be made into individual integrated circuit modules, or a plurality of the modules or steps can be made into a single integrated circuit module. In this way, the present invention is not restricted to any specific combination of hardware and software.
The foregoing is only the preferred embodiment of the present invention and is not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A moving target extraction method, characterized by comprising:
reading two frames of images from a continuous multi-frame image sequence according to a predetermined frame interval, wherein the two frames of images are a first image and a second image;
dividing the first image and the second image into a plurality of partitions, projecting the plurality of partitions to determine displacement vectors of the plurality of partitions, and determining a global displacement vector of the first image relative to the second image according to the displacement vectors of the plurality of partitions;
determining ORB feature points of the first image and the second image to obtain matched point pairs of the two frames of images; eliminating background points of the first image and the second image according to the matched point pairs and the global displacement vector to obtain foreground points of the first image and the second image;
determining an initial segmentation bounding box of a watershed algorithm according to the distribution of the foreground points;
extracting a moving target image with the watershed algorithm according to the initial segmentation bounding box.
2. The method according to claim 1, characterized in that the plurality of partitions comprises an overall partition consisting of the whole of the first image and the whole of the second image.
3. The method according to claim 1, characterized in that determining the initial segmentation bounding box of the watershed algorithm according to the distribution of the foreground points comprises:
selecting, among the foreground points, the points nearest to the image boundaries in the four directions, wherein the points nearest to the left and right boundaries are A and B respectively, and the points nearest to the upper and lower boundaries are C and D respectively;
drawing through points A and B lines perpendicular to the upper and lower image boundaries, and drawing through points C and D lines perpendicular to the left and right image boundaries, the four lines intersecting to form a rectangular area, wherein the left-right width of the rectangular area is a and the up-down height is b;
expanding the rectangular area by a/2 on each of the left and right sides and by b/2 on each of the top and bottom, so as to obtain the initial segmentation bounding box.
4. The method according to claim 1 or 3, characterized in that extracting the moving target image with the watershed algorithm according to the initial segmentation bounding box comprises:
segmenting the area inside the initial segmentation bounding box into a plurality of regions with the watershed algorithm;
determining, among the plurality of regions, the regions satisfying a first condition or a second condition as regions on the moving target, wherein the first condition is that a region contains a foreground point and does not touch the initial segmentation bounding box, and the second condition is that a region touches a region on the moving target and does not touch the initial segmentation bounding box;
merging the regions on the moving target to obtain the moving target image.
5. The method according to any one of claims 1 to 4, characterized by further comprising:
determining a first bounding rectangle of the moving target image of the first image and a second bounding rectangle of the moving target image of the second image;
determining foreground matched point pairs of the first image inside the first bounding rectangle and of the second image inside the second bounding rectangle;
determining a displacement vector of the moving target according to the displacement vectors between the foreground matched point pairs;
according to the displacement vectors of the moving target determined from the continuous multi-frame images, determining a third image for which the absolute value of the difference between the displacement vector of the moving target and the global displacement vector is maximal among the continuous multi-frame images;
determining the speed of the moving target according to the third image and the states of the video camera and the laser range finder.
6. A moving target extraction device, characterized by comprising:
a reading module, configured to read two frames of images from a continuous multi-frame image sequence according to a predetermined frame interval, wherein the two frames of images are a first image and a second image;
a global displacement vector determination module, configured to divide the first image and the second image into a plurality of partitions, project the plurality of partitions to determine displacement vectors of the plurality of partitions, and determine a global displacement vector of the first image relative to the second image according to the displacement vectors of the plurality of partitions;
a matched point determination module, configured to determine ORB feature points of the first image and the second image and obtain matched point pairs of the two frames of images;
a foreground point determination module, configured to eliminate background points of the first image and the second image according to the matched point pairs and the global displacement vector, and obtain foreground points of the first image and the second image;
an initialization module, configured to determine an initial segmentation bounding box of a watershed algorithm according to the distribution of the foreground points;
a moving target extraction module, configured to extract a moving target image with the watershed algorithm according to the initial segmentation bounding box.
7. The device according to claim 6, characterized in that the plurality of partitions comprises an overall partition consisting of the whole of the first image and the whole of the second image.
8. The device according to claim 6, characterized in that the initialization module comprises:
a selection unit, configured to select, among the foreground points, the points nearest to the image boundaries in the four directions, wherein the points nearest to the left and right boundaries are A and B respectively, and the points nearest to the upper and lower boundaries are C and D respectively;
a processing unit, configured to draw through points A and B lines perpendicular to the upper and lower image boundaries and draw through points C and D lines perpendicular to the left and right image boundaries, the four lines intersecting to form a rectangular area, wherein the left-right width of the rectangular area is a and the up-down height is b;
an expansion unit, configured to expand the rectangular area by a/2 on each of the left and right sides and by b/2 on each of the top and bottom, so as to obtain the initial segmentation bounding box.
9. The device according to claim 6 or 8, characterized in that the moving target extraction module comprises:
a segmentation unit, configured to segment the area inside the initial segmentation bounding box into a plurality of regions with the watershed algorithm;
a determination unit, configured to determine, among the plurality of regions, the regions satisfying a first condition or a second condition as regions on the moving target, wherein the first condition is that a region contains a foreground point and does not touch the initial segmentation bounding box, and the second condition is that a region touches a region on the moving target and does not touch the initial segmentation bounding box;
a merging unit, configured to merge the regions on the moving target to obtain the moving target image.
10. The device according to any one of claims 1 to 4, characterized by further comprising a movement speed determination module, wherein the movement speed determination module comprises:
a first determination unit, configured to determine a first bounding rectangle of the moving target image of the first image and a second bounding rectangle of the moving target image of the second image;
a second determination unit, configured to determine foreground matched point pairs of the first image inside the first bounding rectangle and of the second image inside the second bounding rectangle;
a third determination unit, configured to determine a displacement vector of the moving target according to the displacement vectors between the foreground matched point pairs;
a fourth determination unit, configured to determine, according to the displacement vectors of the moving target determined from the continuous multi-frame images, a third image for which the absolute value of the difference between the displacement vector of the moving target and the global displacement vector is maximal among the continuous multi-frame images;
a fifth determination unit, configured to determine the speed of the moving target according to the third image and the states of the video camera and the laser range finder.
CN201410842471.7A 2014-12-30 2014-12-30 Moving object extraction method and device Active CN105096338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410842471.7A CN105096338B (en) 2014-12-30 2014-12-30 Moving object extraction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410842471.7A CN105096338B (en) 2014-12-30 2014-12-30 Moving object extraction method and device

Publications (2)

Publication Number Publication Date
CN105096338A true CN105096338A (en) 2015-11-25
CN105096338B CN105096338B (en) 2018-06-22

Family

ID=54576677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410842471.7A Active CN105096338B (en) 2014-12-30 2014-12-30 Moving object extraction method and device

Country Status (1)

Country Link
CN (1) CN105096338B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102098440A (en) * 2010-12-16 2011-06-15 北京交通大学 Electronic image stabilizing method and electronic image stabilizing system aiming at moving object detection under camera shake
US20120250979A1 (en) * 2011-03-29 2012-10-04 Kazuki Yokoyama Image processing apparatus, method, and program
CN103065126A (en) * 2012-12-30 2013-04-24 信帧电子技术(北京)有限公司 Re-identification method of different scenes on human body images
CN103971114A (en) * 2014-04-23 2014-08-06 天津航天中为数据系统科技有限公司 Forest fire detection method based on aerial remote sensing
CN104200487A (en) * 2014-08-01 2014-12-10 广州中大数字家庭工程技术研究中心有限公司 Target tracking method based on ORB characteristics point matching

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020042156A1 (en) * 2018-08-31 2020-03-05 深圳市道通智能航空技术有限公司 Motion area detection method and device, and unmanned aerial vehicle
CN109460764A (en) * 2018-11-08 2019-03-12 中南大学 A kind of satellite video ship monitoring method of combination brightness and improvement frame differential method
CN109460764B (en) * 2018-11-08 2022-02-18 中南大学 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method
CN114596247A (en) * 2020-11-30 2022-06-07 宏碁股份有限公司 Blood vessel detection device and image-based blood vessel detection method
CN115797164A (en) * 2021-09-09 2023-03-14 同方威视技术股份有限公司 Image splicing method, device and system in fixed view field
CN115797164B (en) * 2021-09-09 2023-12-12 同方威视技术股份有限公司 Image stitching method, device and system in fixed view field
CN117152199A (en) * 2023-08-30 2023-12-01 成都信息工程大学 Dynamic target motion vector estimation method, system, equipment and storage medium
CN117152199B (en) * 2023-08-30 2024-05-31 成都信息工程大学 Dynamic target motion vector estimation method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN105096338B (en) 2018-06-22


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant