CN101170683B - Target moving object tracking device - Google Patents

Target moving object tracking device

Info

Publication number
CN101170683B
CN101170683B
Authority
CN
China
Prior art keywords
described
pixel
contour images
pixel value
image
Prior art date
Application number
CN2007101425988A
Other languages
Chinese (zh)
Other versions
CN101170683A (en)
Inventor
古川聪
Original Assignee
松下电工株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to JP2006-293079 (JP4725490B2)
Priority to JP2006-293078 (JP4915655B2)
Priority to JP2007-110915 (JP4867771B2)
Application filed by 松下电工株式会社
Publication of CN101170683A
Application granted
Publication of CN101170683B


Abstract

A target moving object tracking device takes a time series of picture images of a target moving object and tracks the movement of the moving object in the picture images in order to display an enlarged view of the moving object. The device includes a template memory storing a template image which is compared with each of the time-series outline images derived from the picture images, so as to determine, for each outline image, a partial area in match with the template image and to extract that partial area as a moving object outline image. The template image is constantly updated by being replaced with a combination of the previous moving object outline images so as to accurately reflect the moving object.

Description

Target moving object tracking device

Technical field

The present invention relates to a target moving object tracking device, and more particularly to a device for tracking a possible intruder by means of a video camera.

Background art

There is an increasing need to track and identify a person's activity by means of a video camera in a restricted area around a door or the like. For this purpose, prior-art tracking devices are typically configured to determine a moving object region in each of a time series of monitoring images by examining the differences between the images, and to enlarge the moving object region thus determined. However, the moving object region becomes smaller as the moving object moves away, which gives the enlarged image of the moving object region very poor resolution. To address this problem, the paper "Human Tracking Using Temporal Averaging Silhouette with an Active Camera" (Transactions of the Institute of Electronics, Information and Communication Engineers, ISSN 0915-1923, Vol. J88-D-II, No. 2, pp. 291-301, February 1, 2005) proposes another scheme for determining the moving object region. The paper proposes determining the moving object region based on motion vectors (optical flow) obtained at selected points in each of successive monitoring images. First, the scheme obtains a detection frame surrounding a moving object detected by background differencing. It then obtains motion vectors for regions selected respectively inside and outside the detection frame in each of two consecutive images, distinguishes the moving object region from the background based on an analysis of the motion vectors, and extracts the outline of the moving object in the current image in order to determine the shape and center of the moving object. Although this scheme is found to be effective in an environment where only one moving object is expected, it is rather difficult to identify the moving object when more than one moving object is expected within the field of view of the camera. To mitigate this shortcoming, it might be thought effective to rely on a template by which the target moving object can be distinguished from the other moving objects. However, because the outline of a moving object is defined as a set of parts having the same motion vector, the exact shape of the moving object is rather difficult to extract. Therefore, even with the addition of a template, the above scheme is not satisfactory for reliably determining the moving object.

Summary of the invention

In view of the above problem, the present invention has been made to provide a target moving object tracking device capable of determining a target moving object with improved accuracy, for use in identifying the target moving object.

A device according to the present invention comprises: a picture image memory (20) configured to store a time series of real picture images taken by a video camera (10) of an observation area covering a possible target moving object; and a display (30) configured to display a selected one or more of the real picture images at a desired magnification. The device also comprises: an outline image processor (40) configured to provide outline images respectively from the real picture images; and a template memory (60) configured to store a template image for identifying the target moving object. The device further comprises a moving object locator (70) configured to compare each of the outline images with the template image in order to detect, in each outline image, a local area in match with the template image, and to obtain position data of the target moving object within the observation area based on the local area detected as matching the template image. The device includes an enlarged picture generating means (80) which, based on the position data, extracts an enlarged image from the part of the real picture image corresponding to the local area of the outline image detected as matching the template image, and displays the enlarged picture view on the display. The present invention is characterized in that the moving object locator (70) is configured to extract a moving object outline image from each of the outline images corresponding to the local area detected as matching the template image, and that a template update means (62) is provided for updating the template image by replacing it with a combination of the current one of the moving object outline images and one or more previous ones of the moving object outline images. With this configuration, the template image is constantly updated so as to well reflect the current and previous outline images once they are detected as matching the template image. Accordingly, the outlines of relatively invariant parts of the human body, such as the head or shoulders (i.e., parts that are less subject to shape fluctuation during movement of the human body than other parts such as the arms or legs), can be accumulated and weighted in the template image, providing a solid basis for reliable recognition of the moving object. In addition, any small omission of a part of the moving object in one of the moving object outline images can be supplemented by another of the moving object outline images, so that the template image approximates the target moving object as closely as possible, which results in accurate determination of the target moving object based on the comparison between the outline images and the template image.

Preferably, the outline image processor is configured to provide outline images defined by binary data, so as to reduce the memory requirement for storing the outline images when implementing the device.

Alternatively, the outline image processor may be configured to provide outline images defined by discrete gray-level data, so that, when the device is implemented with sufficient memory, a more accurate comparison with the template image is possible and a more accurate template image is obtained.

In this respect, the outline image processor may be configured to obtain the contrast of the template image, and to provide an outline image defined by binary data when the contrast exceeds a predetermined reference, and an outline image defined by gray-level data when the contrast is below the reference. Thus, the device can operate optimally depending on the contrast of the constantly updated template image, thereby achieving consistent detection of the target moving object.

In order to determine the contrast of the template image, the outline image processor preferably detects an average pixel value, which is the mean of the pixel values assigned to the pixels in each of a plurality of divisions of the template image, and judges that the contrast is below the reference when any one of the divisions is detected to have an average pixel value below a threshold, or when the average pixel value detected for any one of the divisions is lower than the average pixel value detected for another of the divisions by more than a predetermined extent.
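A minimal sketch of this contrast check, assuming the template is held as a NumPy array; the division grid, the low threshold, and the allowed spread are illustrative assumptions, not values prescribed by the patent:

```python
import numpy as np

def contrast_is_low(template: np.ndarray, grid=(2, 2),
                    low_threshold=10.0, max_spread=50.0) -> bool:
    """Judge the template contrast from the average pixel value of each division.

    The template is split into a grid of divisions; contrast is judged low when
    any division's mean pixel value falls below `low_threshold`, or when the
    difference between the brightest and darkest division means exceeds
    `max_spread`.
    """
    rows = np.array_split(template, grid[0], axis=0)
    means = np.asarray([block.mean()
                        for row in rows
                        for block in np.array_split(row, grid[1], axis=1)],
                       dtype=float)
    return bool(means.min() < low_threshold or
                means.max() - means.min() > max_spread)
```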

When outline images of binary data are provided, the outline image processor preferably uses a variable threshold for converting the real picture images into the binary outline images, obtains the average gray-level value of the template image, and lowers the threshold when the average gray-level value falls below a predetermined limit. Thus, an outline image from which the moving object can be successfully detected is provided even when the contrast of the template image decreases.

Preferably, a moving object outline image memory is provided for storing a time series of the moving object outline images. In this respect, the template update means is configured to read a predetermined number of previous moving object outline images from the moving object outline image memory, to combine these outline images with the current moving object outline image, and to update the previous template image by replacing it with the combination. By means of this selective combination of outline images, the template image can be appropriately weighted so as to successfully detect the target moving object.

A preferred weighting scheme is realized in the template update means, which updates the template image whenever a new consecutive group of moving object outline images has accumulated to a predetermined number.

Another weighting scheme may be realized in the template update means, which combines only moving object outline images determined to be valid according to a predetermined criterion, so that the moving object is detected with improved accuracy.

For this purpose, the template update means is configured to calculate a pixel index, which is the number of pixels included in each of the moving object outline images that have a pixel value greater than zero. The criterion is defined such that the current moving object outline image is determined to be valid when the difference between the pixel index of the current moving object outline image and the pixel index of the previous moving object outline image is greater than a predetermined extent.

Alternatively, the criterion may be defined differently. In this case, the template update means is configured to calculate the standard deviation of the pixel values of the one of the real picture images corresponding to the current moving object outline image. The criterion is defined such that the current moving object outline image is determined to be valid when the difference between this standard deviation and the standard deviation calculated for the previous moving object outline image is greater than a predetermined extent.

Further, the criterion may be defined in terms of the number of pixels constituting the moving object outline in each of the moving object outline images. In this case, the template update means is configured to calculate the number of such pixels and to provide the criterion such that the current moving object outline image is determined to be valid when the difference between the number of pixels for the current moving object outline image and the number of pixels for the previous moving object outline image is greater than a predetermined extent.
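The three validity criteria above can be sketched as follows; the function names, the fixed extents, and the use of NumPy arrays are assumptions made for illustration only:

```python
import numpy as np

def pixel_index(outline: np.ndarray) -> int:
    """Number of pixels with a value greater than zero."""
    return int(np.count_nonzero(outline))

def is_valid_by_pixel_index(current, previous, extent=100):
    """Criterion based on the pixel index of the outline images."""
    return abs(pixel_index(current) - pixel_index(previous)) > extent

def is_valid_by_std(current_picture, previous_std, extent=5.0):
    """Criterion based on the pixel-value standard deviation of the real
    picture image corresponding to the current outline image."""
    return abs(float(current_picture.std()) - previous_std) > extent

def is_valid_by_outline_count(current_count, previous_count, extent=50):
    """Criterion based on the number of pixels forming the moving object outline."""
    return abs(current_count - previous_count) > extent
```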

The present invention also proposes the use of a matching means (71) for successfully determining the target moving object with reference to the template image. The matching means (71) is configured to: collect different local areas, i.e., unit areas, from the outline image, each unit area having the same size as the template image; calculate a correlation value for each of the different areas; and determine the local area having the maximum correlation value as the moving object outline image matching the template image. In response to the determination of the moving object outline image, the template update means operates to obtain the pixel value of each pixel in the moving object outline image and to add the pixel value to each corresponding pixel in the previous moving object outline images, thereby providing the updated template image.

The matching means (71) is configured to provide the above correlation value, which may have different definitions. For example, the correlation value may be defined as the sum, or the power sum, of the pixel values obtained from the template image for the pixels corresponding to the outline pixels in each of the local areas selected to constitute the outline image.

Further, the correlation value may be appropriately weighted in order to improve the accuracy of determining the target moving object. In this case, the outline image processor is configured to provide outline images of binary data, in which the pixel value "1" is assigned to the pixels constituting the outline of the outline image and the pixel value "0" is assigned to the remaining pixels. The matching means (71) is configured to select, from the template image, the pixels corresponding to the outline pixels in each of the local areas constituting the outline image, to obtain the number of pixels around each selected pixel that have a pixel value greater than "0", and to weight the pixel value of each selected pixel according to the number of pixels thus obtained. The matching means (71) is further configured to define the correlation value as the sum of the pixel values, thus weighted, of the selected pixels in the template image.

The correlation value may be given different weights. For example, the matching means (71) is configured to obtain: a first number of pixels in each of the local areas satisfying the condition that both the pixel in the local area and the corresponding pixel in the template image have a pixel value of "1" or more; and a second number of pixels in the template image having a pixel value of "1" or more. The matching means (71) then defines the above correlation value, for each of the local areas, as the ratio of the first number to the second number.

The matching means (71) may be configured to obtain: a first number of pixels in each of the local areas satisfying the condition that both the pixel in the local area and the corresponding pixel in the template image have a pixel value of "1" or more; a second number of pixels in each of the local areas satisfying the condition that both the pixel in the local area and the corresponding pixel in the template image have the pixel value "0"; and a third number of pixels in the template image having a pixel value of "1" or more. In this case, for each of the local areas, the correlation value is defined as the ratio of the first number plus the second number to the third number.

Further, the matching means (71) may be configured to obtain, for each pixel of the outline in each of the local areas constituting the outline image, the maximum pixel value from the set of pixels arranged around the corresponding selected pixel in the template image. In this case, in the matching means (71), the correlation value of each local area is defined as the sum of the maximum values thus obtained.

Further, the matching means (71) may be configured to obtain various parameters and to define the correlation value based on them. The parameters include: a first row index, which is the number of pixels having a pixel value greater than "0" arranged in each row of a local area of the outline image; a first column index, which is the number of pixels having a pixel value greater than "0" arranged in each column of the local area; a second row index, which is the number of pixels having a pixel value greater than "0" arranged in each row of the template image; and a second column index, which is the number of pixels having a pixel value greater than "0" arranged in each column of the template image. A row difference is obtained as the difference between the first row index and the second row index, and a column difference is obtained as the difference between the first column index and the second column index. The matching means (71) then obtains a total row value, which is the sum of the row differences obtained for the respective rows, and a total column value, which is the sum of the column differences obtained for the respective columns, and defines the correlation value, for each of the local areas, as the reciprocal of the sum of the total row value and the total column value.

The present invention further proposes to limit the search of the outline image to a limited search region, so as to reduce the time required to detect the target moving object. For this purpose, the device includes a position estimator for estimating, within the outline image, a limited search region used for detecting the target moving object. In this respect, a moving object extractor is configured to detect at least one possible moving object based on time-correlated differences between two or more successive outline images, and to provide at least one size-reduced cover portion covering the moving object. The position estimator is configured to: obtain a time series of the position data stored in a position data memory each time the moving object locator provides position data; calculate an estimated position of the target moving object based on two or more consecutive items of the time-series position data; set a detection zone of a predetermined size around the estimated position; and provide the limited search region as the minimum area that includes the at least one cover portion overlapping the detection zone. As a result, the moving object locator is configured to select local areas only within the limited search region, thereby reducing the time taken to determine the target moving object.

The position estimator may be configured to calculate an estimated moving speed of the moving object based on two or more consecutive items of the time-series position data, and to provide a detection zone whose size is proportional to the estimated speed of the moving object.

In the position estimator, the detection zone may be determined to have a size that is a function of the size of the template image.

In order to further limit the limited search region, the position estimator may be configured to: obtain a row index, which is the number of pixels having a pixel value of "1" or more arranged along each row of the limited search region; select a group of consecutive rows each having a row index greater than a predetermined row threshold; obtain a column index, which is the number of pixels having a pixel value of "1" or more arranged along each column of the limited search region; select a group of consecutive columns each having a column index greater than a predetermined column threshold; and restrict the limited search region to the area bounded by the selected consecutive row group and the selected consecutive column group.

In this respect, the position estimator may be configured, when two or more consecutive row groups have been selected, to validate only the group closer to the estimated position of the target moving object, and, when two or more consecutive column groups have been selected, to validate only the group closer to the estimated position of the target moving object.

Another limitation of the limited search region is proposed, in which the position estimator may be configured to: obtain a row index, which is the number of pixels having a pixel value of "1" or more arranged along each row of the limited search region; and select at least one consecutive row group in which each row has a row index greater than the predetermined row threshold. When two or more consecutive row groups are selected, only the one closer to the estimated position of the target moving object is validated. Subsequently, the column index, which is the number of pixels having a pixel value of "1" or more arranged along each column of the limited search region, is calculated only over the range bounded by the validated consecutive row group. Then, a consecutive column group in which each column has a column index greater than the predetermined column threshold is selected, so that the position estimator further restricts the limited search region to the area bounded by the selected consecutive column group and the validated row group. This scheme has the advantage of further reducing the amount of computation required to determine the moving object.

Alternatively, the position estimator may be configured to first analyze the column indices so as to validate one of the individual consecutive column groups, and to select the consecutive row group only with reference to the validated column group, for further limiting the limited search region.

These and other advantageous features of the present invention will become more apparent from the following description of the preferred embodiments when taken in conjunction with the accompanying drawings.

Description of drawings

Fig. 1 is a block diagram of a target moving object tracking device in accordance with a preferred embodiment of the present invention;

Fig. 2 is a schematic diagram illustrating how the moving object is tracked from successive outline images of the moving object with reference to the template image;

Fig. 3 illustrates time-series data of the outline images;

Fig. 4 illustrates the template image;

Fig. 5 is a flow chart illustrating the basic operation of the device;

Fig. 6 is a schematic diagram illustrating a moving object outline image extracted from the outline image;

Fig. 7 is a schematic diagram illustrating the template image compared with the moving object outline image of Fig. 6;

Figs. 8A to 8C are schematic diagrams respectively illustrating a scheme of weighting the template image depending on the arrangement of pixel values;

Figs. 9A and 9B are schematic diagrams respectively illustrating a portion of a local area of the outline image and the corresponding portion of the template image;

Figs. 10A and 10B are schematic diagrams respectively illustrating a portion of a local area of the outline image and the corresponding portion of the template image;

Fig. 11 is a graphical representation of pixel value distributions calculated for a local area of the outline image, to be compared with those of the template image in terms of correlation;

Fig. 12 is a graphical representation of pixel value distributions calculated for the template image;

Figs. 13 to 15 illustrate respective schemes of providing a limited search region within the outline image;

Fig. 16 is an explanatory view illustrating how the moving object is determined within the limited search region; and

Figs. 17 and 18 are explanatory views respectively illustrating how the limited search region is further restricted in the presence of a plurality of moving objects.

Embodiment

Referring to Fig. 1, there is shown a target moving object tracking device in accordance with a preferred embodiment of the present invention. The device serves as an intruder surveillance system that detects an intruder, i.e., a target moving object, tracks its movement, and displays the intruder in an enlarged view so that the intruder can be identified.

The device includes a video camera 10 which covers an observation area to take successive pictures. The pictures are converted by an A/D converter 12 into time-series numerical data of real picture images P covering the entire field of view, as shown in Fig. 2, and are stored in a picture image memory 20. The device includes a display 30 capable of showing the current picture image and an enlarged view of a selected portion of the current image for identifying the target moving object, as will be discussed later. Included in the device is an outline image processor 40 for generating an outline image from each of the real picture images in the memory 20; the time-series data of the resulting outline images is stored in an outline image memory 42. The outline images take the form of either binary image data or gray-level data. A moving object extractor 50 is provided for extracting, for each outline image, a local area, i.e., a unit area surrounding a moving object, the details of which will be discussed later. Initially, the local area is fed to a template memory 60 and stored there as a transitional template image, which is subsequently compared with an outline image at a moving object locator 70 in order to locate the target moving object within the frame of that outline image. As will be discussed later, the device includes a template update unit 62 for regularly updating the template image T by replacing it with a combination of a selected number of local areas subsequently determined at the moving object locator 70 to include the target moving object. The local area of an outline image determined to include the target moving object is hereinafter referred to as a moving object outline image. For this purpose, a moving object outline image memory 72 is provided for storing time-series data of moving object outline images MO1 to MO6, which are illustrated by way of example in Fig. 2 in comparison with the real picture images P and the template image T, and also shown in Fig. 3. It is noted in this respect that the updated template image T is defined as a gray-scale image having varying pixel values.

When the moving object extractor 50 extracts two or more parts each of which may include a possible moving object at the time of determining the original template image, the moving object locator 70 is caused to compare a predetermined number of successive outline images for each of the candidate parts, in order to determine a reliable part appearing continuously in these outline images, so that the template update unit 62 designates the reliable part as the original template image.

The moving object locator 70 is configured to obtain position data of the moving object located within the frame of each current outline image, and to send the position data to a position data memory 74. The position data is constantly read by an enlarged picture generator 80, which responds by reading the current picture image, selecting a portion therefrom, and generating an enlarged picture image of that portion at a predetermined magnification, for display of the enlarged picture image on the display 30, thereby notifying an administrator of the target moving object in an enlarged view.

The device further includes a position estimator 76 which calculates an estimated position of the target moving object based on two or more consecutive items of the time-series position data, and provides a limited search region around the estimated position within the outline image for use in detecting the target moving object, the details of which will be discussed later.

In brief, the device repeats a controlled loop which, as shown in Fig. 5, comprises the steps of: taking a real picture image (S1); generating its outline image (S2); calculating an estimated position of the moving object (S3); estimating a limited search region (S4); obtaining position data of the moving object (S5); updating the template image (S6); and displaying an enlarged view of the moving object (S7).
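The loop of Fig. 5 can be sketched roughly as follows; the component objects and their method names are hypothetical placeholders for the units 10-80 described above, not an API defined by the patent:

```python
def tracking_loop(camera, outline_processor, position_estimator,
                  locator, template_updater, enlarger, display):
    """One pass per captured frame, following steps S1-S7 of Fig. 5."""
    while True:
        picture = camera.capture()                          # S1: real picture image
        outline = outline_processor.make_outline(picture)   # S2: outline image
        estimate = position_estimator.estimate_position()   # S3: estimated position
        region = position_estimator.limit_search(outline, estimate)   # S4
        position, mo_outline = locator.locate(outline, region)        # S5
        template_updater.update(mo_outline)                 # S6: template update
        display.show(enlarger.enlarge(picture, position))   # S7: enlarged view
```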

The details of the several parts of the device will now be described. The outline image processor 40 is configured to obtain the contrast of the template image T after the template image is updated, and to provide an outline image defined by binary data when the contrast exceeds a predetermined reference, and otherwise an outline image defined by gray levels. In determining the kind of outline image, the outline image processor 40 detects an average pixel value, which is the mean of the pixel values assigned to the pixels in each of a plurality of predetermined divisions of the template image, and judges that the contrast is below the reference when any one of the divisions is detected to have an average pixel value below a threshold, or when the average pixel value detected for one of the divisions is lower than the average pixel value detected for another of the divisions by more than a predetermined extent. When generating the binary outline image, the outline image processor 40 is configured to rely on a variable threshold for converting the real picture image into the binary image, to obtain the average gray-level value of the template image, and to lower the threshold when the average gray-level value is below a predetermined limit required for successful comparison with the template image at the moving object locator 70. The binary outline image, also known as an edge image, is obtained by means of the well-known Sobel filter or a corresponding technique.

The moving object extractor 50 will now be described with regard to its function of extracting the moving outline image of a moving object. For each outline image extracted at time (T), the moving outline image is obtained with reference to two previous outline images extracted at times (T-ΔT2) and (T-ΔT1), respectively, and two subsequent outline images extracted at times (T+ΔT1) and (T+ΔT2), respectively. An AND operation is performed on the outline images extracted at times (T-ΔT2) and (T+ΔT2) to give a first logical product image PT1, while an AND operation is performed on the outline images extracted at times (T-ΔT1) and (T+ΔT1) to give a second logical product image PT2. The first logical product image PT1 is inverted and then ANDed with the outline image extracted at time T to give a third logical product image PT3, which includes: the outline of the moving object appearing in the outline image at time T, the background outline that was hidden behind the moving object at time (T-ΔT2) and appears at time (T), and the background outline that appears at time (T-ΔT2) and is hidden behind the moving object at time (T+ΔT2). Similarly, the second logical product image PT2 is inverted and then ANDed with the outline image at time T to give a fourth logical product image PT4, which includes: the outline of the moving object appearing in the outline image at time T, the background outline that was hidden behind the moving object at time (T-ΔT1) and appears at time (T), and the background outline that appears at time (T-ΔT1) and is hidden behind the moving object at time (T+ΔT1). Finally, an AND operation is performed on the third and fourth logical product images to extract the outline of the moving object. A moving object extractor with the above function is known in the art, for example as disclosed in Japanese Patent Publication No. 2004-265252, and therefore need not be described in further detail. In this respect, the present invention can utilize similarly configured moving object extractors of various kinds.
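A rough sketch of this five-frame logical-product extraction, assuming the outline images are boolean NumPy arrays; it follows the PT1-PT4 construction described above rather than the referenced publication itself:

```python
import numpy as np

def extract_moving_outline(o_m2, o_m1, o_0, o_p1, o_p2):
    """Extract the outline of the moving object at time T from five binary
    outline images taken at T-dT2, T-dT1, T, T+dT1 and T+dT2."""
    pt1 = o_m2 & o_p2      # first logical product image
    pt2 = o_m1 & o_p1      # second logical product image
    pt3 = ~pt1 & o_0       # inverted PT1 ANDed with the outline at T
    pt4 = ~pt2 & o_0       # inverted PT2 ANDed with the outline at T
    return pt3 & pt4       # moving object outline at time T
```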

The updating of the template image T will now be explained. The template update unit 62 is configured to read a predetermined number of previous moving object outline images from the moving object outline image memory 72 and to combine these images with the current moving object outline image determined at the moving object locator 70 to match the template image, thereby updating the previous template image T by replacing it with the image thus combined. As schematically shown in Fig. 7, this template image T assigns higher pixel values to the outlines of the relatively invariant parts of the human body, such as the head or shoulders, i.e., parts that are less likely to change shape during movement than other parts such as the arms or legs. As a result, the template image T comes to indicate well the major part of the moving object, providing a solid basis for reliably determining the moving object by comparing the template image with the current moving object outline image in terms of a correlation value, as will be discussed later. Further, the template image T thus updated can well compensate for any small omission of a part of the moving object in one moving object outline image by the corresponding part of one or more other moving object outline images. For example, a part of the belly hidden behind a rapidly waving hand in one moving object image can be supplemented by the corresponding part appearing in another moving object image, so that the template image approximates the target moving object as closely as possible, which results in accurate determination of the target moving object based on the comparison between the outline images and the template image.

Preferably, the template image is updated whenever a new consecutive group of moving object outline images has accumulated to a predetermined number. Further, the template update unit 62 is configured to combine only moving object outline images determined to be valid according to a predetermined criterion. One example of the criterion is based on a pixel index, which is the number of pixels having a pixel value greater than zero calculated for each of the moving object outline images; the current moving object outline image is determined to be valid when the difference between the pixel index of the current moving object outline image and the pixel index of the previous moving object outline image is greater than a predetermined extent. Another criterion is based on the standard deviation of the pixel values of the one of the real picture images corresponding to the current moving object outline image, and is defined such that the current moving object outline image is determined to be valid when the difference between this standard deviation and the standard deviation calculated for the previous moving object outline image is greater than a predetermined extent. Further, the criterion may be based on the number of pixels constituting the outline of the moving object calculated for each of the moving object outline images, and is defined such that the current moving object outline image is determined to be valid when the difference between the number of such pixels for the current moving object outline image and that for the previous moving object outline image is greater than a predetermined extent.

The template update unit 62 may be configured to weight the current moving outline image relative to the group of combined previous moving outline images using the following weighting equation:

T(x,y)=K·Vn(x,y)+(1-K)·Vp(x,y)

where T(x, y) denotes the pixel value at each pixel of the template image, Vn(x, y) denotes the pixel value at each pixel of the current moving object outline image, Vp(x, y) denotes the pixel value at each pixel of the group of combined previous moving object outline images, and K is a weighting coefficient.

Thus, by suitably selecting the weighting coefficient K, the template image T can be made to reflect the current moving object outline image more strongly or more weakly relative to the group of combined previous moving object outline images.
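A minimal sketch of this weighted update on NumPy arrays; the value of the coefficient K is an arbitrary example, not one prescribed by the patent:

```python
import numpy as np

def update_template(current_outline: np.ndarray,
                    previous_combined: np.ndarray,
                    k: float = 0.3) -> np.ndarray:
    """T(x,y) = K*Vn(x,y) + (1-K)*Vp(x,y), applied pixel-wise."""
    return (k * current_outline.astype(float)
            + (1.0 - k) * previous_combined.astype(float))
```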

The moving object locator 70 includes a matching unit 71 configured to collect different local areas from each current outline image, each local area having the same size as the template image, and to calculate, for each of the different local areas, a correlation value with respect to the template image T. The matching unit 71 operates to scan the entire area of the outline image by successively selecting different local areas, i.e., by shifting the local area by one pixel in the row or column direction, and to determine the local area having the maximum correlation value as the moving object outline image matching the template image T. As schematically shown in Fig. 6, when a moving object outline image MO of binary data is determined to match the current template image T, the template update unit 62 responds by obtaining the pixel value of each pixel in the matched moving object outline image MO and adding the pixel value to each corresponding pixel in the previous moving object outline images, thereby providing the updated template image T which, as schematically shown in Fig. 7, has pixels with correspondingly accumulated gray-level pixel values.

In the present invention, the correlation value is suitably selected from the definitions explained below. One example defines the correlation value as the sum of the pixel values obtained from the template image T for the pixels corresponding to the pixels of the moving object outline in each of the local areas selected to constitute the outline image. Another example defines the correlation value as the power sum of the pixel values obtained from the template image T for the pixels corresponding to the pixels of the moving object outline in each of the local areas constituting the outline image. The power sum is preferably the sum of the squares of the pixel values.
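A minimal sketch of these two definitions, assuming a binary local area and a gray-level template of equal size as NumPy arrays; the squared power sum follows the preference stated above:

```python
import numpy as np

def correlation_sum(local_area: np.ndarray, template: np.ndarray,
                    power: int = 1) -> float:
    """Sum (power=1) or power sum (power=2) of the template pixel values at
    the positions of the outline pixels of the local area (pixels > 0)."""
    mask = local_area > 0
    return float((template[mask].astype(float) ** power).sum())
```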

Further, when the outline image processor 40 provides outline images of binary data (in which the pixel value "1" is assigned to the pixels constituting the outline of the outline image, and the pixel value "0" is assigned to the remaining pixels), the correlation value may be weighted. In this case, the matching unit 71 is configured to select, from the template image T, the pixels corresponding to the outline pixels in each of the local areas constituting the outline image, to obtain the number of pixels around each selected pixel that have a pixel value greater than "0", and to weight the pixel value of each selected pixel according to the number of pixels thus obtained. For example, when all eight (8) peripheral pixels around a center pixel have the pixel value "0", as shown in Fig. 8A, the center pixel is assigned a small weight "1". When one of the eight peripheral pixels has a pixel value greater than "0", as shown in Fig. 8B, the center pixel is assigned a larger weight "2". When more than one of the eight peripheral pixels has a pixel value greater than "0", as shown in Fig. 8C, the center pixel is assigned a still larger weight "4". The pixel value "a" at each pixel is multiplied by the weight thus determined, so that the matching unit 71 determines the correlation value as the sum of the weighted pixel values of the selected pixels in the template image T, for consistent matching of the outline image against the template image T. Values other than the above values "1", "2", and "4" may be suitably selected for the weights.
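A sketch of this neighbour-weighted correlation, assuming a binary outline local area and a gray-level template as NumPy arrays of equal size; the weights 1, 2 and 4 follow Figs. 8A to 8C:

```python
import numpy as np

def weighted_correlation(local_area: np.ndarray, template: np.ndarray) -> float:
    """Sum of template pixel values at the outline pixels of the local area,
    each weighted 1, 2 or 4 by how many of its eight template neighbours are > 0."""
    total = 0.0
    for y, x in zip(*np.nonzero(local_area)):      # outline ("1") pixels
        patch = template[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        nonzero = np.count_nonzero(patch) - (1 if template[y, x] > 0 else 0)
        weight = 1 if nonzero == 0 else (2 if nonzero == 1 else 4)
        total += weight * float(template[y, x])
    return total
```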

The correlation value may be defined differently as the ratio of a first number of specific pixels to a second number of specific pixels. The first number is the number of pixels counted in each local area that satisfy the condition that both the pixel in the local area and the corresponding pixel in the template image have a pixel value of "1" or more, while the second number is the number of pixels counted in the template image that have a pixel value of "1" or more. The correlation value thus defined is particularly advantageous for accurately comparing the outline image with the template image when the number of "0" pixels in the outline image and the template image is greater than the number of pixels of value "1" or more, as exemplified in Fig. 9A for a local area PA of the outline image and in Fig. 9B for the template image T (where the black squares indicate "0" pixels, and the white squares indicate pixels of value "1" or more). In the illustrated case, the correlation value is expressed as the ratio 11/14 (= 79%), where the first number is "11" and the second number is "14". The correlation value is obtained for each local area PA in the outline image in order to determine the local area showing the maximum correlation value, i.e., the moving object outline image. With the correlation value thus defined, the moving object can be accurately detected relatively free from the influence of "0" pixels that do not constitute the outline of the moving object.

Alternatively, the correlation value may be defined in consideration of the number of pixels having the pixel value "0" in the local area PA and the template image T. In this case, the matching unit 71 is configured to obtain:

1) a first number of pixels counted in each of the local areas that satisfy the condition that both the pixel in the local area and the corresponding pixel in the template image have a pixel value of "1" or more;

2) a second number of pixels counted in each of the local areas that satisfy the condition that both the pixel in the local area and the corresponding pixel in the template image have the pixel value "0"; and

3) a third number of pixels counted in the template image that have a pixel value of "1" or more.

The matching unit 71 defines, for each of the local areas, a correlation value which is the ratio of the first number plus the second number to the third number. When the correlation value thus defined is applied to the example of Figs. 9A and 9B, the correlation value for the local area of Fig. 9A is 4.1 {= (11 + 47) / 14}.
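Both ratio-based correlations above can be sketched as follows on binary NumPy arrays; the split into two separate functions is an illustrative choice:

```python
import numpy as np

def ratio_correlation(local_area: np.ndarray, template: np.ndarray) -> float:
    """Ratio of pixels that are >= 1 in both images to pixels that are >= 1
    in the template (the 11/14 example of Figs. 9A and 9B)."""
    both_on = np.count_nonzero((local_area >= 1) & (template >= 1))
    template_on = np.count_nonzero(template >= 1)
    return both_on / template_on

def ratio_correlation_with_zeros(local_area: np.ndarray, template: np.ndarray) -> float:
    """(first number + second number) / third number, also counting pixels
    that are 0 in both images (the (11+47)/14 example)."""
    both_on = np.count_nonzero((local_area >= 1) & (template >= 1))
    both_off = np.count_nonzero((local_area == 0) & (template == 0))
    template_on = np.count_nonzero(template >= 1)
    return (both_on + both_off) / template_on
```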

Further, the matching unit 71 may define the correlation value depending on the peripheral pixels around each specific pixel under consideration in the template image. The matching unit 71 is configured to: select a group of outline pixels constituting the outline in each local area; obtain, for each of the outline pixels, the maximum pixel value from the set of pixels arranged around the corresponding specific pixel in the template image; and define the correlation value as the sum of the maximum values thus obtained for the local area. That is, as shown in Figs. 10A and 10B, each outline pixel (P3) of the local area PA is evaluated by the peripheral pixels around the specific pixel (T3) in the template image T, i.e., by the pixel (Tmax) having the maximum value "6". Thus, each local area is assigned a correlation value which is the sum of the maximum values so obtained, one maximum for each outline pixel in the local area, and the matching unit 71 determines the local area having the maximum correlation value as the moving object outline image.
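A sketch of this neighbourhood-maximum correlation, again assuming NumPy arrays; taking the 3x3 neighbourhood around the corresponding template pixel is an assumption consistent with Figs. 10A and 10B:

```python
import numpy as np

def max_neighbour_correlation(local_area: np.ndarray, template: np.ndarray) -> float:
    """Sum, over the outline pixels of the local area, of the maximum template
    pixel value found in the 3x3 neighbourhood of the corresponding position."""
    total = 0.0
    for y, x in zip(*np.nonzero(local_area)):
        patch = template[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        total += float(patch.max())
    return total
```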

Further, the matching unit 71 may define the correlation value depending on histograms (Py, Px; Ty, Tx) obtained for each local area PA under consideration and for the template image T, respectively, as shown in Figs. 11 and 12. For each local area PA, two histograms (Py, Px) are obtained, one along the Y axis and the other along the X axis. Similarly, two histograms (Ty, Tx) are obtained for the template image T, along the Y axis and the X axis, respectively. The Y-axis histogram (Py) is the distribution of a first row index, which is the number of pixels having a pixel value greater than "0" arranged in each row of the local area PA of the outline image, while the X-axis histogram (Px) is the distribution of a first column index, which is the number of pixels having a pixel value greater than "0" arranged in each column of the local area PA. The Y-axis histogram (Ty) is the distribution of a second row index, which is the number of pixels having a pixel value greater than "0" arranged in each row of the template image T, while the X-axis histogram (Tx) is the distribution of a second column index, which is the number of pixels having a pixel value greater than "0" arranged in each column of the template image T. Based on these histograms, the matching unit 71 calculates: a row difference, which is the difference between the first row index and the second row index; a column difference, which is the difference between the first column index and the second column index; a total row value, which is the sum of the row differences obtained for the respective rows; and a total column value, which is the sum of the column differences obtained for the respective columns. The matching unit 71 then defines, for each of the local areas, the correlation value as the reciprocal of the sum of the total row value and the total column value. Accordingly, the smaller the total row value and the total column value become, the larger the correlation value becomes, i.e., the closer a particular one of the local areas approaches the template image. With the correlation value thus defined, the amount of pixel value computation can be reduced considerably when the entire area of the outline image is scanned by shifting the local area by one pixel in the row or column direction. For example, when the local area is shifted by one pixel along a row, only the new column not covered by the previous local area needs its first column index calculated, while the first column indices of the remaining columns are available from the preceding step. The same applies to the case where the local area is shifted by one pixel along a column, in which case only the new row needs its first row index calculated.
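A sketch of this histogram-based correlation, assuming binary NumPy arrays of equal size; taking the absolute value of the row and column differences is an assumption, since some magnitude of the differences is needed for the sums:

```python
import numpy as np

def histogram_correlation(local_area: np.ndarray, template: np.ndarray) -> float:
    """Reciprocal of the summed row-index and column-index differences between
    the local area and the template image."""
    py = np.count_nonzero(local_area, axis=1)    # first row index, per row
    px = np.count_nonzero(local_area, axis=0)    # first column index, per column
    ty = np.count_nonzero(template, axis=1)      # second row index, per row
    tx = np.count_nonzero(template, axis=0)      # second column index, per column
    total_row = np.abs(py - ty).sum()            # total row value
    total_col = np.abs(px - tx).sum()            # total column value
    return 1.0 / (total_row + total_col + 1e-9)  # reciprocal (epsilon avoids /0)
```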

In the present invention, the scanning of the outline image is carried out within a limited search region based on the detected movement of the moving object, so as to increase the speed of detecting the moving object. For this purpose, the position estimator 76 cooperates with the moving object extractor 50 to provide the limited search region within the outline image, the moving object extractor 50 providing at least one size-reduced cover portion covering a possible moving object. Fig. 13 illustrates an example in which the moving object extractor 50 provides four (4) cover portions M1, M2, M3 and M4 within the outline image OL. The position estimator 76 is configured to obtain a time series of the position data each time the moving object locator 70 provides position data, and to calculate an estimated position P_E of the target moving object based on two or more consecutive items of the time-series position data. The position estimator 76 then sets a detection zone Z of a predetermined size around the estimated position P_E, and determines a limited search region LSR which is the minimum area including the cover portions M1, M2 and M3 overlapping the detection zone Z while excluding the cover portion M4 not overlapping the detection zone Z. After the limited search region LSR is determined, the position estimator 76 instructs the moving object locator 70 to select local areas only within the limited search region LSR.

In this case, the detection zone Z may have a size that varies in proportion to the speed at which the moving object moves from the previously located point (1) to the point (2). A non-square detection zone Z is also available, in which the x-axis and y-axis dimensions vary in proportion to the speed of the moving object to different degrees.

Alternatively, the detection zone Z may have a size that is a function of the size of the template image. For example, the detection zone Z is defined to have a size that is a multiple, greater than one, of the size of the template image. The multiple may further be selected so as to vary in proportion to the detected speed of the moving object, and may differ between the x axis and the y axis.

Fig. 14 illustrates another scheme in which the limited search region LSR is further restricted to FLSR by means of a filtering zone FZ formed around the estimated position P_E and having a size that is a function of the speed of the moving object. The limited search region LSR is then further restricted to the region FLSR it shares with the filtering zone FZ.

Alternatively, as shown in Fig. 15, the limited search region LSR may be restricted to FLSR by means of a template filtering zone TFZ formed around the estimated position P_E and having a size that is a function of the size of the template image.

In this respect, it is noted that the filtering zone FZ or the template filtering zone TFZ may be used alone as the limited search region.

Further, the limited search region LSR can be restricted to XLSR in consideration of histograms (Hy, Hx) of pixel values along the y axis and the x axis, as shown in Fig. 16. The histogram (Hy) is the y-axis distribution of a row index, which is the number of pixels having a pixel value of "1" or more arranged along each row of the limited search region LSR. The limited search region LSR is obtained by the scheme of Fig. 13, or even as the restricted limited search region FLSR obtained by the scheme described with reference to Fig. 14 or Fig. 15. Likewise, the histogram (Hx) is the distribution of a column index, which is the number of pixels having a pixel value of "1" or more arranged along each column of the limited search region LSR. The position estimator 76 analyzes the histograms (Hy, Hx) by comparing them with a predetermined row threshold TH_R and a predetermined column threshold TH_C, respectively, so as to select a consecutive row group G_Y in which each row has a row index greater than the row threshold TH_R, and a consecutive column group G_X in which each column has a column index greater than the column threshold TH_C. The position estimator 76 then restricts the limited search region LSR to the region XLSR bounded by the selected groups G_Y and G_X, thereby eliminating any possible noise and accurately detecting the target moving object.
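A sketch of this histogram-based restriction on a binary NumPy array; the threshold values are arbitrary, and selecting the longest qualifying run of rows or columns is a simplification (the patent selects the group closer to the estimated position when several groups qualify, as described next):

```python
import numpy as np

def longest_run(indices: np.ndarray, threshold: int):
    """Return (start, stop) of the longest run of consecutive positions whose
    index value exceeds the threshold, or None if no position qualifies."""
    above = indices > threshold
    best, start = None, None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if best is None or i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best

def restrict_region(region: np.ndarray, th_r: int = 3, th_c: int = 3):
    """Bound the search region by the consecutive row group and column group
    whose row/column indices exceed TH_R and TH_C (Fig. 16)."""
    hy = np.count_nonzero(region, axis=1)        # row index per row
    hx = np.count_nonzero(region, axis=0)        # column index per column
    rows, cols = longest_run(hy, th_r), longest_run(hx, th_c)
    if rows is None or cols is None:
        return region                            # nothing to restrict
    return region[rows[0]:rows[1], cols[0]:cols[1]]
```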

If two or more consecutive row groups are selected because they have row indices greater than the row threshold TH_R, or if two or more consecutive column groups are selected because they have column indices greater than the column threshold TH_C, as shown in Figs. 17 and 18, the position estimator validates only the set of groups (Gy2, Gx2) closer to the estimated position P_E, and accordingly restricts the limited search region to the region XLSR bounded by the validated groups.

Further, in the case where two or more consecutive row or column groups exist within the limited search region LSR, the position estimator 76 may be configured, when further restricting the limited search region LSR to XLSR, to first obtain one of the row indices and the column indices and, based on the analysis of that one, to omit the unnecessary computation of the other over the full region, in order to reduce the amount of computation. For ease of understanding, the scheme of calculating the row indices before the column indices will first be described with reference to Fig. 17. After the row indices are obtained for each row within the bounds of the limited search region LSR, the position estimator 76 selects the two consecutive row groups Gy1 and Gy2 in which each row has a row index greater than the row threshold TH_R, and validates only the group Gy2, which is closer to the estimated position P_E than the other group Gy1. Subsequently, the position estimator 76 obtains the column indices only within the range bounded by the validated group Gy2, selects the consecutive column group Gx2 in which each column has a column index greater than the column threshold TH_C, and further restricts the limited search region to the region XLSR bounded by the selected consecutive column group Gx2 and the validated consecutive row group Gy2.

Alternatively, as shown in Fig. 18, the limited search region LSR may first be analyzed with respect to the column indices so as to validate one of the consecutive column groups Gx1 and Gx2. With the consecutive column group Gx2 validated because it is closer to the estimated position P_E than the other group Gx1, the computation is carried out so as to obtain the row indices only within the range bounded by the validated consecutive column group Gx2, and the consecutive row group Gy2 in which each row has a row index greater than the row threshold TH_R is selected. The position estimator 76 then further restricts the limited search region LSR to the region XLSR bounded by the consecutive row group Gy2 thus selected and the validated consecutive column group Gx2.

In the above schemes explained with reference to Figs. 17 and 18, the terms "consecutive rows" and "consecutive columns" are not to be interpreted strictly in the present invention, but are defined as a series of rows or columns in which no more than a predetermined number of consecutive rows or columns have a row index or column index below the threshold, thereby allowing the brief insertion of rows or columns with a row index or column index below the threshold, so that possible noise or errors are eliminated and the moving object is accurately detected.

Although the above description is only an exemplary disclosure explaining various features for easy understanding of the basic concept of the present invention, it should be noted that any combination of the features described herein is also within the scope of the present invention.

Claims (25)

1. A target moving object tracking device, comprising:
a picture image memory (20) configured to store a time series of real picture images captured by a video camera (10) of an observation area which may cover a target moving object;
a display (30) configured to show a selectable one or more of said real picture images at a required magnification ratio;
a contour image processor (40) configured to generate a contour image from each of said real picture images in said picture image memory and to store the resulting time-series data of said contour images in a contour image memory (42);
a template memory (60) configured to store a template image used to identify said target moving object;
a moving object locator (70) configured to compare each of said contour images with said template image so as to detect, for each contour image, a partial area in match with said template image, said moving object locator obtaining position data of said target moving object within said observation area based on the partial area detected to be in match with said template image; and
an enlarged picture generator (80) configured to extract, based on said position data, an enlarged image from the part of said real picture image corresponding to the partial area of said contour image detected to be in match with said template image, and to show said enlarged image on said display,
wherein said moving object locator (70) is configured to extract a moving object outline image from each of said contour images corresponding to the partial area detected to be in match with said template image, said moving object outline image being the partial area of said contour image determined to include said target moving object,
a template updater (62) is provided for updating said template image by replacing said template image with a combination of the current one of said moving object outline images with one or more previous ones of said moving object outline images,
a moving object outline image memory (72) is provided for storing the time series of said moving object outline images,
said template updater (62) is configured to read a predetermined number of previous moving object outline images from said moving object outline image memory, combine said previous moving object outline images with said current moving object outline image, and update the previous template image by replacing the previous template image with said combination,
said moving object locator (70) comprises a matcher (71) configured to: collect different partial areas from said contour image, each of the same size as said template image; calculate a correlation value for each of said different partial areas; and determine the partial area having the maximum correlation value to be the moving object outline image in match with said template image, and
said template updater is configured to obtain the pixel value of each pixel in said moving object outline image and add said pixel value to the corresponding pixel of each of said previous moving object outline images.
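
For illustration only, a minimal sketch of the template update recited in claim 1 above, assuming the moving object outline images are equally sized NumPy arrays; the class name, the history size, and the integer accumulation are assumptions not taken from the claim.

```python
import numpy as np
from collections import deque

class TemplateUpdater:
    """Keeps a fixed number of previous moving object outline images and
    replaces the template with their pixel-wise combination."""
    def __init__(self, history_size=4):
        self.history = deque(maxlen=history_size)

    def update(self, current_outline):
        # add the pixel value of each pixel of the current outline image to
        # the corresponding pixels of the stored previous outline images
        combined = current_outline.astype(np.int32)
        for previous in self.history:
            combined = combined + previous
        self.history.append(current_outline.astype(np.int32))
        return combined          # the combination replaces the template image
```
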
2. The target moving object tracking device as claimed in claim 1, wherein
said contour image processor (40) is configured to provide said contour image defined by binary data.
3. The target moving object tracking device as claimed in claim 1, wherein
said contour image processor (40) is configured to provide said contour image defined by discrete gray-level data.
4. The target moving object tracking device as claimed in claim 1, wherein
said contour image processor (40) is configured to obtain the contrast of said template image, so as to provide said contour image defined by binary data when said contrast exceeds a predetermined reference and to provide said contour image defined by gray-scale data when said contrast is below said reference.
5. The target moving object tracking device as claimed in claim 4, wherein,
in order to determine said contrast of said template image, said contour image processor (40) is configured to detect an average pixel value, which is the mean of the pixel values of the pixels assigned to each of a plurality of subregions of said template image, and to judge that said contrast is below said reference when any one of said plurality of subregions is detected to have said average pixel value lower than a threshold, or when the extent to which said average pixel value detected for any one of said plurality of subregions is lower than said average pixel value of another of said plurality of subregions is greater than a predetermined extent.
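
For illustration only, a minimal sketch of the contrast check of claims 4 and 5 above, assuming the template is split into a 2x2 grid of subregions; the grid size and the two numeric thresholds are assumptions.

```python
import numpy as np

def contrast_is_low(template, value_threshold=30.0, gap_threshold=80.0, grid=(2, 2)):
    """Judge the template contrast from the mean pixel value of each subregion."""
    h, w = template.shape
    means = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            sub = template[i * h // grid[0]:(i + 1) * h // grid[0],
                           j * w // grid[1]:(j + 1) * w // grid[1]]
            means.append(float(sub.mean()))
    # low contrast: some subregion mean is below the threshold, or one
    # subregion mean falls below another by more than the allowed extent
    return min(means) < value_threshold or (max(means) - min(means)) > gap_threshold

# usage: provide a gray-level contour image when contrast_is_low(template)
# is True, and a binary contour image otherwise (claim 4).
```
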
6. The target moving object tracking device as claimed in claim 2, wherein
said contour image processor (40) is configured to provide a variable threshold used for converting said real picture image into the contour image of said binary data, said contour image processor (40) being configured to obtain an average gray-level value of said template image and to lower said threshold when said average gray-level value is below a predetermined limit.
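
For illustration only, a minimal sketch of the variable binarization threshold of claim 6 above; the gradient-magnitude edge operator and the numeric threshold values are assumptions, since the claim does not specify how the contour is extracted.

```python
import numpy as np

def binary_contour(real_image, template, base_threshold=40.0,
                   dark_threshold=20.0, gray_limit=50.0):
    """Convert a real picture image into a binary contour image, lowering the
    binarization threshold when the template is dark on average (claim 6)."""
    threshold = dark_threshold if template.mean() < gray_limit else base_threshold
    gy, gx = np.gradient(real_image.astype(float))      # assumed edge operator
    edge_strength = np.hypot(gx, gy)
    return (edge_strength > threshold).astype(np.uint8)  # 1 on contour pixels
```
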
7. The target moving object tracking device as claimed in claim 1, wherein
said template updater (62) is configured to update said template image whenever a new group of consecutive said moving object outline images has been accumulated to said predetermined number.
8. The target moving object tracking device as claimed in claim 1 or 7, wherein
said template updater (62) is configured to combine only those of said moving object outline images that are determined to be valid according to a predetermined criterion.
9. The target moving object tracking device as claimed in claim 8, wherein
said template updater (62) is configured to: calculate a pixel index, which is the number of pixels included in each of said moving object outline images and having a pixel value greater than zero; and provide said criterion such that said current moving object outline image is determined to be valid when the difference between said pixel index of said current moving object outline image and said pixel index of said previous moving object outline image is greater than a predetermined extent.
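
For illustration only, a minimal sketch of the pixel-index criterion of claim 9 above, written exactly as the claim states it (the current outline image is treated as valid when the change in the count of nonzero pixels exceeds the given extent); the numeric extent is an assumption.

```python
import numpy as np

def pixel_index(outline_image):
    """Number of pixels with a pixel value greater than zero."""
    return int(np.count_nonzero(outline_image))

def is_valid_outline(current_outline, previous_outline, extent=200):
    # claim 9: the current outline image is valid when the pixel-index
    # difference against the previous outline image exceeds the extent
    return abs(pixel_index(current_outline) - pixel_index(previous_outline)) > extent
```
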
10. The target moving object tracking device as claimed in claim 8, wherein
said template updater (62) is configured to calculate a standard deviation of the pixel values of one of said current moving object outline image and the corresponding real picture image, and to provide said criterion such that said current moving object outline image is determined to be valid when the difference between said standard deviation and the standard deviation calculated for said previous moving object outline image is greater than a predetermined extent.
11. The target moving object tracking device as claimed in claim 8, wherein
said template updater (62) is configured to calculate the number of pixels constituting the outline of the moving object in each of said moving object outline images, and to provide said criterion such that said current moving object outline image is determined to be valid when the difference between the number of said pixels of said current moving object outline image and the number of said pixels of said previous moving object outline image is greater than a predetermined extent.
12. The target moving object tracking device as claimed in claim 1, wherein
said matcher (71) is configured to define said correlation value as the sum of the pixel values obtained from said template image at the pixels corresponding to the pixels of the moving object outline in each of said different partial areas selected to constitute said contour image.
13. The target moving object tracking device as claimed in claim 1, wherein
said matcher (71) is configured to define said correlation value as the sum of powers of the pixel values obtained from said template image at the pixels corresponding to the pixels of the moving object outline in each of said different partial areas selected to constitute said contour image.
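
For illustration only, a minimal sketch of the correlation values of claims 12 and 13 above and of the sliding-window search of the matcher of claim 1, assuming binary partial areas (contour pixels nonzero) and a template whose pixel values were accumulated by the template updater; the exponent used for the sum-of-powers variant is an assumption.

```python
import numpy as np

def correlation_sum(partial_area, template):
    # claim 12: sum of the template pixel values at the contour pixels
    return float(template[partial_area > 0].sum())

def correlation_sum_of_powers(partial_area, template, power=2):
    # claim 13: sum of powers of the same pixel values
    return float((template[partial_area > 0].astype(float) ** power).sum())

def best_match(contour_image, template, correlation=correlation_sum):
    """Slide a template-sized window over the contour image and keep the
    partial area with the maximum correlation value (matcher of claim 1)."""
    th, tw = template.shape
    best_value, best_pos = -1.0, (0, 0)
    for y in range(contour_image.shape[0] - th + 1):
        for x in range(contour_image.shape[1] - tw + 1):
            value = correlation(contour_image[y:y + th, x:x + tw], template)
            if value > best_value:
                best_value, best_pos = value, (y, x)
    return best_pos, best_value
```
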
14. The target moving object tracking device as claimed in claim 1, wherein
said contour image processor (40) is configured to provide a contour image of binary data in which a pixel value of "1" is assigned to the pixels constituting the outline of said contour image and a pixel value of "0" is assigned to the remaining pixels of said contour image; said matcher (71) is configured to select, from said template image, the pixels corresponding to the pixels of the moving object outline in each of said different partial areas constituting said contour image, and to obtain, for each of the selected pixels, the number of pixels having a pixel value greater than "0", so that the pixel value of each of said selected pixels is weighted according to the number of pixels thus obtained; and said matcher (71) is configured to define said correlation value as the sum of the thus weighted pixel values of said selected pixels in said template image.
15. The target moving object tracking device as claimed in claim 1, wherein
said contour image processor (40) is configured to provide a contour image of binary data in which a pixel value of "1" is assigned to the pixels constituting the outline of said contour image and a pixel value of "0" is assigned to the remaining pixels of said contour image; said matcher (71) is configured to obtain: a first number of pixels, in each of said different partial areas, satisfying the condition that the pixel in that partial area and the corresponding pixel in said template image both have a pixel value of "1" or greater; and a second number of pixels having a pixel value of "1" or greater in said template image, and
said matcher (71) defines said correlation value of each of said different partial areas as the ratio of said first number to said second number.
16. The target moving object tracking device as claimed in claim 1, wherein
said contour image processor (40) is configured to provide a contour image of binary data in which a pixel value of "1" is assigned to the pixels constituting the outline of said contour image and a pixel value of "0" is assigned to the remaining pixels of said contour image,
said matcher (71) is configured to obtain: a first number of pixels, in each of said different partial areas, satisfying the condition that the pixel in that partial area and the corresponding pixel in said template image both have a pixel value of "1" or greater; a second number of pixels, in each of said different partial areas, satisfying the condition that the pixel in that partial area and the corresponding pixel in said template image both have a pixel value of "0"; and a third number of pixels having a pixel value of "1" or greater in said template image, and
said matcher (71) defines said correlation value of each of said different partial areas as the ratio of the sum of said first number and said second number to said third number.
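
For illustration only, a minimal sketch of the ratio-type correlation values of claims 15 and 16 above, assuming that "a pixel value of 1 or greater" simply means nonzero in the accumulated template.

```python
import numpy as np

def correlation_ratio_claim15(partial_area, template):
    first = np.count_nonzero((partial_area >= 1) & (template >= 1))
    second = np.count_nonzero(template >= 1)
    return first / second if second else 0.0

def correlation_ratio_claim16(partial_area, template):
    first = np.count_nonzero((partial_area >= 1) & (template >= 1))
    second = np.count_nonzero((partial_area == 0) & (template == 0))
    third = np.count_nonzero(template >= 1)
    return (first + second) / third if third else 0.0
```
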
17. The target moving object tracking device as claimed in claim 1, wherein
said contour image processor (40) is configured to provide a contour image of binary data in which a pixel value of "1" is assigned to the pixels constituting the outline of said contour image and a pixel value of "0" is assigned to the remaining pixels of said contour image; said matcher (71) is configured to: obtain, for each pixel of the moving object outline in each of said different partial areas constituting said contour image, the maximum of the pixel values within the set of pixels arranged around the corresponding selected pixel of said template image; and define said correlation value as the sum of the maxima thus obtained for each partial area.
18. The target moving object tracking device as claimed in claim 1, wherein
said contour image processor (40) is configured to provide a contour image of binary data in which a pixel value of "1" is assigned to the pixels constituting the outline of said contour image and a pixel value of "0" is assigned to the remaining pixels of said contour image, and said matcher (71) is configured to obtain:
a first row index, which is the number of pixels arranged in each row of each of said different partial areas of said contour image and having a pixel value greater than "0";
a first column index, which is the number of pixels arranged in each column of each of said different partial areas of said contour image and having a pixel value greater than "0";
a second row index, which is the number of pixels arranged in each row of said template image and having a pixel value greater than "0";
a second column index, which is the number of pixels arranged in each column of said template image and having a pixel value greater than "0";
a row difference, which is the difference between said first row index and said second row index for each row;
a column difference, which is the difference between said first column index and said second column index for each column;
a total row value, which is the sum of said row differences obtained for the respective rows; and
a total column value, which is the sum of said column differences obtained for the respective columns,
said matcher (71) defining said correlation value of each of said different partial areas as the inverse of the sum of said total row value and said total column value.
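
For illustration only, a minimal sketch of the row/column projection correlation of claim 18 above, assuming the row and column differences are taken as absolute values and a small epsilon guards against division by zero; both are assumptions.

```python
import numpy as np

def correlation_projection(partial_area, template, eps=1e-6):
    first_row = (partial_area > 0).sum(axis=1)    # first row indices
    first_col = (partial_area > 0).sum(axis=0)    # first column indices
    second_row = (template > 0).sum(axis=1)       # second row indices
    second_col = (template > 0).sum(axis=0)       # second column indices
    total_row = np.abs(first_row - second_row).sum()   # total row value
    total_col = np.abs(first_col - second_col).sum()   # total column value
    return 1.0 / (total_row + total_col + eps)    # inverse of the summed totals
```
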
19. The target moving object tracking device as claimed in claim 1, further comprising:
a position estimation device (76) for estimating a limited search region within said contour image used for the detection of said target moving object, and
a moving object extractor (50) configured to detect at least one possible moving object based on differences and temporal correlation between two or more of the consecutive contour images, and to provide at least one cover part of reduced size covering said moving object, wherein
said position estimation device (76) is configured to: obtain, whenever said moving object locator provides said position data, the time-series data of said position data stored in a position data memory (74); calculate an estimated position (P_E) of said target moving object based on two or more consecutive items of the time-series data of said position data; set a detection zone (Z) of a predetermined size around said estimated position; and provide said limited search region (LSR) as a minimum area including the at least one cover part overlapping with said detection zone, and
said moving object locator is configured to select said partial areas only within said limited search region.
20. The target moving object tracking device as claimed in claim 19, wherein
said position estimation device (76) is configured to: calculate an estimated moving speed of said moving object based on two or more consecutive items of the time-series data of said position data; and provide said detection zone (Z) with a size proportional to said estimated moving speed of said moving object.
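
For illustration only, a minimal sketch of the position estimation of claims 19 and 20 above, assuming constant-velocity extrapolation from the last two position data and a linear relation between estimated speed and detection zone size; the scale factors are assumptions.

```python
import numpy as np

def estimate_position_and_zone(positions, base_size=40.0, speed_scale=2.0):
    """`positions` is the stored time series of (x, y) position data; returns
    the estimated position P_E and the half-size of the detection zone Z."""
    p_prev = np.asarray(positions[-2], dtype=float)
    p_last = np.asarray(positions[-1], dtype=float)
    velocity = p_last - p_prev            # displacement per frame
    p_est = p_last + velocity             # constant-velocity prediction
    speed = float(np.linalg.norm(velocity))
    zone_half_size = base_size + speed_scale * speed   # grows with speed
    return p_est, zone_half_size
```
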
21. The target moving object tracking device as claimed in claim 19, wherein
said position estimation device (76) is configured to determine the size of said detection zone as a function of the size of said template image.
22. The target moving object tracking device as claimed in claim 20 or 21, wherein
said position estimation device (76) is configured to:
obtain a row index, which is the number of pixels arranged along each row of said limited search region and having a pixel value of "1" or greater;
select a continuous row group in which every row has a row index greater than a predetermined row threshold;
obtain a column index, which is the number of pixels arranged along each column of said limited search region and having a pixel value of "1" or greater;
select a continuous column group in which every column has a column index greater than a predetermined column threshold; and
further restrict said limited search region to the region bounded by the selected continuous row group and the selected continuous column group.
23. The target moving object tracking device as claimed in claim 22, wherein
said position estimation device (76) is configured to: when two or more continuous row groups have been selected, validate only the one of said groups that lies closer to said estimated position of said target moving object; and when two or more continuous column groups have been selected, validate only the one of said groups that lies closer to said estimated position of said target moving object.
24. The target moving object tracking device as claimed in claim 20 or 21, wherein
said position estimation device (76) is configured to:
obtain a row index, which is the number of pixels arranged along each row of said limited search region and having a pixel value of "1" or greater;
select at least one continuous row group in which every row has a row index greater than a predetermined row threshold;
when two or more continuous row groups have been selected, validate only the continuous row group that lies closer to said estimated position of said target moving object;
obtain a column index, which is the number of pixels arranged along each column of said limited search region and having a pixel value of "1" or greater, only within the range bounded by the validated continuous row group;
select a continuous column group in which every column has a column index greater than a predetermined column threshold; and
further restrict said limited search region to the region bounded by the selected continuous column group and said validated continuous row group.
25. The target moving object tracking device as claimed in claim 20 or 21, wherein
said position estimation device (76) is configured to:
obtain a column index, which is the number of pixels arranged along each column of said limited search region and having a pixel value of "1" or greater;
select at least one continuous column group in which every column has a column index greater than a predetermined column threshold;
when two or more continuous column groups have been selected, validate only the continuous column group that lies closer to said estimated position of said target moving object;
obtain a row index, which is the number of pixels arranged along each row of said limited search region and having a pixel value of "1" or greater, only within the range bounded by the validated continuous column group;
select a continuous row group in which every row has a row index greater than a predetermined row threshold; and
further restrict said limited search region to the region bounded by the selected continuous row group and said validated continuous column group.
CN2007101425988A (granted as CN101170683B); Target moving object tracking device; priority date 2006-10-27; filing date 2007-08-29

Priority Applications (6)

Application Number; Priority Date; Filing Date; Title
JP2006-293079; 2006-10-27
JP2006293079A (JP4725490B2); 2006-10-27; 2006-10-27; Automatic tracking method
JP2006293078A (JP4915655B2); 2006-10-27; 2006-10-27; Automatic tracking device
JP2006-293078; 2006-10-27
JP2007110915A (JP4867771B2); 2007-04-19; 2007-04-19; Template matching device
JP2007-110915; 2007-04-19

Publications (2)

Publication Number; Publication Date
CN101170683A; 2008-04-30
CN101170683B; 2010-09-08

Family

ID=39391118

Family Applications (1)

Application Number; Title; Priority Date; Filing Date
CN2007101425988A (CN101170683B); Target moving object tracking device; 2006-10-27; 2007-08-29

Country Status (2)

Country Link
JP (1) JP4915655B2 (en)
CN (1) CN101170683B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3254464B2 (en) * 1992-07-13 2002-02-04 株式会社日立製作所 Vehicle recognition device and moving object recognition method
JP3481430B2 (en) * 1997-09-11 2003-12-22 富士通株式会社 Mobile tracking device
JPH11252587A (en) * 1998-03-03 1999-09-17 Matsushita Electric Ind Co Ltd Object tracking device
JP3437555B2 (en) * 2001-03-06 2003-08-18 キヤノン株式会社 Specific point detection method and device
JP4132725B2 (en) * 2001-05-23 2008-08-13 株式会社リコー Image binarization apparatus, image binarization method, and image binarization program
JP3857558B2 (en) * 2001-10-02 2006-12-13 株式会社日立国際電気 Object tracking method and apparatus
JP4586578B2 (en) * 2005-03-03 2010-11-24 株式会社ニコン Digital camera and program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6426718B1 (en) * 2000-03-14 2002-07-30 The Boeing Company Subaperture processing for clutter reduction in synthetic aperture radar images of ground moving targets
CN1489112A (en) * 2002-10-10 2004-04-14 北京中星微电子有限公司 Sports image detecting method
EP1526733A1 (en) * 2003-10-21 2005-04-27 Ankri, Rénald Automatic method for controlling the direction of the optical axis and the angle of the field of view of an electronic camera, especially for video surveilance using automatic tracking
EP1703723A2 (en) * 2005-03-15 2006-09-20 Fujinon Corporation Autofocus system
CN1794010A (en) * 2005-12-19 2006-06-28 北京威亚视讯科技有限公司 Position posture tracing system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ma Junfang. Research on Moving Target Tracking Algorithms. China Excellent Master's and Doctoral Dissertations Full-text Database, Information Science and Technology Series, 2004, (4), I140-522. *

Also Published As

Publication number Publication date
JP2008113071A (en) 2008-05-15
CN101170683A (en) 2008-04-30
JP4915655B2 (en) 2012-04-11

Legal Events

Code; Title / Description
PB01, C06; Publication
SE01, C10; Entry into force of request for substantive examination
GR01, C14; Grant of patent or utility model (granted publication date: 2010-09-08)
CF01; Termination of patent right due to non-payment of annual fee (termination date: 2017-08-29)