CN101814145B - Target distinguishing method and target distinguishing device
- Publication number: CN101814145B (application CN200910005387A)
- Authority: CN (China)
- Prior art keywords: target, feature, difference, frame, distinguished
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a target distinguishing method and a target distinguishing device. The target distinguishing method comprises: a feature extraction step of extracting the feature of each target in a target area; a target difference calculation step of calculating the difference between the current feature of each target to be distinguished and the feature of each target in the target area extracted in the feature extraction step; and a difference distinguishing step of taking the differences between the current feature of each target to be distinguished and the features of the targets in the target area, calculated in the target difference calculation step, as a new feature of that target, and calculating the differences between the new features of the targets to be distinguished so as to distinguish them from one another.
Description
Technical Field
The present invention relates generally to pattern recognition, and more particularly to a target distinguishing method and apparatus for accurately distinguishing targets when two or more targets are present in pattern recognition. Still more particularly, the present invention relates to target similarity calculation, that is, to a method and apparatus for calculating the similarity between a target in a target region and a target of interest in image or video data and accurately distinguishing the targets on that basis.
Background
In pattern recognition, judging the similarity of multiple targets is a prerequisite for all subsequent work. Existing judging methods fall into two categories: methods based on difference measurement theory and methods based on learning theory.
Methods based on difference measurement theory use different distance measurement criteria to directly compare the differences between target features. In "Shape-based Pedestrian Detection and Tracking", Proc. of the IEEE Intelligent Vehicles Symposium, Paris, France, 2002, D. M. Gavrila and J. Giebel build a hierarchy of human contour templates and distinguish pedestrians from the background using Chamfer distance matching. Niu et al., in "Deformable-template-based multi-target recognition and localization", Journal of Electronics and Information Technology, Vol.28, No.6, P1026-1030, build multiple contour templates for aircraft and derive the differences between targets as a weighted average of three energy terms under geometric constraints of image gradient, image gray scale and shape. If a large number of templates must be established to cover the various poses of the described targets, many false hits arise in the background during matching. Jia Liu, Xiaofeng Tong et al., in "Automatic Player Detection, Labeling and Tracking in Broadcast Soccer Video", Pattern Recognition Letters, Volume 30, Issue 2 (January 2009), Pages 103-, and in "Multi-target detection and tracking [C]", European Conference on Computer Vision, Prague: Springer, 2004, 1: 28-39, measure the differences between players described by color information using the Bhattacharyya distance and sort the measurements. However, their work shows that this type of direct difference measurement is not accurate for measuring the differences between more similar targets, and gives only a coarse, class-level distinction.
In addition, the traditional Euclidean, Mahalanobis, and cosine distances are all used to directly measure the distances between multiple targets, or between a target and the background, and serve as the final difference representation.
Because targets vary widely, whether such a direct difference measure can best describe the size of the difference and give a reasonable distinction among multiple targets still needs to be confirmed by other means.
Methods based on learning theory generate a criterion for measuring the differences among samples by learning from a large number of training samples, and use it to distinguish different samples. For example, in "Peng Yang, Shiguang Shan, Wen Gao, Stan Z. Li, Dong Zhang, Face Recognition Using Ada-Boosted Gabor Features, Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR'04)" and "Juwei Lu, K. N. Plataniotis, et al., Ensemble-based Discriminant Learning with Boosting for Face Recognition, IEEE Transactions on Neural Networks, Vol.17, No.1, Jan. 2006", AdaBoost is used to train multiple classifiers to distinguish multiple face targets. The advantage of this type of method is that the differences among multiple samples can be evaluated after only one round of training. However, the difference between the target being evaluated and the training samples must not be large, and recomputing the classifier parameters for new samples is complicated, which makes such methods ill-suited to samples whose variation is not clear.
Disclosure of Invention
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to determine the key or critical elements of the present invention, nor is it intended to limit the scope of the present invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later.
In view of the above, the present invention provides a target distinguishing method and a target distinguishing apparatus for accurately distinguishing each target when the number of targets is 2 or more in pattern recognition.
According to an aspect of the present invention, there is provided a target discriminating apparatus including: the characteristic extraction module is configured to extract the characteristic of each target in the target area; the target difference calculation module is configured to calculate the difference between the current feature of each target to be distinguished and the feature of each target in the target area extracted by the feature extraction module; and a difference distinguishing module configured to take the difference between the current feature of each object to be distinguished calculated by the object difference calculating module and the feature of each object in the object region as a new feature of each object to be distinguished, and calculate the difference between the new features of the respective objects to be distinguished to distinguish the respective objects to be distinguished.
According to another aspect of the present invention, there is provided a target discriminating method including: a feature extraction step, which is to extract the features of each target in the target area; a target difference calculation step of calculating the difference between the current feature of each target to be distinguished and the feature of each target in the target area extracted in the feature extraction step; and a difference distinguishing step of taking the difference between the current feature of each object to be distinguished calculated by the object difference calculating step and the feature of each object in the object area as a new feature of each object to be distinguished, and calculating the difference between the new features of each object to be distinguished to distinguish each object to be distinguished.
Preferably, the feature of the current frame and the feature of the previous frame of each target in the target area are respectively extracted, the difference between the feature of the current frame of each target to be distinguished and the feature of the previous frame of each target in the target area is respectively calculated, the difference between the feature of the current frame of each target to be distinguished and the feature of the previous frame of each target in the target area is taken as the new feature of each target to be distinguished, and the difference between the new feature of the current frame of each target to be distinguished and the new feature of the previous frame of each target in the target area is respectively calculated.
According to an embodiment of the present invention, the object discriminating method, wherein the feature extracting step includes: a segmentation step, namely segmenting the target region into non-overlapping segmentation blocks; a feature calculation step of calculating a feature of each of the divided blocks divided by the division step; a target determination step of determining a range occupied by each target in the target area; and a feature generation step of extracting features of all the cut blocks within the range occupied by each object determined by the object determination step, based on the features of the cut blocks calculated by the feature calculation step, and combining to generate the features of each object.
Preferably, the feature calculation step calculates the features corresponding to all pixel points in each segmentation block; and the feature generation step extracts the features corresponding to all pixel points in the range occupied by each target and combines them to generate the feature of each target.
According to a preferred embodiment of the present invention, the target difference calculation step calculates the difference between the feature of the current frame of each target to be distinguished and the feature of the previous frame of each target in the target area according to the following formula:

$$D_{j,N}^{t} = \mathrm{Distance}\left(F_{j}^{t},\, F_{N}^{t-1}\right)$$

wherein $D_{j,N}^{t}$ is the difference between the feature of the target j to be distinguished in the current frame, namely the t-th frame, and the feature of the target N in the target area in the previous frame, namely the (t-1)-th frame, $\mathrm{Distance}$ is a difference evaluation mode, $F_{j}^{t}$ is the feature of the target j to be distinguished in the t-th frame, and $F_{N}^{t-1}$ is the feature of the target N in the target area in the (t-1)-th frame.
According to one embodiment of the invention, the difference distinguishing step takes $\tilde{F}_{j}^{t} = \left(D_{j,1}^{t}, D_{j,2}^{t}, \ldots, D_{j,N}^{t}\right)$ as the new feature of the current frame of each target j to be distinguished, and respectively calculates the difference between the new feature of the current frame of each target to be distinguished and the new feature of the previous frame of each target in the target area according to the following formula:

$$\mathit{newdifference}_{j,N} = \mathrm{Distance}_{new}\left(\tilde{F}_{j}^{t},\, \tilde{F}_{N}^{t-1}\right)$$

wherein $\mathit{newdifference}_{j,N}$ is the difference between the new feature of the target j in the t-th frame and the new feature of the target N in the target area in the (t-1)-th frame, $\mathrm{Distance}_{new}$ is a difference evaluation mode, $\tilde{F}_{j}^{t}$ is the new feature of the target j to be distinguished in the t-th frame, $\tilde{F}_{N}^{t-1} = \left(D_{N,1}^{t-1}, D_{N,2}^{t-1}, \ldots\right)$ is the new feature of the target N in the target area in the (t-1)-th frame, and $D_{N,i}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target i in the target area in the (t-1)-th frame.
Preferably, the new difference $\mathit{newdifference}_{j,N}$ is calculated according to the following formula, and the target N that minimizes $\mathit{newdifference}_{j,N}$ is taken as the distinguishing result for the target j to be distinguished in the current frame, namely the t-th frame:

$$\mathit{newdifference}_{j,N} = \mathrm{Distance}'\left(D_{j,N}^{t},\, D_{N,j}^{t-1},\, \left(D_{j,i}^{t}\right)_{i},\, \left(D_{N,i}^{t-1}\right)_{i}\right)$$

wherein $\mathrm{Distance}'$ is a difference evaluation mode, $D_{j,N}^{t}$ is the difference between the feature of the target j to be distinguished in the t-th frame and the feature of the target N in the target area in the (t-1)-th frame, $D_{N,j}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target j to be distinguished in the (t-1)-th frame, $D_{j,i}^{t}$ is the difference between the feature of the target j to be distinguished in the t-th frame and the feature of the target i in the target area in the (t-1)-th frame, and $D_{N,i}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target i in the target area in the (t-1)-th frame.
According to the preferred embodiment of the invention, the difference evaluation mode includes the Chamfer distance, an energy-weighted average under geometric constraints of image gradient, image gray scale and shape, the Bhattacharyya distance, the Euclidean distance, the Mahalanobis distance, and the cosine distance.
It can be seen that, according to the target distinguishing method and device provided by the present invention, the differences between targets are first measured directly using a method based on difference measurement theory; the other samples are then used as new reference marks, and the differences between the current sample and all other samples are used to complete the accurate distinction of the current sample. Therefore, the target distinguishing method and device of the present invention can give accurate distinguishing results for different samples without learning in advance.
According to the object discrimination method and the object discrimination apparatus proposed by the present invention, it is possible to calculate the similarity between an object in an object region and an object of interest in image or video data, and thereby accurately discriminate the object.
In addition, the invention also provides a computer program for realizing the target distinguishing method.
Furthermore, the present invention also provides a computer program product, at least in the form of a computer-readable medium, on which computer program code for implementing the above target distinguishing method is recorded.
Drawings
The invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like reference numerals are used throughout the figures to indicate like or similar parts. The accompanying drawings, which are incorporated in and form a part of this specification, illustrate preferred embodiments of the present invention and, together with the detailed description, serve to further explain the principles and advantages of the invention. In the drawings:
FIG. 1 illustrates an overall flow diagram of a target discrimination method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of object distribution for illustrating object feature extraction;
FIG. 3 shows a detailed flow chart of the feature fast extraction step according to an embodiment of the present invention;
FIG. 4 shows a schematic of the distribution of objects in a feature space;
FIG. 5 shows a schematic representation of object discrimination in a feature space according to the prior art;
FIG. 6 shows a schematic diagram of object discrimination in feature space according to an embodiment of the invention;
fig. 7 is a block diagram showing the configuration of a target discriminating apparatus according to an embodiment of the present invention;
FIG. 8 shows a block diagram of the structure of a feature extraction module according to an embodiment of the invention;
FIG. 9 illustrates an example of an original image and an object to be distinguished;
FIG. 10 shows a diagram of the results of object discrimination using Euclidean distance for the example shown in FIG. 9 according to the prior art;
FIG. 11 is a diagram illustrating the results of object discrimination for the example shown in FIG. 9 using the object discrimination method and apparatus of the present invention based on Euclidean distance; and
fig. 12 shows a block diagram of the structure of an information processing apparatus for implementing the object discriminating method according to the present invention.
Skilled artisans appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve the understanding of the embodiments of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described hereinafter with reference to the accompanying drawings. In the interest of clarity and conciseness, not all features of an actual implementation are described in the specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the device structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
For the purpose of illustrating the principles of the present invention, an embodiment of the present invention will be described hereinafter with the differentiation of soccer players on a soccer field as an application scene as shown in fig. 9, but it will be understood by those skilled in the art that the present invention is not limited to being applied only in the object differentiation scene as shown in fig. 9.
The general working principle of the object discriminating method according to an embodiment of the present invention will be described first with reference to the accompanying drawings, in particular fig. 1 to 6. FIG. 1 shows an overall flow diagram of a target discrimination method according to an embodiment of the invention.
As shown in fig. 1, the object discriminating method according to the embodiment of the present invention includes a feature fast extracting step S101, an object difference calculating step S103, and a difference accurately discriminating step S105.
First, in the feature fast extraction step S101, the feature of each target within the target region is extracted. Then, in the target difference calculation step S103, the difference between the current feature of each target to be distinguished and the feature of each target within the target region extracted in step S101 is calculated. Finally, in the difference accurate distinguishing step S105, the differences calculated in step S103 are taken as the new feature of each target to be distinguished, and the differences between the new features of the targets to be distinguished are calculated so as to distinguish them.
Preferably, according to an embodiment of the present invention, in the rapid feature extraction step S101, the features of the current frame and the features of the previous frame of each target in the target area are respectively extracted. The difference between the feature of the current frame of each object to be distinguished and the feature of the previous frame of each object within the object area is calculated in the object difference calculation step S103, respectively. In addition, in the difference accurate distinction step S105, the difference between the feature of the current frame of each object to be distinguished and the feature of the previous frame of each object within the object region is taken as the new feature of each object to be distinguished, and the difference between the new feature of the current frame of each object to be distinguished and the new feature of the previous frame of each object within the object region is calculated, respectively.
The following describes the feature extraction process using the target distribution diagram shown in fig. 2 as an example. As shown in fig. 2, a total of 4 targets are shown: box 1 corresponds to the region of the current target, boxes 2, 3 and 4 correspond to the regions of the interfering targets, and the two dot-dash boxes are the feature calculation ranges of the current target 1 and the interfering target 2, respectively.

Since the difference between the current target and each interfering target must be calculated in the difference accurate distinguishing step S105, fast and accurate calculation of the features of each target is a precondition for every subsequent step. In the example shown in fig. 2, when target 1 is processed with feature calculation range 11 and target 2 with feature calculation range 12, the calculation time is proportional to the area of the region covered by each range. With the existing feature calculation methods, the total feature calculation time for target 1 and target 2 is:

$$T_{Total} = (T_1 + T_2 + T_3 + T_4) + (T_1 + T_2 + T_3 + T_4)$$

wherein $T_i$ is the feature calculation time corresponding to target i. It can be seen that the overlapping portion of the two dot-dash ranges, and the portions where the target ranges overlap, undergo repeated feature extraction operations. By analogy, when an image region is the overlapping portion of N target feature calculation ranges, that region undergoes the same feature extraction operation N times.
In order to increase the speed of feature extraction, according to one embodiment of the present invention, a new fast feature extraction algorithm is proposed. Fig. 3 shows a detailed flowchart of a feature fast extraction method according to an embodiment of the present invention. As shown in fig. 3, the feature rapid extraction method according to this embodiment includes a segmentation step S301, a feature calculation step S303, a target determination step S305, and a feature generation step S307.
First, in the segmentation step S301, the target region is segmented into non-overlapping segmentation blocks, and then, in the feature calculation step S303, the feature of each non-overlapping segmentation block obtained in the segmentation step S301 is calculated.

Next, in the target determination step S305, the range occupied by each target within the target area is determined, and in the feature generation step S307, the features of all the segmentation blocks within the range occupied by each target determined in the target determination step S305 are extracted from the features of the non-overlapping segmentation blocks calculated in the feature calculation step S303, and combined to generate the feature of each target.
According to a preferred embodiment of the present invention, in the feature calculation step S303, the features corresponding to all pixel points in each segmentation block are calculated; in the feature generation step S307, the features corresponding to all pixel points in the range occupied by each target are extracted and combined to generate the feature of each target.

That is, according to the fast feature extraction method of the embodiment of the present invention, features are extracted once for the whole current target area and saved. In the example shown in fig. 2, the features corresponding to all pixel points in the image ranges where targets 1, 2, 3 and 4 are located are calculated, and the portions where target 1 overlaps with targets 2, 3 and 4 are calculated only once. Then, whenever the feature of an arbitrary area portion is required, it is extracted directly from the saved features without performing the feature calculation again.
In this way, when multiple targets are close to or overlap each other, the fast feature extraction method according to the present invention greatly reduces the time spent on feature extraction. The time saved is:

$$T_{saved} = (\text{number of ranges overlapping the region} - 1) \times T_{overlap}$$

wherein $T_{overlap}$ is the feature extraction time for the overlapping region.
For the example shown in fig. 2, with the fast feature extraction method according to the embodiment of the present invention, the feature calculation time becomes:

$$T'_{Total} = T_1 + T_2 + T_3 + T_4 - (T_1 \cap T_2) - (T_3 \cap T_4)$$

wherein $T_i \cap T_j$ denotes the feature calculation time of the overlapping portion of the ranges of targets i and j.
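To make the caching idea concrete, the following minimal Python sketch (not part of the patent; the block size, the per-block mean-color feature, and all function names are assumptions of this illustration) computes the block features of the whole target region once and then assembles each target's feature from the cache, so overlapping ranges never trigger recomputation:

```python
import numpy as np

def compute_block_features(image, block=8):
    """Compute the feature of every non-overlapping block once and cache it.
    The illustrative per-block feature here is simply the mean color."""
    h, w, c = image.shape
    feats = np.zeros((h // block, w // block, c))
    for by in range(h // block):
        for bx in range(w // block):
            patch = image[by * block:(by + 1) * block,
                          bx * block:(bx + 1) * block]
            feats[by, bx] = patch.reshape(-1, c).mean(axis=0)
    return feats

def target_feature(cache, box, block=8):
    """Assemble one target's feature from the cached block features.
    Overlapping targets reuse the same cache, so no block is recomputed."""
    x0, y0, x1, y1 = box  # box edges, assumed aligned to the block grid
    blocks = cache[y0 // block:y1 // block, x0 // block:x1 // block]
    return blocks.reshape(-1)

# One pass over the frame, then cheap per-target lookups, even for the
# overlapping ranges of targets 1 and 2 in the fig. 2 example.
frame = np.random.rand(128, 128, 3)        # stand-in for a video frame
cache = compute_block_features(frame)
boxes = [(0, 0, 32, 64), (16, 0, 48, 64)]  # two overlapping target ranges
features = [target_feature(cache, b) for b in boxes]
```

Under these assumptions, the work for the portion shared by targets 1 and 2 is paid once, which is exactly the $T_1 \cap T_2$ term subtracted above.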
After the features of each target have been extracted quickly in the fast feature extraction step S101, the target difference calculation step S103 calculates, using an existing difference evaluation method, the difference between each pair of targets that currently need to be distinguished.

Once feature extraction has been performed on each target, each target can be represented by a point in the feature space. Fig. 4 shows a schematic diagram of the distribution of the targets in the feature space. The points 1, 2, 3, 4, 5, ..., N to which arrows point in fig. 4 represent the targets that interfere with the target that currently needs to be accurately distinguished; the middle point j represents the target that currently needs to be distinguished.

The work required in the target difference calculation step S103 is to find a method for evaluating the difference between targets and to calculate the difference between each interfering target and the current target. The black line in fig. 4 corresponding to $D_{j,N}^{t}$ shows the difference between the feature of the target j to be distinguished in the current frame, namely the t-th frame, and the feature of the target N in the target area in the previous frame, namely the (t-1)-th frame.
According to a preferred embodiment of the present invention, in the target difference calculation step S103, the difference between the feature of the current frame of each target to be distinguished and the feature of the previous frame of each target in the target area is calculated according to the following formula:

$$D_{j,N}^{t} = \mathrm{Distance}\left(F_{j}^{t},\, F_{N}^{t-1}\right)$$

wherein $D_{j,N}^{t}$ is the difference between the feature of the target j to be distinguished in the current frame, namely the t-th frame, and the feature of the target N in the target area in the previous frame, namely the (t-1)-th frame, $\mathrm{Distance}$ is a difference evaluation mode, $F_{j}^{t}$ is the feature of the target j to be distinguished in the t-th frame, and $F_{N}^{t-1}$ is the feature of the target N in the target area in the (t-1)-th frame.
Existing target distinguishing methods use the directly calculated difference $D_{j,N}^{t}$ to distinguish targets; what they are concerned with is finding a suitable $\mathrm{Distance}$ so as to obtain the difference between targets j and N more accurately. Fig. 5 shows a schematic diagram of target distinction in the feature space according to the prior art.

As shown in fig. 5, the existing target distinguishing methods identify the current target j with the target corresponding to the black line segment with the smallest difference from j in the feature space, or with all targets within a range smaller than a given threshold (i.e., within the dotted circle centered on target j whose radius is the gray arrow). All regions within the dotted circle in fig. 5 are then judged to be similar, and all points on the dotted line are similar to the current target j to the same degree, which the existing methods cannot distinguish.
In contrast, the method according to the embodiment of the present invention builds on the existing methods: it not only compares the feature $F_{j}^{t}$ of the current target j in the current frame t with the known reference feature $F_{N}^{t-1}$ in the previous frame t-1, but also takes into account the differences between the interfering target N and the other interfering targets. By adding such reference points, the targets are distinguished accurately; this both adds a certain amount of distinguishing power and enlarges the range over which the differences between targets vary.
For example, in the difference distinguishing step S105, an embodiment of the present invention uses the pairwise differences $D_{j,i}^{t}$ between the current target j and all interfering targets i obtained in the target difference calculation step S103, takes $\tilde{F}_{j}^{t} = \left(D_{j,1}^{t}, D_{j,2}^{t}, \ldots, D_{j,N}^{t}\right)$ as the new feature of the current frame of each target j to be distinguished, and respectively calculates the difference between the new feature of the current frame of each target to be distinguished and the new feature of the previous frame of each target in the target area according to the following formula:

$$\mathit{newdifference}_{j,N} = \mathrm{Distance}_{new}\left(\tilde{F}_{j}^{t},\, \tilde{F}_{N}^{t-1}\right)$$

wherein $\mathit{newdifference}_{j,N}$ is the difference between the new feature of the target j in the t-th frame and the new feature of the target N in the target area in the (t-1)-th frame, $\mathrm{Distance}_{new}$ is a difference evaluation mode, $\tilde{F}_{j}^{t}$ is the new feature of the target j to be distinguished in the t-th frame, $\tilde{F}_{N}^{t-1} = \left(D_{N,1}^{t-1}, D_{N,2}^{t-1}, \ldots\right)$ is the new feature of the target N in the target area in the (t-1)-th frame, and $D_{N,i}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target i in the target area in the (t-1)-th frame.
Preferably, the new difference $\mathit{newdifference}_{j,N}$ is calculated according to the following formula, and the target N that minimizes $\mathit{newdifference}_{j,N}$ is taken as the distinguishing result for the target j to be distinguished in the current frame, namely the t-th frame:

$$\mathit{newdifference}_{j,N} = \mathrm{Distance}'\left(D_{j,N}^{t},\, D_{N,j}^{t-1},\, \left(D_{j,i}^{t}\right)_{i},\, \left(D_{N,i}^{t-1}\right)_{i}\right)$$

wherein $\mathrm{Distance}'$ is a difference evaluation mode, $D_{j,N}^{t}$ is the difference between the feature of the target j to be distinguished in the t-th frame and the feature of the target N in the target area in the (t-1)-th frame, $D_{N,j}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target j to be distinguished in the (t-1)-th frame, $D_{j,i}^{t}$ is the difference between the feature of the target j to be distinguished in the t-th frame and the feature of the target i in the target area in the (t-1)-th frame, and $D_{N,i}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target i in the target area in the (t-1)-th frame.
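Steps S103 and S105 can be sketched together in a few lines. The sketch below is a non-authoritative reading of the formulas above: the Euclidean distance stands in for $\mathrm{Distance}$ and $\mathrm{Distance}_{new}$, the product of the direct difference and the new-feature difference stands in for $\mathrm{Distance}'$ (one possible choice, consistent with the multiplicative combination described in connection with fig. 6 below), and all function and variable names are invented for illustration:

```python
import numpy as np

def distance(a, b):
    """Stand-in difference evaluation mode: Euclidean distance."""
    return float(np.linalg.norm(a - b))

def distinguish(current_feats, previous_feats):
    """For each target j in the current frame, return the index N of the
    previous-frame target it is identified with.

    current_feats[j]  : feature F_j^t of target j in frame t
    previous_feats[N] : feature F_N^{t-1} of target N in frame t-1
    """
    # Step S103: direct differences D^t_{j,N} = Distance(F_j^t, F_N^{t-1})
    D_t = np.array([[distance(fj, fN) for fN in previous_feats]
                    for fj in current_feats])
    # Reference differences among previous-frame targets, D^{t-1}_{N,i}
    D_prev = np.array([[distance(fN, fi) for fi in previous_feats]
                       for fN in previous_feats])
    results = []
    for j in range(len(current_feats)):
        new_feat_j = D_t[j]                     # new feature of target j
        scores = []
        for N in range(len(previous_feats)):
            new_feat_N = D_prev[N]              # new feature of target N
            # One plausible Distance': the direct difference multiplied by
            # the difference between the two new features.
            scores.append(D_t[j, N] * distance(new_feat_j, new_feat_N))
        results.append(int(np.argmin(scores)))  # minimizing N is the match
    return results
```

Under these assumptions, two previous-frame targets that are tied under the direct difference are still separated by their differing distance patterns to the remaining targets, which is the effect fig. 6 illustrates.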
Fig. 6 illustrates the target distinguishing method according to an embodiment of the present invention in the feature space. In fig. 6, only the differences $D_{j,1}^{t}$, $D_{j,3}^{t}$ and $D_{j,N}^{t}$ between the interfering points 1, 3 and N and the current target are shown, indicated by circles: for each interfering target N, a circle is drawn with $D_{j,N}^{t}$ as the radius and the corresponding interfering target N as the center.

It can be understood intuitively from the figure that, in the feature space, circles are drawn centered on the interfering targets with the respective $D_{j,N}^{t}$ as radii, and the intersection point of the circles is the most likely position of the target. When the feature point used for the similarity calculation moves away from this position, not only does $D_{j,N}^{t}$ grow, $\mathrm{Distance}_{new}$ grows as well, and since the two are multiplied, the range over which the difference varies is larger than in the result of the target difference calculation step S103 alone. The nearest target is no longer determined by a circular contour as in the existing methods: only one point is nearest to the target j to be distinguished, and all other points differ from it. In this way, the ability to distinguish differences between targets in the feature space is greatly increased.
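To put numbers on this (the values are invented for illustration and assume the multiplicative form of $\mathrm{Distance}'$ used in the sketch above): suppose the current target j is at direct difference 5 from both previous-frame targets A and B, so the circle criterion of fig. 5 cannot separate them, and suppose a third reference target i satisfies

$$
\begin{aligned}
D^{t}_{j,i} &= 2, \qquad D^{t-1}_{A,i} = 2, \qquad D^{t-1}_{B,i} = 9,\\
\mathit{newdifference}_{j,A} &= D^{t}_{j,A}\cdot\left|D^{t}_{j,i}-D^{t-1}_{A,i}\right| = 5 \cdot 0 = 0,\\
\mathit{newdifference}_{j,B} &= D^{t}_{j,B}\cdot\left|D^{t}_{j,i}-D^{t-1}_{B,i}\right| = 5 \cdot 7 = 35.
\end{aligned}
$$

The tie is broken and j is identified with A.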
It should be noted that, according to the preferred embodiment of the present invention, the difference evaluation mode need not be specifically limited; various existing difference evaluation modes may be used in the target distinguishing method of the present invention, such as the Chamfer distance, an energy-weighted average under geometric constraints of image gradient, image gray scale and shape, the Bhattacharyya distance, the Euclidean distance, the Mahalanobis distance, the cosine distance, and so on.
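Because the method leaves the difference evaluation mode open, the `distance` helper used in the sketches above can be replaced by any of the listed measures. The snippet below sketches three of them with textbook formulas (an assumption of this illustration, not a prescription of the patent; the Bhattacharyya variant shown is the Hellinger form and presumes the features are normalized histograms):

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def cosine_distance(a, b):
    # 1 minus the cosine similarity; assumes non-zero feature vectors
    return 1.0 - float(np.dot(a, b) /
                       (np.linalg.norm(a) * np.linalg.norm(b)))

def bhattacharyya(a, b):
    # Hellinger form; assumes a and b are non-negative and sum to 1
    bc = float(np.sum(np.sqrt(a * b)))   # Bhattacharyya coefficient
    return float(np.sqrt(max(0.0, 1.0 - bc)))
```

Any of these can replace the `distance` helper in the sketches above without changing the rest of the procedure.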
The processing flow of the object discriminating method according to the embodiment of the present invention is described above. The operation principle of the object discriminating apparatus according to the embodiment of the present invention will be described with reference to fig. 7 and 8.
Fig. 7 is a block diagram illustrating a structure of a target discriminating apparatus according to an embodiment of the present invention. As shown in fig. 7, the object discriminating apparatus according to this embodiment includes a feature extraction module 71, an object difference calculation module 73, and a difference discrimination module 75.
First, the feature extraction module 71 extracts the feature of each target in the target region. Then, the target difference calculation module 73 calculates the difference between the current feature of each target to be distinguished and the feature of each target in the target region extracted by the feature extraction module 71. Finally, the difference distinguishing module 75 takes the differences calculated by the target difference calculation module 73 as the new feature of each target to be distinguished, and calculates the differences between the new features of the targets to be distinguished so as to distinguish them.
According to one embodiment of the present invention, the feature extraction module 71 respectively extracts a feature of a current frame and a feature of a previous frame of each object in the object region, the object difference calculation module 73 respectively calculates a difference between the feature of the current frame of each object to be distinguished and the feature of the previous frame of each object in the object region, and the difference distinguishing module 75 respectively calculates a difference between the new feature of the current frame of each object to be distinguished and the new feature of the previous frame of each object in the object region, taking the difference between the feature of the current frame of each object to be distinguished and the feature of the previous frame of each object in the object region as the new feature of each object to be distinguished.
Fig. 8 shows a block diagram of the structure of the feature extraction module 71 according to the embodiment of the present invention. As shown in fig. 8, the feature extraction module 71 includes a segmentation unit 81, a feature calculation unit 83, a target determination unit 85, and a feature generation unit 87.
First, the segmentation unit 81 segments the target region into non-overlapping segments, and then the feature calculation unit 83 calculates the feature of each of the non-overlapping segments segmented by the segmentation unit 81.
Next, the target determination unit 85 determines the range occupied by each target within the target area, and the feature generation unit 87 extracts, from the features of the non-overlapping segmentation blocks calculated by the feature calculation unit 83, the features of all the segmentation blocks within the range occupied by each target determined by the target determination unit 85, and combines them to generate the feature of each target.
According to a preferred embodiment of the present invention, the feature calculating unit 83 calculates features corresponding to all pixels in each non-overlapping segmentation block, and the feature generating unit 87 extracts features corresponding to all pixels in a range occupied by each object and combines the extracted features to generate features of each object.
The specific processing procedures of the units such as the segmentation unit 81, the feature calculation unit 83, the target determination unit 85, and the feature generation unit 87 in the feature extraction module 71 are similar to the processing of the steps such as the segmentation step S301, the feature calculation step S303, the target determination step S305, and the feature generation step S307 in the feature extraction method described with reference to fig. 3, respectively, and will not be described in further detail here.
According to a preferred embodiment of the present invention, the target difference calculation module 73 calculates the difference between the feature of the current frame of each target to be distinguished and the feature of the previous frame of each target in the target area according to the following formula:

$$D_{j,N}^{t} = \mathrm{Distance}\left(F_{j}^{t},\, F_{N}^{t-1}\right)$$

wherein $D_{j,N}^{t}$ is the difference between the feature of the target j to be distinguished in the current frame, namely the t-th frame, and the feature of the target N in the target area in the previous frame, namely the (t-1)-th frame, $\mathrm{Distance}$ is a difference evaluation mode, $F_{j}^{t}$ is the feature of the target j to be distinguished in the t-th frame, and $F_{N}^{t-1}$ is the feature of the target N in the target area in the (t-1)-th frame.
Furthermore, in accordance with a preferred embodiment of the present invention, the difference distinguishing module 75 takes $\tilde{F}_{j}^{t} = \left(D_{j,1}^{t}, D_{j,2}^{t}, \ldots, D_{j,N}^{t}\right)$ as the new feature of the current frame of each target j to be distinguished, and respectively calculates the difference between the new feature of the current frame of each target to be distinguished and the new feature of the previous frame of each target in the target area according to the following formula:

$$\mathit{newdifference}_{j,N} = \mathrm{Distance}_{new}\left(\tilde{F}_{j}^{t},\, \tilde{F}_{N}^{t-1}\right)$$

wherein $\mathit{newdifference}_{j,N}$ is the difference between the new feature of the target j in the t-th frame and the new feature of the target N in the target area in the (t-1)-th frame, $\mathrm{Distance}_{new}$ is a difference evaluation mode, $\tilde{F}_{j}^{t}$ is the new feature of the target j to be distinguished in the t-th frame, $\tilde{F}_{N}^{t-1} = \left(D_{N,1}^{t-1}, D_{N,2}^{t-1}, \ldots\right)$ is the new feature of the target N in the target area in the (t-1)-th frame, and $D_{N,i}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target i in the target area in the (t-1)-th frame.
Preferably, the new difference $\mathit{newdifference}_{j,N}$ is calculated according to the following formula:

$$\mathit{newdifference}_{j,N} = \mathrm{Distance}'\left(D_{j,N}^{t},\, D_{N,j}^{t-1},\, \left(D_{j,i}^{t}\right)_{i},\, \left(D_{N,i}^{t-1}\right)_{i}\right)$$

wherein $\mathrm{Distance}'$ is a difference evaluation mode, $D_{j,N}^{t}$ is the difference between the feature of the target j to be distinguished in the t-th frame and the feature of the target N in the target area in the (t-1)-th frame, $D_{N,j}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target j to be distinguished in the (t-1)-th frame, $D_{j,i}^{t}$ is the difference between the feature of the target j to be distinguished in the t-th frame and the feature of the target i in the target area in the (t-1)-th frame, and $D_{N,i}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target i in the target area in the (t-1)-th frame.
According to the target distinguishing apparatus of this embodiment, the target N corresponding to the minimum $\mathit{newdifference}_{j,N}$ is the distinguishing result for the target j to be distinguished in the current frame, namely the t-th frame.
Also, in the target distinguishing apparatus according to the embodiment of the present invention, the difference evaluation mode may include, but is not limited to, the Chamfer distance, an energy-weighted average under geometric constraints of image gradient, image gray scale and shape, the Bhattacharyya distance, the Euclidean distance, the Mahalanobis distance, the cosine distance, and so on.
It is also noted here that the specific processing procedures of the feature extraction module 71, the target difference calculation module 73 and the difference distinguishing module 75 in the target distinguishing apparatus according to the present invention are similar to those of the feature fast extraction step S101, the target difference calculation step S103 and the difference accurate distinguishing step S105, respectively, in the target distinguishing method described with reference to fig. 1, and are not described in further detail here.
An example of the application of the object discrimination method and the object discrimination apparatus according to the present invention to the discrimination of soccer players is given below. For example, as shown in fig. 9, an example of an original image of a soccer player on a soccer field and a target to be distinguished is shown. Fig. 10 shows a diagram of the results of object discrimination using euclidean distance for the example shown in fig. 9 according to the prior art. In addition, fig. 11 shows a schematic diagram of the result of object discrimination for the example shown in fig. 9 using the object discrimination method and apparatus of the present invention based on euclidean distance.
It is known that when the camera is far from the players, players on the same team are very similar and cannot be well distinguished using existing target distinguishing methods. After processing with the target distinguishing technique of the present invention, the distinguishing effect is very marked. As can be seen from fig. 10, the targets distinguished by the Euclidean distance alone do not differ significantly, and the current target (the target selected by the thick black box in the figure) cannot be clearly distinguished. Fig. 11 shows the results of distinguishing each target with the target distinguishing method and device of the present invention, again using the Euclidean distance as the difference evaluation; darker portions indicate higher similarity and lighter portions indicate larger difference. It can be seen that the center of the target area portion in the thick black frame has the greatest similarity in the whole image (indicated by the black dot in the figure), that the difference increases rapidly as the position deviates (the black dots shrink), and that the difference is large (light) everywhere else.
While the principles of the invention have been described in connection with specific embodiments thereof, it should be noted that it will be understood by those skilled in the art that all or any of the steps or elements of the method and apparatus of the invention may be implemented in any computing device (including processors, storage media, etc.) or network of computing devices, in hardware, firmware, software, or any combination thereof, which will be within the skill of those in the art after reading the description of the invention and applying their basic programming skills.
Thus, the objects of the invention may also be achieved by running a program or a set of programs on any computing device. The computing device may be a general purpose device as is well known. The object of the invention is thus also achieved solely by providing a program product comprising program code for implementing the method or the apparatus. That is, such a program product also constitutes the present invention, and a storage medium storing such a program product also constitutes the present invention. It is to be understood that the storage medium may be any known storage medium or any storage medium developed in the future.
In the case where the embodiment of the present invention is implemented by software and/or firmware, a program constituting the software is installed from a storage medium or a network to a computer having a dedicated hardware structure, such as a general-purpose personal computer 700 shown in fig. 12, which is capable of executing various functions and the like when various programs are installed.
In fig. 12, a Central Processing Unit (CPU)701 performs various processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 to a Random Access Memory (RAM) 703. In the RAM 703, data necessary when the CPU 701 executes various processes and the like is also stored as necessary. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output interface 705 is also connected to the bus 704.
The following components are connected to the input/output interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, and the like. The communication section 709 performs communication processing via a network such as the internet.
A driver 710 is also connected to the input/output interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that the computer program read out therefrom is mounted in the storage section 708 as necessary.
In the case where the above-described series of processes is realized by software, a program constituting the software is installed from a network such as the internet or a storage medium such as the removable medium 711.
It should be understood by those skilled in the art that such a storage medium is not limited to the removable medium 711 shown in fig. 12 in which the program is stored, distributed separately from the apparatus to provide the program to the user. Examples of the removable medium 711 include a magnetic disk (including a floppy disk (registered trademark)), an optical disk (including a compact disc-read only memory (CD-ROM) and a Digital Versatile Disc (DVD)), a magneto-optical disk (including a mini-disk (MD) (registered trademark)), and a semiconductor memory. Alternatively, the storage medium may be the ROM 702, a hard disk included in the storage section 708, or the like, in which programs are stored and which are distributed to users together with the apparatus including them.
It is further noted that in the apparatus and method of the present invention, it is apparent that each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be regarded as equivalents of the present invention. Also, the steps of executing the series of processes described above may naturally be executed chronologically in the order described, but need not necessarily be executed chronologically. Some steps may be performed in parallel or independently of each other.
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element defined by the phrase "comprising a(n) ..." does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Claims (12)
1. A target distinguishing device, comprising:
the characteristic extraction module is configured to extract the characteristic of each target in the target area;
the target difference calculation module is configured to calculate the difference between the current feature of each target to be distinguished and the feature of each target in the target area extracted by the feature extraction module; and
the difference distinguishing module is configured to take the difference between the current feature of each target to be distinguished and the feature of each target in the target area, which is calculated by the target difference calculating module, as a new feature of each target to be distinguished, and calculate the difference between the new features of each target to be distinguished so as to distinguish each target to be distinguished;
wherein the target difference calculation module respectively calculates the difference between the feature of the current frame of each target to be distinguished and the feature of the previous frame of each target in the target area according to the following formula:

$$D_{j,N}^{t} = \mathrm{Distance}\left(F_{j}^{t},\, F_{N}^{t-1}\right)$$

wherein $D_{j,N}^{t}$ is the difference between the feature of the target j to be distinguished in the current frame, namely the t-th frame, and the feature of the target N in the target area in the previous frame, namely the (t-1)-th frame, $\mathrm{Distance}$ is a difference evaluation mode, $F_{j}^{t}$ is the feature of the target j to be distinguished in the t-th frame, and $F_{N}^{t-1}$ is the feature of the target N in the target area in the (t-1)-th frame;
wherein the difference distinguishing module takes $\tilde{F}_{j}^{t} = \left(D_{j,1}^{t}, D_{j,2}^{t}, \ldots, D_{j,N}^{t}\right)$ as the new feature of the current frame of each target j to be distinguished, and respectively calculates the difference between the new feature of the current frame of each target to be distinguished and the new feature of the previous frame of each target in the target area according to the following formula:

$$\mathit{newdifference}_{j,N} = \mathrm{Distance}_{new}\left(\tilde{F}_{j}^{t},\, \tilde{F}_{N}^{t-1}\right)$$

wherein $\mathit{newdifference}_{j,N}$ is the difference between the new feature of the target j in the t-th frame and the new feature of the target N in the target area in the (t-1)-th frame, $\mathrm{Distance}_{new}$ is a difference evaluation mode, $\tilde{F}_{j}^{t}$ is the new feature of the target j to be distinguished in the t-th frame, $\tilde{F}_{N}^{t-1} = \left(D_{N,1}^{t-1}, D_{N,2}^{t-1}, \ldots\right)$ is the new feature of the target N in the target area in the (t-1)-th frame, and $D_{N,i}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target i in the target area in the (t-1)-th frame.
2. The target distinguishing device according to claim 1, wherein
The feature extraction module respectively extracts the features of the current frame and the features of the previous frame of each target in the target area;
the target difference calculation module respectively calculates the difference between the characteristics of the current frame of each target to be distinguished and the characteristics of the previous frame of each target in the target area; and
the difference distinguishing module takes the difference between the feature of the current frame of each object to be distinguished and the feature of the previous frame of each object in the object area as the new feature of each object to be distinguished, and respectively calculates the difference between the new feature of the current frame of each object to be distinguished and the new feature of the previous frame of each object in the object area.
3. The target distinguishing device according to claim 2, wherein the feature extraction module comprises:
the segmentation unit is configured to segment the target region into non-overlapping segmentation blocks;
a feature calculating unit configured to calculate a feature of each of the divided blocks divided by the dividing unit;
a target determination unit configured to determine a range occupied by each target within the target area; and
a feature generation unit configured to extract features of all the cut pieces within a range occupied by each of the targets determined by the target determination unit, based on the features of the cut pieces calculated by the feature calculation unit, and combine to generate a feature of each of the targets.
4. The target distinguishing device according to claim 3, wherein
The characteristic calculation unit calculates the characteristics corresponding to all the pixel points in each segmentation block; and
the feature generation unit extracts features corresponding to all pixel points in the range occupied by each target, and combines the features to generate features of each target.
5. The target distinguishing device according to claim 1, wherein the new difference $\mathit{newdifference}_{j,N}$ is calculated according to the following formula:

$$\mathit{newdifference}_{j,N} = \mathrm{Distance}'\left(D_{j,N}^{t},\, D_{N,j}^{t-1},\, \left(D_{j,i}^{t}\right)_{i},\, \left(D_{N,i}^{t-1}\right)_{i}\right)$$

wherein $\mathrm{Distance}'$ is a difference evaluation mode, $D_{j,N}^{t}$ is the difference between the feature of the target j to be distinguished in the t-th frame and the feature of the target N in the target area in the (t-1)-th frame, $D_{N,j}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target j to be distinguished in the (t-1)-th frame, $D_{j,i}^{t}$ is the difference between the feature of the target j to be distinguished in the t-th frame and the feature of the target i in the target area in the (t-1)-th frame, and $D_{N,i}^{t-1}$ is the difference between the feature of the target N in the target area in the (t-1)-th frame and the feature of the target i in the target area in the (t-1)-th frame; and

wherein the target N corresponding to the minimum $\mathit{newdifference}_{j,N}$ is the distinguishing result for the target j to be distinguished in the current frame, namely the t-th frame.
6. The target distinguishing device according to claim 5, wherein the difference evaluation mode comprises the Chamfer distance, an energy-weighted average under geometric constraints of image gradient, image gray scale and shape, the Bhattacharyya distance, the Euclidean distance, the Mahalanobis distance, and the cosine distance.
7. An object differentiation method, comprising:
a feature extraction step of extracting the feature of each target in a target area;
a target difference calculation step of calculating the difference between the current feature of each target to be distinguished and the feature of each target in the target area extracted in the feature extraction step; and
a difference distinguishing step of taking the difference, calculated in the target difference calculation step, between the current feature of each target to be distinguished and the feature of each target in the target area as a new feature of each target to be distinguished, and calculating the differences between the new features of the targets to be distinguished so as to distinguish each target to be distinguished;
wherein the target difference calculating step calculates the difference between the feature of the current frame of each target to be distinguished and the feature of the previous frame of each target in the target area according to the following formula:
difference_{j,N} = Distance(F^t_j, F^{t-1}_N)

wherein difference_{j,N} is the difference between the feature of the target j to be distinguished in the current frame t and the feature of the target N in the target area in the previous frame t-1, Distance is a difference evaluation mode, F^t_j is the feature of the target j to be distinguished in the current frame t, and F^{t-1}_N is the feature of the target N in the target area in the previous frame t-1;
wherein the difference distinguishing step takes the vector NF^t_j = (difference_{j,1}, difference_{j,2}, ..., difference_{j,M}), where M is the number of targets in the target area, as the new feature of the current frame of each target j to be distinguished, and respectively calculates the difference between the new feature of the current frame of each target to be distinguished and the new feature of the previous frame of each target in the target area according to the following formula:
newdifference_{j,N} = Distance_new(NF^t_j, NF^{t-1}_N)

wherein newdifference_{j,N} is the difference between the new feature of the target j to be distinguished in the current frame t and the new feature of the target N in the target area in the previous frame t-1, Distance_new is a difference evaluation mode, NF^t_j is the new feature of the target j to be distinguished in the current frame t, NF^{t-1}_N is the new feature of the target N in the target area in the previous frame t-1, and each component Distance(F^{t-1}_N, F^{t-1}_i) of NF^{t-1}_N is the difference between the feature of the target N in the target area in frame t-1 and the feature of the target i in the target area in frame t-1.
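Putting the two formulas of claim 7 together, a hedged end-to-end sketch (assuming lists of numpy feature vectors and hypothetical `distance`/`distance_new` callables such as those sketched earlier), ending with the minimum-new-difference selection of claims 5 and 11:

```python
import numpy as np

def distinguish(cur_feats, prev_feats, distance, distance_new):
    """For each target j in frame t: build NF^t_j, compare it with NF^{t-1}_N
    for every previous-frame target N, and return the N minimizing
    newdifference_{j,N}."""
    # New features of the previous-frame targets: NF^{t-1}_N.
    prev_new = [np.array([distance(f_N, f_i) for f_i in prev_feats])
                for f_N in prev_feats]
    result = {}
    for j, f_j in enumerate(cur_feats):
        nf_j = np.array([distance(f_j, f_i) for f_i in prev_feats])  # NF^t_j
        diffs = [distance_new(nf_j, nf_N) for nf_N in prev_new]
        result[j] = int(np.argmin(diffs))  # minimum newdifference_{j,N}
    return result
```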
8. The object differentiation method according to claim 7, wherein
the feature extraction step extracts the feature of the current frame and the feature of the previous frame of each target in the target area, respectively;
the target difference calculation step respectively calculates the difference between the feature of the current frame of each target to be distinguished and the feature of the previous frame of each target in the target area; and
the difference distinguishing step takes the difference between the feature of the current frame of each target to be distinguished and the feature of the previous frame of each target in the target area as the new feature of each target to be distinguished, and respectively calculates the difference between the new feature of the current frame of each target to be distinguished and the new feature of the previous frame of each target in the target area.
9. The object differentiation method according to claim 8, wherein the feature extraction step comprises:
a segmentation step of segmenting the target area into non-overlapping blocks;
a feature calculation step of calculating a feature of each of the blocks segmented in the segmentation step;
a target determination step of determining the range occupied by each target in the target area; and
a feature generation step of extracting, from the features of the blocks calculated in the feature calculation step, the features of all the blocks within the range occupied by each target determined in the target determination step, and combining them to generate the feature of each target.
10. The object differentiation method according to claim 9, wherein
the feature calculation step calculates the features corresponding to all the pixel points in each segmented block; and
the feature generation step extracts the features corresponding to all the pixel points within the range occupied by each target, and combines them to generate the feature of each target.
11. The object differentiation method according to claim 7, wherein the difference distinguishing step calculates the difference between new features according to the following formula:

newdifference_{j,N} = Distance'((difference_{j,1}, ..., difference_{j,M}), (Distance(F^{t-1}_N, F^{t-1}_1), ..., Distance(F^{t-1}_N, F^{t-1}_M)))

wherein Distance' is a difference evaluation mode, difference_{j,N} is the difference between the feature of the target j to be distinguished in the current frame t and the feature of the target N in the target area in the previous frame t-1, difference_{j,i} is the difference between the feature of the target j to be distinguished in the current frame t and the feature of the target i in the target area in the previous frame t-1, and Distance(F^{t-1}_N, F^{t-1}_i) is the difference between the feature of the target N in the target area in frame t-1 and the feature of the target i in the target area in frame t-1; and

the target N corresponding to the minimum newdifference_{j,N} is the distinguishing result for the target j to be distinguished in the current frame t.
12. The object differentiation method according to claim 11, wherein the difference evaluation mode comprises a Chamfer distance, an energy-integrated weighted average of image gradient, image gray scale, and geometric shape constraints, a Bhattacharyya distance, a Euclidean distance, a Mahalanobis distance, and a cosine distance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200910005387.9A CN101814145B (en) | 2009-02-24 | 2009-02-24 | Target distinguishing method and target distinguishing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101814145A CN101814145A (en) | 2010-08-25 |
CN101814145B true CN101814145B (en) | 2014-01-29 |
Family ID: 42621396
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102609713A (en) * | 2011-01-20 | 2012-07-25 | 索尼公司 | Method and equipment for classifying images |
CN103456179A (en) * | 2012-06-04 | 2013-12-18 | 中兴通讯股份有限公司 | Vehicle statistics method and device and video monitoring system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1716281A (en) * | 2005-06-29 | 2006-01-04 | 上海大学 | Visual quick identifying method for football robot |
CN101017573A (en) * | 2007-02-09 | 2007-08-15 | 南京大学 | Method for detecting and identifying moving target based on video monitoring |
CN101159855A (en) * | 2007-11-14 | 2008-04-09 | 南京优科漫科技有限公司 | Characteristic point analysis based multi-target separation predicting method |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C14 | Grant of patent or utility model | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20140129 |