CN102737249A - Object identification device and object identification method - Google Patents

Info

Publication number: CN102737249A
Authority: CN (China)
Prior art keywords: information, detection region, deformation detection, person, monitored space
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201210034176XA
Other languages: Chinese (zh)
Other versions: CN102737249B (en)
Inventors: 李媛, 三好雅则, 伊藤诚也
Current assignee: Hitachi Industry and Control Solutions Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Hitachi, Ltd.
Application filed by Hitachi, Ltd.
Publication of CN102737249A (application); application granted; publication of CN102737249B (grant)
Current status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an object identification device and an object identification method that can improve the identification rate for an identification target even when the identification target appears deformed on the image. In the object identification device (1), a storage part (20) stores monitoring object information (210), which is information about the object that is the identification target, and camera information (220), which holds the parameters of the camera device. The object identification device (1) uses the monitoring object information (210) to generate a detection region in the monitored space and uses the camera information (220) to transform the generated detection region into a deformation detection region on the monitoring image. The object identification device (1) then extracts feature quantities from the image information (100) of the deformation detection region to judge whether the person that is the identification target is present.

Description

Object identification device and object identification method
Technical field
The present invention relates to an object identification device and an object identification method.
Background art
As methods for identifying a specific object from image information obtained by an imaging device such as a surveillance camera, many methods have been proposed in the prior art that extract feature quantities from a prescribed region of the image (for example, a rectangular region indicating the range over which identification processing is performed) and recognize the object from the extracted feature quantities.
For example, as a technique for realizing person recognition, Patent Document 1 discloses the following: person recognition is realized using a recognizer generation device and a recognition device. The recognizer is generated from learning samples consisting of images that contain a person and images that do not, by extracting feature quantities such as appearance information (information on shape and color read from the appearance of the object) and contour information, and it distinguishes a person from objects other than a person. The recognition device uses this recognizer to recognize whether a person is present in a prescribed region (rectangular region) on the image.
In addition, Patent Document 2 discloses the following: to improve identification accuracy, the prescribed region (rectangular region) on the image is divided into left and right regions, and when any of the left-half image, the right-half image, and the whole image is judged to show a person, a person is recognized as present in the prescribed region on the image.
[Patent Document 1] Japanese Laid-Open Patent Publication No. 2009-181220
[Patent Document 2] Japanese Laid-Open Patent Publication No. 2010-108138
Existing object recognition methods scan a predefined prescribed region (rectangular region) over every image position and determine whether an identification target such as a person or an object is contained in the region. In the numerous learning samples used when generating the recognizer that performs this identification, the person is mostly in a standing posture. Consequently, when the person in the scanned rectangular region is in a standing posture like the learning samples, the identification rate is relatively high.
In practice, however, the appearance on the image of an object (a person or the like) photographed by a surveillance camera or the like differs according to camera parameters such as the position, depression angle, and field angle of the camera, the size of the object, and the positional relation between the camera and the object (hereinafter, this information about the camera or the object itself and about the positional relation between the camera and the object is collectively called "monitored space information"). For example, distortion sometimes occurs at the edges of the image, and depending on how the camera is installed the image may be tilted (not horizontal). Moreover, depending on the distance between the camera and the object, the object may appear reduced, enlarged, or otherwise deformed on the image. If the recognition method based on a predefined prescribed region (rectangular region) of the image is applied to an image deformed by distortion or the like, then whenever the identification target in the rectangular region does not match the shape of the learning samples, problems such as misidentification or failure to identify arise.
For example, in the technique disclosed in Patent Document 1, the contour of the whole body is learned from learning samples of person images in a standing posture, and a recognizer is generated. With the generated recognizer, correct identification is possible if the person extracted from the prescribed region (rectangular region) of the image is in a standing posture. However, if the image contains distorted portions, the person feature quantities extracted from the prescribed region (rectangular region) of the image are insufficient, and the person sometimes cannot be identified.
Likewise, in the technique disclosed in Patent Document 2, even if the prescribed region of the image is divided into left and right regions, deformation such as distortion remains in the image, so the feature quantities required for person recognition cannot be extracted and the identification rate drops.
If one tries to solve these problems within the prior art, a method of including data that accounts for deformation such as distortion in the learning samples could be adopted. With this method, however, a large number of samples must be produced to match the various conditions of the camera and the object, and because the way the identification target is rendered on the image (its contour and so on) varies, the probability that background that is not the identification target is erroneously presented as the identification target tends to increase; the method is therefore impractical.
Summary of the invention
The present invention was made against the above background, and the problem of the present invention is to provide an object identification device and an object identification method that can improve the identification rate for an identification target even when the identification target appears deformed on the image.
To solve the above problem, the object identification device of the present invention stores monitored space information consisting of monitoring object information about the object itself that is the monitoring target and camera information about the parameters of the camera device. The object identification device uses the monitoring object information to generate a detection region in the monitored space, and uses the camera information 220 to transform the generated detection region into a deformation detection region on the monitoring image. The object identification device then extracts feature quantities from the image information of the deformation detection region to judge whether the object that is the identification target is present.
(Effect of the invention)
According to the present invention, an object identification device and an object identification method can be provided that improve the identification rate for an identification target even when the identification target appears deformed on the image.
Description of drawings
Fig. 1 is a functional block diagram showing a structural example of the object identification device according to the first embodiment of the present invention.
Fig. 2 is a functional block diagram showing the structure of the deformation detection region generation portion according to the first embodiment of the present invention.
Fig. 3 is a diagram showing an example of transforming a detection region in the monitored space into a deformation detection region on the monitoring image.
Fig. 4 is a diagram showing an example of the deformation detection region on the monitoring image generated by the deformation detection region generation portion.
Fig. 5 is a functional block diagram showing the structure of the object identification portion according to the first embodiment of the present invention.
Fig. 6 is a flowchart showing the flow of the object identification processing performed by the object identification device according to the first embodiment of the present invention.
Fig. 7 is a functional block diagram showing a structural example of the object identification device according to the second embodiment of the present invention.
Fig. 8 is a functional block diagram showing the structure of the identification range determination portion according to the second embodiment of the present invention.
Fig. 9 is an explanatory diagram of the identification range determination processing performed by the identification range determination portion according to the second embodiment of the present invention.
Fig. 10 is a flowchart showing the flow of the identification range determination processing performed by the object identification device according to the second embodiment of the present invention.
Fig. 11 is a functional block diagram showing the structure of the object identification portion according to the third embodiment of the present invention.
Fig. 12 is a flowchart showing the flow of the person recognition processing performed by the object identification device according to the third embodiment of the present invention.
Fig. 13 is an explanatory diagram of the flow of the person recognition processing according to the third embodiment of the present invention.
Fig. 14 is a functional block diagram showing the structure of the object identification portion according to the fourth embodiment of the present invention.
Fig. 15 is an explanatory diagram of the person counting processing according to the fourth embodiment of the present invention.
Fig. 16 is a flowchart showing the flow of the person counting processing performed by the object identification device according to the fourth embodiment of the present invention.
Fig. 17 is a functional block diagram showing a structural example of the object identification device according to the fifth embodiment of the present invention.
Fig. 18 is an explanatory diagram of the person tracking processing according to the fifth embodiment of the present invention.
Fig. 19 is a functional block diagram showing the structure of the tracking object identification portion according to the fifth embodiment of the present invention.
Fig. 20 is a flowchart showing the flow of the person tracking processing performed by the object identification device according to the fifth embodiment of the present invention.
Fig. 21 is a flowchart showing the flow of the person tracking processing performed by the object identification device according to the fifth embodiment of the present invention.
Symbol description: 1 - object identification device; 10 - control portion; 20 - storage portion; 30 - memory portion; 40 - input portion; 50 - output portion; 92 - person's spatial position in the previous frame; 110 - image acquisition portion; 120 - monitored space information acquisition portion; 130 - deformation detection region generation portion; 131 - monitored space detection region generation portion; 132 - image detection region transformation portion; 140 - object identification portion; 141 - detection region image extraction portion; 142 - feature quantity extraction portion; 143 - identification processing portion; 144 - region normalization portion; 145 - person detection portion; 146 - moving direction identification portion; 147 - entry/exit judgment portion; 150 - output processing portion; 160 - identification range determination portion; 161 - monitored space environment generation portion; 162 - image environment transformation portion; 163 - identification range differentiation portion; 170 - tracking deformation detection region generation portion; 180 - tracking object identification portion; 181 - tracking detection region image extraction portion; 182 - person confirmation portion; 200 - monitored space information; 210 - monitoring object information; 220 - camera information; 1000 - monitored space; 1100 - detection region (rectangular region); 1101 - tracking detection region; 2000 - monitoring image; 2100 - deformation detection region; 2101 - tracking deformation detection region.
Embodiment
Below, modes for carrying out the present invention (hereinafter called "embodiments") are described in detail with appropriate reference to the accompanying drawings.
" First embodiment: object identification device "
First, the object identification device 1 (1a) according to the first embodiment of the present invention is described.
The object identification device 1 (1a) according to the first embodiment generates a detection region 1100, described later, in the monitored space 1000 (see Fig. 3) as a region containing the identification target, and uses the camera information 220 to transform the generated detection region 1100 into a deformation detection region 2100 on the monitoring image 2000. The deformation detection region 2100 on the monitoring image 2000 reflects the information related to the various parameters of the camera device and, as shown in Fig. 3, is generated taking the distortion of the monitoring image 2000 into account. The object identification device 1 (1a) then extracts feature quantities from the image information 100 of the deformation detection region 2100, generated as a region containing the identification target deformed by distortion or the like, to judge whether the object that is the identification target is present.
Fig. 1 is a functional block diagram showing a structural example of the object identification device 1 (1a) according to the first embodiment of the present invention.
As shown in Fig. 1, the object identification device 1 (1a) according to the first embodiment comprises a control portion 10, a storage portion 20, a memory portion 30, an input portion 40, and an output portion 50.
The input portion 40 consists of an input interface and obtains the image information 100 photographed by an imaging device such as a camera device (not shown).
The output portion 50 consists of an output interface and is used to display the processing results of the control portion 10 on a display device (not shown) such as a monitor.
The storage portion 20 consists of a storage unit such as a hard disk or flash memory and stores the monitored space information 200, which holds the information about the camera or the object itself and the positional relation between the camera and the object. The monitored space information 200 comprises monitoring object information 210 and camera information 220.
The monitoring object information 210 is information about the person or object that is the identification target. For example, when the identification target is a person, the monitoring object information 210 holds the person's width, height, foot coordinate (position coordinate in the monitored space), and the like; when the identification target is an object, it holds information such as the size (length, width, and height) of the object (for example, an automobile or an animal other than a human). The deformation detection region generation portion 130 uses the monitoring object information 210 of the person or object when generating the detection region (rectangular region) 1100 containing the identification target (details are described later).
The camera information 220 holds the internal and external parameters of the camera device (not shown) that supplies the image information 100 obtained through the input portion 40. The internal parameters include the camera focal length (f), the center of the image coordinates (u_c, v_c), the lens distortion coefficients (k), and the aspect ratio (S) of the image coordinates. The external parameters are the translation vector (T) and the three-by-three rotation matrix (R) referenced to the origin of the world coordinate system. These parameters can be set using existing techniques, for example the calibration method of R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, pp. 323-344, 1987.
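As a concrete illustration of how these parameters might be held together, the following is a minimal sketch in Python; the class name and field layout are assumptions made for this illustration and are not part of the patent.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class CameraInfo:
        # Hypothetical container mirroring the camera information 220 described above.
        f: float           # focal length (internal parameter)
        center: tuple      # center of the image coordinates (u_c, v_c)
        aspect: float      # aspect ratio S of the image coordinates
        k1: float          # lens distortion coefficient k_1
        k2: float          # lens distortion coefficient k_2
        R: np.ndarray      # 3x3 rotation matrix (external parameter)
        T: np.ndarray      # length-3 translation vector (external parameter)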
The memory portion 30 consists of a storage device such as a RAM (Random Access Memory) and temporarily stores information required during the object identification processing performed by the control portion 10.
The control portion 10 performs overall control of the object identification processing of the object identification device 1 (1a) and comprises an image acquisition portion 110, a monitored space information acquisition portion 120, a deformation detection region generation portion 130, an object identification portion 140 (140a), and an output processing portion 150. The functions of the control portion 10 are realized, for example, by a CPU (Central Processing Unit) reading the program stored in the storage portion 20 of the object identification device 1 (1a) into the memory portion 30 and executing it.
The image acquisition portion 110 obtains the image information 100 from the camera device or the like through the input portion 40.
The monitored space information acquisition portion 120 obtains the monitoring object information 210 and the camera information 220 from the monitored space information 200 in the storage portion 20.
The deformation detection region generation portion 130 obtains the monitoring object information 210 through the monitored space information acquisition portion 120, generates a detection region (rectangular region) 1100 in the monitored space 1000 (see Fig. 3, described later) from the information about the person or object that is the identification target, and transforms this detection region 1100 into the deformation detection region 2100 on the monitoring image 2000 (see Fig. 3, described later) based on the camera information 220. Details of the deformation detection region generation portion 130 are described later with reference to Figs. 2 to 4.
The object identification portion 140 (140a) receives the deformation detection region 2100 generated by the deformation detection region generation portion 130 and the image information 100 obtained by the image acquisition portion 110, and extracts feature quantities from the image information 100 of the deformation detection region 2100. The object identification portion 140 (140a) then uses the extracted feature quantities to judge whether the person or object that is the identification target is present, and outputs the identification result to the output processing portion 150. Details of the object identification portion 140 (140a) are described later with reference to Fig. 5.
The output processing portion 150 displays the identification results processed by the object identification portion 140 of the control portion 10 on the display device through the output portion 50. For example, the output processing portion 150 surrounds the deformation detection region 2100 on the monitoring image 2000 of the display device with a red frame or the like, or displays pop-up information (a pop-up message) on the monitoring image indicating that the person or object that is the identification target has been recognized. It may also be configured to send information (mail or the like) to a portable terminal indicating that the person or object that is the identification target has been recognized.
(Deformation detection region generation portion 130)
Below, the structure of the deformation detection region generation portion 130 of the object identification device 1 (1a) according to the first embodiment is described in detail. Fig. 2 is a functional block diagram showing the structure of the deformation detection region generation portion 130 according to the first embodiment of the present invention.
As shown in Fig. 2, the deformation detection region generation portion 130 comprises a monitored space detection region generation portion 131 and an image detection region transformation portion 132.
The monitored space detection region generation portion 131 receives the monitoring object information 210 from the monitored space information acquisition portion 120 and generates a detection region 1100 in the monitored space 1000. The detection region 1100 represents the spatial region in the monitored space 1000 in which the object identification device 1 detects whether the person or object that is the identification target exists. The shape of the detection region can be varied, for example a rectangle, circle, rhombus, or pentagon, according to the person or object that is the identification target; in this embodiment a rectangle is used as an example.
For example, when the identification target is a person, the monitored space detection region generation portion 131 uses the monitoring object information 210 to determine, in world coordinates, the width, height, and foot coordinate of the person that is the identification target, and generates the detection region (rectangular region) 1100, as sketched below. When a specific person to be identified is input to the object identification device 1 (1a) through the input portion 40, the monitored space detection region generation portion 131 obtains the information corresponding to that person from the monitoring object information 210. On the other hand, when all persons rather than a specific person are to be identified, the monitored space detection region generation portion 131 may be configured to generate detection regions (rectangular regions) 1100 with various person widths and heights, and unspecified persons are identified by repeating the object identification processing.
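A minimal sketch of this generation step follows, assuming an upright rectangle whose bottom edge is centered on the person's foot coordinate and taking the z axis of the world coordinate system as vertical; the function name and conventions are illustrative assumptions.

    import numpy as np

    def make_detection_region(foot_xyz, width, height):
        # Four world-coordinate vertices of an upright rectangle standing on
        # the foot coordinate (z is assumed to be the vertical axis).
        x, y, z = foot_xyz
        half = width / 2.0
        return np.array([
            [x - half, y, z],           # bottom-left
            [x + half, y, z],           # bottom-right
            [x + half, y, z + height],  # top-right
            [x - half, y, z + height],  # top-left
        ])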
The image detection region transformation portion 132 receives the camera information 220 from the monitored space information acquisition portion 120 and transforms the detection region 1100 generated by the monitored space detection region generation portion 131 onto the monitoring image 2000 to generate the deformation detection region 2100.
Fig. 3 is an explanatory diagram of an example in which the image detection region transformation portion 132 transforms a detection region (rectangular region) 1100 generated in the monitored space 1000 by the monitored space detection region generation portion 131 into a deformation detection region 2100 on the monitoring image 2000.
As shown in Fig. 3, the monitored space 1000 is represented in the world coordinate system, whose origin 61 is set, for example, at an arbitrary point on the ground. The camera device 62 is installed at a spatial position 63 (X, Y, Z), and the monitoring image 2000 photographed by the camera device 62 is represented in the image coordinate system. The monitored space detection region generation portion 131 generates the detection region (rectangular region) 1100 at a prescribed position (position coordinate) of the monitored area in the monitored space 1000, according to the person's width, height, and so on obtained from the monitoring object information 210.
The image detection region transformation portion 132, in turn, uses the internal and external parameters obtained from the camera information 220 to transform the world coordinates of the vertices of the detection region (rectangular region) 1100 (the four vertices of the rectangle) into points on the monitoring image 2000, thereby generating the deformation detection region 2100.
The degree of distortion, reduction, enlargement, and so on of the deformation detection region 2100 on the monitoring image 2000 varies with the spatial position 63, the depression angle 64, and other conditions of the camera device 62, and also varies with the position within the monitored space 1000 (monitored area).
The generation processing of the deformation detection region 2100 performed by the image detection region transformation portion 132 is now described in detail. In Fig. 3, the image detection region transformation portion 132 transforms the vertex coordinates (the four vertices of the rectangle) of the detection region (rectangular region) 1100 generated in the monitored space 1000 by the monitored space detection region generation portion 131 into image coordinates on the monitoring image 2000, thereby generating the deformation detection region 2100.
The processing by which the image detection region transformation portion 132 uses the internal and external parameters of the camera information 220 to transform an arbitrary point p in the monitored space 1000 into its image coordinates (u, v) on the monitoring image 2000 is described below.
First, as shown in formula (1), the world coordinates (x_w, y_w, z_w) of the point p are transformed into camera coordinates (x_c, y_c, z_c):
    \begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix} = R \begin{pmatrix} x_w \\ y_w \\ z_w \end{pmatrix} - T, \qquad R = \begin{pmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{pmatrix} \tag{1}
where T is the translation vector and R the three-row, three-column rotation matrix of the external parameters.
Next, as shown in formula (2), the camera coordinates are transformed into the undistorted image coordinates (X_u, Y_u):
    X_u = f \frac{x_c}{z_c}, \qquad Y_u = f \frac{y_c}{z_c} \tag{2}
where f is the camera focal length and (x_c, y_c, z_c) are the camera coordinates obtained by formula (1).
Then, substituting the lens distortion coefficients k_1 and k_2 and the aspect ratio S of the image coordinates into formula (3), the image position (X_d, Y_d) in the presence of distortion is obtained:
    X_u = S X_d (1 + k_1 r^2 + k_2 r^4), \qquad Y_u = Y_d (1 + k_1 r^2 + k_2 r^4) \tag{3}
where r is the radial distance of the distorted position from the image center (r^2 = X_d^2 + Y_d^2).
Finally, formula (4) gives the image coordinates (u, v) on the monitoring image 2000 to which the point p is transformed by formulas (1), (2), and (3):
    u = u_c - X_d, \qquad v = v_c - Y_d \tag{4}
where (u_c, v_c) is the center of the image coordinates.
As described above, the image detection region transformation portion 132 applies formulas (1) to (4) to transform the vertices of the detection region (rectangular region) 1100 generated in the monitored space 1000 into points on the monitoring image 2000, and can thereby generate a deformation detection region 2100 that matches the distortion and other deformation of the monitoring image 2000.
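Under the same illustrative assumptions, formulas (1) to (4) can be sketched in code as follows, reusing the CameraInfo container from earlier. Because formula (3) is implicit in the distorted position, this sketch recovers (X_d, Y_d) by a short fixed-point iteration, which is an implementation choice of this example and not something the patent prescribes.

    import numpy as np

    def world_to_image(p_w, cam):
        # Formula (1): world coordinates -> camera coordinates.
        x_c, y_c, z_c = cam.R @ np.asarray(p_w, dtype=float) - cam.T
        # Formula (2): perspective projection to undistorted coordinates.
        X_u, Y_u = cam.f * x_c / z_c, cam.f * y_c / z_c
        # Formula (3): recover the distorted position (X_d, Y_d) by iterating
        # on the radial term r^2 = X_d^2 + Y_d^2.
        X_d, Y_d = X_u / cam.aspect, Y_u
        for _ in range(5):
            r2 = X_d ** 2 + Y_d ** 2
            scale = 1.0 + cam.k1 * r2 + cam.k2 * r2 ** 2
            X_d, Y_d = X_u / (cam.aspect * scale), Y_u / scale
        # Formula (4): distorted position -> image coordinates (u, v).
        u_c, v_c = cam.center
        return u_c - X_d, v_c - Y_d

Applying this function to the four vertices returned by make_detection_region gives the image-coordinate corners of the deformation detection region 2100.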
Fig. 4 illustrates deformation detection regions 2100 (2100A, 2100B) generated on monitoring images 2000 (2000A, 2000B) by the deformation detection region generation portion 130. Figs. 4(a) and 4(b) show the deformation detection regions on the monitoring image 2000 obtained for a detection region (rectangular region) 1100 at the same position in the same monitored space 1000 when the depression angle 64 is changed: the depression angle 64A of the camera device 62A in Fig. 4(a) is 30 degrees, and the depression angle 64B of the camera device 62B in Fig. 4(b) is 60 degrees. Even though the position of the detection region (rectangular region) 1100 in the monitored space 1000 is the same, the degree of distortion and other deformation and the image coordinates of the deformation detection regions 2100A and 2100B on the monitoring images 2000A and 2000B differ. In Figs. 4(a) and 4(b), the portions extending beyond the display frame are also shown so that the deformation of the deformation detection region 2100 is easy to understand.
The deformation detection region generation portion 130 may also adopt the following method when generating the deformation detection region 2100. Rather than applying formulas (1) to (4) every time a detection region (rectangular region) 1100 is generated, a look-up table (reference table) that associates the world coordinates (x_w, y_w, z_w) of the monitored space 1000 with the image coordinates (u, v) on the monitoring image 2000 is generated in advance. When generating the deformation detection region 2100, the deformation detection region generation portion 130 then refers to this look-up table and omits the transformation processing, which speeds up the mutual conversion between the detection region (rectangular region) 1100 and the deformation detection region 2100.
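The look-up idea can be sketched by reusing the world_to_image function above; the grid layout and function name are assumptions of this example.

    def build_lookup(cam, xs, ys, z=0.0):
        # Precompute image coordinates for a grid of ground points so that
        # later conversions can skip formulas (1)-(4).
        return {(x, y): world_to_image((x, y, z), cam)
                for x in xs for y in ys}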
(Object identification portion 140)
Below, the structure of the object identification portion 140 (140a) of the object identification device 1 (1a) according to the first embodiment is described in detail. Fig. 5 is a functional block diagram showing the structure of the object identification portion 140 (140a) according to the first embodiment of the present invention.
As shown in Fig. 5, the object identification portion 140 (140a) comprises a detection region image extraction portion 141, a feature quantity extraction portion 142, and an identification processing portion 143.
The detection region image extraction portion 141 receives the image information 100 from the image acquisition portion 110 (see Fig. 1) and extracts from it the image information 100 of the deformation detection region 2100 generated by the deformation detection region generation portion 130.
The feature quantity extraction portion 142 uses the image information 100 extracted by the detection region image extraction portion 141 to extract the prescribed feature quantities corresponding to the recognizer. As the feature quantities, for example, HOG (Histograms of Oriented Gradients) features, described in N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection", IEEE Computer Vision and Pattern Recognition, pp. 886-893, 2005, can be adopted; HOG histograms the brightness gradients of local areas and captures the contour of the object. As other feature quantities, Haar-like features, which use the lightness difference of adjacent rectangular regions, or information such as the brightness value or color of each pixel, can be adopted.
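The patent does not name an implementation for these features; as one hedged illustration, the HOG descriptor cited above can be computed with scikit-image (a stand-in library choice), using the parameter settings common for the Dalal-Triggs person detector.

    from skimage.feature import hog

    def extract_features(region_image):
        # region_image: 2D grayscale array of the extracted region.
        return hog(region_image, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm='L2-Hys')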
The identification processing portion 143 performs identification processing of the person or object using the feature quantities extracted by the feature quantity extraction portion 142. As the identification method used by the identification processing portion 143, for example, a recognition method using a recognizer generated from learning samples, or an algorithm that calculates the degree of similarity of color and pattern, can be adopted. Methods that calculate similarity often use the sum of absolute differences of brightness (SAD: Sum of Absolute Differences), the sum of squared differences of brightness (SSD: Sum of Squared Differences), or normalized cross-correlation (NCC: Normalized Cross Correlation).
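The three similarity measures named here have compact definitions; the following is a minimal NumPy sketch, assuming equal-shaped float arrays and a nonzero NCC denominator.

    import numpy as np

    def sad(a, b):   # Sum of Absolute Differences
        return np.abs(a - b).sum()

    def ssd(a, b):   # Sum of Squared Differences
        return ((a - b) ** 2).sum()

    def ncc(a, b):   # Normalized Cross-Correlation
        a, b = a - a.mean(), b - b.mean()
        return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())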
(Object identification method of the object identification device)
Below, the object identification processing performed by the object identification device 1 (1a) according to the first embodiment is described in detail.
Fig. 6 is a flowchart showing the flow of the object identification processing performed by the object identification device 1 (1a) according to the first embodiment of the present invention. Here, the case where the identification target is a person is described as an example.
First, the image acquisition portion 110 of the object identification device 1 (1a) obtains the image information 100 from an imaging device such as a surveillance camera through the input portion 40 (step S101).
Next, the monitored space information acquisition portion 120 of the object identification device 1 (1a) obtains the monitoring object information 210 and the camera information 220 from the monitored space information 200 in the storage portion 20 (step S102).
Next, the deformation detection region generation portion 130 of the object identification device 1 (1a) receives the monitoring object information 210 from the monitored space information acquisition portion 120 and generates a detection region (rectangular region) 1100 in the monitored space 1000 (step S103).
Specifically, the monitored space detection region generation portion 131 of the deformation detection region generation portion 130 generates the detection region (rectangular region) 1100 using information such as the person's width and height contained in the monitoring object information 210.
The image detection region transformation portion 132 of the deformation detection region generation portion 130 then receives the camera information 220 from the monitored space information acquisition portion 120 and selects a coordinate from the position coordinates of the monitored area in the monitored space 1000 (here, the foot coordinates of the person that is the identification target) (step S104). This processing scans the foot coordinates of the corresponding monitored space 1000 so that a person present at any position on the monitoring image 2000 can be handled.
Next, the image detection region transformation portion 132 uses the camera information 220 to transform the detection region 1100 generated by the monitored space detection region generation portion 131 in step S103 into the deformation detection region 2100 on the monitoring image 2000 (step S105).
Next, the object identification portion 140 of the object identification device 1 (1a) extracts feature quantities using the image information 100 of the deformation detection region 2100.
Specifically, the detection region image extraction portion 141 of the object identification portion 140 (140a) receives the image information 100 of the surveillance camera or the like from the image acquisition portion 110 and extracts from it the image information 100 of the deformation detection region 2100 generated by the deformation detection region generation portion 130 (step S106). The feature quantity extraction portion 142 of the object identification portion 140 then uses the image information 100 extracted by the detection region image extraction portion 141 to extract the prescribed feature quantities corresponding to the recognizer (step S107).
Next, the identification processing portion 143 of the object identification portion 140 performs identification processing using the feature quantities extracted by the feature quantity extraction portion 142 (step S108) and judges from the result whether the identification target is present (step S109). Here, the identification processing portion 143 judges whether a person, the identification target, is present.
If the identification processing portion 143 judges in step S109 that the identification target (person) is present (step S109 → Yes), the identification processing portion 143 sends this information to the output processing portion 150.
The output processing portion 150 then outputs the identification result to the monitoring image 2000 displayed on the display device (not shown) through the output portion 50 (step S110). Here, the output processing portion 150 indicates that a person has been recognized by, for example, marking the deformation detection region 2100 of the monitoring image 2000 with a red frame. The output processing portion 150 may also be configured to display, in addition to the monitoring image 2000, the detection region (rectangular region) 1100 containing the identification target (person) surrounded by a frame on a picture simulating the monitored space 1000. The processing then proceeds to step S111.
On the other hand, if the identification processing portion 143 judges in step S109 that the identification target (person) is not present (step S109 → No), the processing proceeds to step S111.
In step S111, the object identification portion 140 (identification processing portion 143) judges whether the processing of all position coordinates of the monitored area in the monitored space 1000 (here, the foot coordinates of the person that is the identification target) has finished.
If there remain unprocessed position coordinates of the monitored area (step S111 → No), the processing returns to step S104 and continues with the position coordinate of the next monitored area. If all position coordinates of the monitored area have been processed (step S111 → Yes), the object identification processing ends. The whole loop is outlined in the sketch below.
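Tying the earlier sketches together, the scan loop of steps S103 to S111 can be outlined as follows; classifier stands in for any recognizer trained on learning samples, the bounding-box crop is a simplification of the region extraction of step S106, and all names are illustrative assumptions.

    import numpy as np

    def identify_frame(frame, cam, person, foot_coords, classifier):
        hits = []
        for foot in foot_coords:                              # step S104
            region = make_detection_region(foot, person['width'],
                                           person['height'])  # step S103
            quad = [world_to_image(v, cam) for v in region]   # step S105
            us, vs = zip(*quad)
            u0, u1 = max(0, int(min(us))), max(0, int(max(us)))
            v0, v1 = max(0, int(min(vs))), max(0, int(max(vs)))
            patch = frame[v0:v1, u0:u1]                       # step S106
            if patch.size and classifier(extract_features(patch)):  # S107-S109
                hits.append(quad)                             # step S110
        return hits                                           # loop ends: step S111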
As described above, the object identification device 1 (1a) according to the first embodiment uses the deformation detection region 2100 on the monitoring image 2000 and can therefore extract feature quantities in a manner suited to the distortion and other deformation of the person or object, which improves the identification accuracy.
" Second embodiment: identification range determination processing of the object identification device "
Below, the identification range determination processing of the object identification device 1 (1b) (see Fig. 7) according to the second embodiment of the present invention is described. The object identification device 1 (1b) according to the second embodiment has, in the monitored space information 200, monitoring environment information 230 representing the arrangement of objects in the monitored space 1000, and an identification range determination portion 160 provided in the control portion 10 determines the range over which object identification processing is effective, excluding regions where the identification target cannot exist. The deformation detection region generation portion 130 therefore does not need to process all position coordinates of the monitored area in the monitored space 1000; it only needs to transform detection regions (rectangular regions) 1100 into deformation detection regions 2100 within the identification range determined by the identification range determination portion 160.
Fig. 7 is a functional block diagram showing a structural example of the object identification device 1 (1b) according to the second embodiment of the present invention. In addition to the structure of the object identification device 1 (1a) shown in Fig. 1, the object identification device 1 (1b) shown in Fig. 7 has monitoring environment information 230 in the monitored space information 200 of the storage portion 20 and an identification range determination portion 160 in the control portion 10.
The monitoring environment information 230 represents the arrangement of objects in the monitored space 1000. For example, if the monitored space 1000 is outdoors, the monitoring environment information 230 represents the arrangement of walls, road surfaces, buildings (high-rises, houses), and the like; if the monitored space 1000 is indoors, it represents the arrangement of walls, passages, desks, cabinets, and the like.
The monitoring environment information 230 is generated in advance using, for example, spreadsheet software, a 3D design tool such as CAD, a map system (navigation system), or GPS (Global Positioning System), and is stored in the monitored space information 200 of the storage portion 20.
The identification range determination portion 160 then uses the monitoring environment information 230 to determine, in the monitored space 1000, the range over which object identification processing is effective, excluding regions where the identification target cannot exist.
Fig. 8 is a functional block diagram showing the structure of the identification range determination portion 160 according to the second embodiment of the present invention.
As shown in Fig. 8, the identification range determination portion 160 comprises a monitored space environment generation portion 161, an image environment transformation portion 162, and an identification range differentiation portion 163.
The monitored space environment generation portion 161 uses the monitoring environment information 230 received from the monitored space information acquisition portion 120 to compute, as world coordinates, the arrangement of objects such as walls and road surfaces in the monitored space 1000.
The image environment transformation portion 162 receives the camera information 220 from the monitored space information acquisition portion 120 and uses the camera information 220 to transform the arrangement of objects in the monitored space 1000 into image coordinates on the monitoring image 2000. This processing can be carried out in the same way as the processing by which the image detection region transformation portion 132 of the first embodiment transforms points in the monitored space 1000 into image coordinates on the monitoring image 2000. The object arrangement represented by the monitoring environment information 230 can thus be transformed correctly onto the monitoring image 2000 according to the depression angle, spatial position, and so on of the camera device.
The identification range differentiation portion 163 differentiates the regions on the monitoring image 2000 transformed by the image environment transformation portion 162 and excludes regions where the identification target cannot exist, thereby determining the range over which object identification processing is effective. For example, wall regions in the monitored space 1000, where no person can exist, are excluded from the position coordinates of the monitored area that the deformation detection region generation portion 130 (image detection region transformation portion 132) searches, and the identification range that the object identification device 1 (1b) will process is thereby determined. The identification range differentiation portion 163 sends the information of the determined identification range (identification range information 400 in Fig. 8) to the deformation detection region generation portion 130.
Fig. 9 is an explanatory diagram of the identification range determination processing performed by the identification range determination portion 160 according to the second embodiment of the present invention.
As shown in Fig. 9(a), a wall 71 and a road surface 72 are present in the monitored space 1000. The image environment transformation portion 162 of the identification range determination portion 160 transforms the world coordinates of the wall 71 and the road surface 72 in the monitored space 1000 into image coordinates on the monitoring image 2000. The identification range differentiation portion 163 then excludes areas where the identification target cannot exist, such as the area of the wall 71 and beyond, and, as shown in Fig. 9(b), determines as the identification range the road surface 72 on which a person or object (for example, an automobile) that is the identification target can exist; object identification processing is performed within this range.
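One way to realize this exclusion, sketched under the same assumptions as the earlier code (OpenCV is a stand-in choice), is to rasterize the ground polygons where a target can exist into a boolean mask over the monitoring image.

    import numpy as np
    import cv2

    def identification_mask(image_shape, ground_polygons, cam):
        # ground_polygons: world-coordinate polygons (e.g. the road surface 72)
        # where a target can exist; everything else (e.g. the wall 71) stays excluded.
        mask = np.zeros(image_shape, dtype=np.uint8)
        for poly in ground_polygons:
            pts = np.array([world_to_image(p, cam) for p in poly],
                           dtype=np.int32)
            cv2.fillPoly(mask, [pts], 1)
        return mask.astype(bool)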
(Identification range determination processing of the object identification device)
Below, the identification range determination processing performed by the object identification device 1 (1b) according to the second embodiment is described in detail.
Fig. 10 is a flowchart showing the flow of the identification range determination processing performed by the object identification device 1 (1b) according to the second embodiment of the present invention.
Within the overall object identification processing shown in Fig. 6, this identification range determination processing is executed after the detection region generation processing performed by the deformation detection region generation portion 130 in step S103, and it limits the range of position coordinates of the monitored area that the deformation detection region generation portion 130 selects in step S104.
First, the monitored space environment generation portion 161 of the identification range determination portion 160 receives the monitoring environment information 230 from the monitored space information 200 in the storage portion 20 through the monitored space information acquisition portion 120. The monitored space environment generation portion 161 then computes, as world coordinates, the arrangement in the monitored space 1000 of the objects described in the monitoring environment information 230, such as walls, road surfaces, and buildings (outdoors) or walls, passages, desks, and cabinets (indoors) (step S201).
Next, the image environment transformation portion 162 of the identification range determination portion 160 receives the camera information 220 from the monitored space information acquisition portion 120 and uses the camera information 220 to transform the object arrangement in the monitored space 1000 computed in step S201 into image coordinates on the monitoring image 2000 (step S202).
Next, the identification range differentiation portion 163 of the identification range determination portion 160 divides the object arrangement transformed onto the monitoring image 2000 into the regions the objects occupy on the monitoring image 2000 and excludes the regions where the identification target cannot exist (see Fig. 9), thereby determining the range (region) over which object identification processing is effective (step S203).
Thus, in steps S104 and S105 of Fig. 6, the deformation detection region generation portion 130 can limit the range of position coordinates of the monitored area it selects to the positions in the monitored space 1000 corresponding to the identification range determined on the monitoring image 2000 (identification range information 400), and generate the deformation detection regions 2100 accordingly.
Therefore, by determining the identification range using the monitoring environment information 230, the object identification device 1 (1b) according to the second embodiment can reduce the processing load of the object identification device 1. Moreover, since identification processing is not performed on regions that do not need to be monitored, the possibility of false detection in the object identification processing can be reduced.
" Third embodiment: person recognition processing of the object identification device "
Below, the object identification device 1 according to the third embodiment of the present invention is described. The object identification device 1 according to the third embodiment is characterized in that, in addition to the object identification processing of the object identification device 1 (1a) according to the first embodiment, the object identification portion 140 (140c) (see Fig. 11) normalizes the image information 100 of the deformation detection region 2100 generated by the deformation detection region generation portion 130 by transforming it into a form suited to the recognizer or identifier, and extracts feature quantities from the normalized image information 100.
The overall structure of the object identification device 1 according to the third embodiment is the same as that of the object identification device 1 (1a) according to the first embodiment shown in Fig. 1. Structures having the same functions as in the object identification device 1 (1a) are given the same names and symbols, and their description is omitted.
Fig. 11 is a functional block diagram showing the structure of the object identification portion 140 (140c) according to the third embodiment of the present invention. It differs from the object identification portion 140 (140a) according to the first embodiment shown in Fig. 5 in that a region normalization portion 144 is provided between the detection region image extraction portion 141 and the feature quantity extraction portion 142, and a person detection portion 145 is provided in place of the identification processing portion 143.
The region normalization portion 144 adopts a perspective projection transformation to transform the image information 100 of the deformation detection region 2100 extracted by the detection region image extraction portion 141 into a shape suited to the recognizer or identifier, for example a rectangle, and normalizes it.
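A minimal sketch of this normalization using OpenCV's perspective transform follows; OpenCV is a stand-in implementation choice, and the 64x128 output size is an assumption borrowed from the common HOG person window.

    import numpy as np
    import cv2

    def normalize_region(frame, quad, out_w=64, out_h=128):
        # quad: four corners of the deformation detection region 2100, ordered
        # top-left, top-right, bottom-right, bottom-left.
        src = np.float32(quad)
        dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
        M = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(frame, M, (out_w, out_h))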
Based on the feature quantities extracted by the feature quantity extraction portion 142, the person detection portion 145 judges whether a person is present, using, for example, a recognizer generated from learning samples. When it judges that a person is present, the person detection portion 145 outputs to the output processing portion 150, as the result, the person detection region on the monitoring image 2000 that has been transformed into a rectangle and normalized.
(Person recognition processing of the object identification device)
Below, the person recognition processing performed by the object identification device 1 according to the third embodiment is described in detail with reference to Figs. 12 and 13.
Fig. 12 is a flowchart showing the flow of the person recognition processing performed by the object identification device 1 according to the third embodiment of the present invention. Fig. 13 is an explanatory diagram of the flow of the person recognition processing according to the third embodiment of the present invention.
In the flowchart of Fig. 12, the processing of steps S101 to S105 of the object identification processing shown in Fig. 6 (the generation processing of the deformation detection region) is the same as in the first embodiment, so the same step numbers are given and their description is omitted. The processing from step S301 onward, performed by the object identification portion 140 (140c) of the object identification device 1 according to the third embodiment, is described below.
As shown in Fig. 12, after the deformation detection region generation portion 130 generates the deformation detection region 2100 (steps S103 to S105), the detection region image extraction portion 141 of the object identification portion 140 (140c) receives the image information 100 obtained by the camera device or the like from the image acquisition portion 110 and extracts from it the image information 100 of the deformation detection region 2100 generated by the deformation detection region generation portion 130 (step S301: see S301 in Fig. 13).
The region normalization portion 144 of the object identification portion 140 (140c) then adopts a perspective projection transformation to transform the image information 100 extracted by the detection region image extraction portion 141 into a shape suited to the recognizer or identifier (for example, a rectangle) and normalizes it (step S302: see S302 in Fig. 13).
Next, the feature quantity extraction portion 142 of the object identification portion 140 (140c) uses the normalized image information 100 to extract the feature quantities prescribed for the recognizer (step S303: see S303 in Fig. 13). For example, the feature quantity extraction portion 142 extracts from the normalized image information 100 the feature quantities used by the recognizer, such as HOG, Haar-like features, or color information.
Next, the person detection portion 145 of the object identification portion 140 (140c) performs identification processing using the feature quantities extracted by the feature quantity extraction portion 142 (step S304: see S304 in Fig. 13) and judges from the result whether the identification target is a person (step S305). For example, the person detection portion 145 distinguishes a person from other objects using a recognizer generated from learning samples.
If the person detection portion 145 judges in step S305 that a person is present (step S305 → Yes), the person detection portion 145 sends this information to the output processing portion 150.
The output processing portion 150 then outputs the identification result to the monitoring image 2000 displayed on the display device (not shown) through the output portion 50 (step S306). Here, the output processing portion 150 indicates that a person has been recognized by, for example, marking the deformation detection region 2100 of the monitoring image 2000 with a red frame. The output processing portion 150 may also be configured to display, in addition to the monitoring image 2000, the detection region (rectangular region) 1100 containing the identification target (person) surrounded by a frame on a picture simulating the monitored space 1000. The processing then proceeds to step S307.
On the other hand, if the person detection portion 145 judges in step S305 that a person is not present (step S305 → No), the processing proceeds to step S307.
In step S307, the object identification portion 140 (person detection portion 145) judges whether the processing of all position coordinates of the monitored area in the monitored space 1000 (the foot coordinates of the person that is the identification target) has finished.
If there remain unprocessed position coordinates of the monitored area (step S307 → No), the processing returns to step S104 and continues with the position coordinate of the next monitored area (see S104 in Fig. 13). If the processing of all position coordinates of the monitored area has finished (step S307 → Yes), the object identification processing ends.
In this way, by extracting feature quantities from image information 100 that has been normalized by transforming the image information 100 of the deformation detection region 2100 into a shape suited to the recognizer or identifier (for example, a rectangle) with the perspective projection transformation method, the person's feature quantities can be extracted properly, and the person recognition rate can therefore be improved.
The person recognition processing according to the third embodiment may also be configured so that the identification range determination portion 160 according to the second embodiment is provided in the control portion 10, and after step S104 shown in Fig. 12 the identification range determination portion 160 limits the range of position coordinates of the monitored area to be selected to the identification range that excludes regions where the identification target cannot exist.
" Fourth embodiment: person counting processing of the object identification device "
Below, the object identification device 1 according to the fourth embodiment of the present invention is described. The object identification device 1 according to the fourth embodiment is characterized in that the object identification device is used to perform person counting processing that counts the number of persons entering and leaving.
The overall structure of the object identification device 1 according to the fourth embodiment is the same as that of the object identification device 1 (1a) according to the first embodiment shown in Fig. 1. Structures having the same functions as in the object identification device 1 (1a) are given the same names and symbols, and their description is omitted.
Figure 14 is a functional block diagram showing the structure of the object identification portion 140 (140d) according to the fourth embodiment of the present invention. It differs from the object identification portion 140 (140a) according to the first embodiment shown in Figure 5 in that, in addition to the surveyed area image extraction portion 141, the characteristic quantity extraction portion 142, and the identification handling part 143 provided in the object identification portion 140 (140a), it also has a moving direction identification part 146 and an entry/exit judging part 147.
The identification handling part 143 according to the fourth embodiment of the present invention recognizes persons by a recognition method using a recognizer generated from learning samples. The identification handling part 143 may also be configured to perform processing to detect the person's head based on the characteristic quantities extracted by the characteristic quantity extraction portion 142. Besides the recognition method using a recognizer generated from learning samples, the identification handling part 143 may also perform person recognition using template matching, or the color and pattern of the head, face, skin, work clothes, and the like.
The moving direction identification part 146 judges the moving direction of the person recognized by the identification handling part 143. The moving direction identification part 146 can judge this moving direction from, for example, the direction of the recognized person's head (the orientation of the face). Alternatively, it may be configured to track the person using the particle filter described in Cheng Chan, 'Multiple object tracking with kernel particle filter', IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, vol. 1, pp. 566-573, and to judge the moving direction from the continuity of the track.
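As an illustration of the trajectory-continuity approach, the following minimal sketch estimates a moving direction from tracked head positions; the per-frame positions are assumed to be supplied by a tracker such as the particle filter cited above:

```python
import numpy as np

def moving_direction(track, window=5):
    # Estimate a person's moving direction from recent tracked positions.
    # track: list of (x, y) head positions, one per frame (assumed input).
    pts = np.asarray(track[-window:], dtype=np.float64)
    if len(pts) < 2:
        return np.zeros(2)  # too short a track to judge a direction
    # Mean displacement vector; its sign along the gateway axis can be
    # used to decide entering vs. leaving.
    return np.mean(np.diff(pts, axis=0), axis=0)
```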
The entry/exit judging part 147 judges, based on the moving direction identification part 146's judgment of the person's moving direction, whether the person is entering or leaving. The entry/exit judging part 147 then outputs its judgment result to the output processing part 150.
Figure 15 is an explanatory diagram of the person counting processing according to the fourth embodiment of the present invention. As shown in Figure 15, persons passing through a prescribed gateway 80 are counted on the monitoring picture 2000 captured by the camera system 84. As shown in Figure 15, two persons are leaving, represented by persons 81a and 81b, and four persons are entering, represented by persons 81c, 81d, 81e, and 81f. The deformation detection zone generation portion 130 of the object detector 1 according to the fourth embodiment uses the monitor objects information 210 and the video camera information 220 to generate deformation detection zones 2100a, 2100b, 2100c, 2100d, 2100e, and 2100f that match the shapes of the persons on the monitoring picture 2000. The object identification portion 140 (140d) extracts the characteristic quantities of these deformation detection zones 2100a to 2100f, recognizes the persons, and judges their moving directions. The object identification portion 140 (140d) then judges each person 81's entry or exit from the relation between the moving direction and the gateway 80: on entry, the entering count is incremented by 1; on exit, the leaving count is incremented by 1. This result is displayed as shown by symbol 82. In addition, the object identification portion 140 obtains, as a result of person recognition, the number of persons present on the whole monitoring picture 2000, and displays it as shown by symbol 82.
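This section does not restate the projection formulas, so the following is only a minimal pinhole-camera sketch, under assumed camera parameters, of how a person-sized box standing at a ground position in the monitored space 1000 could be projected into an image quadrilateral to serve as a deformation detection zone 2100:

```python
import numpy as np

def project_person_box(foot_xyz, K, R, t, height=1.7, width=0.5):
    # Project a person-sized upright box into the image (pinhole model).
    # foot_xyz: person's foot position in world coordinates (assumed known).
    # K: 3x3 intrinsics; R, t: world-to-camera rotation and translation.
    x, y, z = foot_xyz
    corners = np.array([  # a camera-facing rectangle, z-up world assumed
        [x - width / 2, y, z], [x + width / 2, y, z],
        [x + width / 2, y, z + height], [x - width / 2, y, z + height],
    ])
    cam = R @ corners.T + t.reshape(3, 1)  # world -> camera coordinates
    img = K @ cam                          # camera -> image plane
    return (img[:2] / img[2]).T            # perspective divide: 4 image points
```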
(Person counting processing by the object detector)
Next, the person counting processing performed by the object detector 1 according to the fourth embodiment of the present invention will be described in detail.
Figure 16 is a flowchart showing the flow of the person counting processing performed by the object detector 1 according to the fourth embodiment of the present invention.
In the person counting processing according to the fourth embodiment, the following case is described: the object identification portion 140 (140d) judges whether a person exists using the characteristic quantities of the head, and judges whether the person is entering or leaving from the direction of the head (the orientation of the face).
First, the processing of steps S101 to S105 of the object identification processing shown in Figure 6 (generation of the deformation detection zone) is the same as in the first embodiment, so the same step numbers are used and the description is omitted here. The processing performed by the object identification portion 140 (140d) of the object detector 1 according to the fourth embodiment is described below.
As shown in Figure 16, when the deformation detection zone generation portion 130 generates the deformation detection zone 2100 (step S105), the surveyed area image extraction portion 141 of the object identification portion 140 (140d) receives the image information 100 from a surveillance camera or the like via the image acquiring unit 110, and extracts from the received image information 100 the image information 100 of the deformation detection zone 2100 generated by the deformation detection zone generation portion 130 (step S401).
Next, the characteristic quantity extraction portion 142 of the object identification portion 140 (140d) extracts, from the image information 100 extracted by the surveyed area image extraction portion 141, the characteristic quantities prescribed for the recognizer (step S402).
Next, the identification handling part 143 of the object identification portion 140 (140d) performs identification processing using the characteristic quantities extracted by the characteristic quantity extraction portion 142 (step S403). Specifically, the identification handling part 143 recognizes the person using the extracted characteristic quantities and detects the person's head.
Then, in step S404, if the identification handling part 143 judges that the head of a person serving as the identifying object has been detected (step S404 → Yes), the identification handling part 143 sends this information to the moving direction identification part 146 and the processing proceeds to step S405. On the other hand, if the identification handling part 143 judges that no person's head exists (step S404 → No), the processing proceeds to step S409.
When a person's head is detected in step S404 (step S404 → Yes), the moving direction identification part 146 detects the direction of that head (the orientation of the face) (step S405). The entry/exit judging part 147 then judges the moving direction from the face orientation obtained as this detection result (step S406): when the face is judged to be frontal (step S406 → Yes), the entering count is incremented by 1 (step S407); when the face is judged to be oriented backward, that is, not frontal (step S406 → No), the leaving count is incremented by 1 (step S408).
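As a sketch of steps S404 to S408 only (the face-orientation classifier is assumed to be given, and its name is hypothetical):

```python
def count_entries_and_exits(heads, face_is_frontal):
    # Tally entering/leaving persons from detected heads (steps S404-S408).
    # heads: detected head regions for the current frame (assumed input).
    # face_is_frontal: hypothetical classifier returning True for a frontal face.
    entering = leaving = 0
    for head in heads:
        if face_is_frontal(head):  # step S406 -> Yes
            entering += 1          # step S407
        else:                      # step S406 -> No (face oriented backward)
            leaving += 1           # step S408
    return entering, leaving
```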
When step S407 or step S408 has been completed, the entry/exit judging part 147 proceeds to step S409.
In step S409, the entry/exit judging part 147 judges whether processing has finished for all position coordinates of the monitor areas in the monitored space 1000 (here, the foot coordinates of the person serving as the identifying object) (step S409). If an unprocessed monitor area remains (step S409 → No), the processing returns to step S104, where the next monitor area is selected and processing continues. If all monitor areas have been processed (step S409 → Yes), the processing proceeds to step S410.
In step S410, the output processing part 150 outputs the result, via the output portion 50, to the monitoring picture 2000 displayed on a display device (not shown). Here, the output processing part 150, for example, marks entering persons with a red frame and leaving persons with a blue frame in the deformation detection zones 2100 of the monitoring picture 2000 generated in step S105. In addition to the monitoring picture 2000, the output processing part 150 may also be configured to display, on a picture simulating the monitored space 1000, the surveyed areas (rectangular areas) 1100 containing the identifying objects (persons), for example marking an entering person's surveyed area 1100 with a red frame and a leaving person's surveyed area 1100 with a blue frame. After this, the output processing part 150 displays on the picture the entering count and the leaving count calculated in steps S407 and S408, and the processing ends.
In this way, by generating the deformation detection zones 2100 on the monitoring picture 2000, the image information 100 of zones that match the deformed outlines of the persons in the image can be extracted, so entry and exit of persons can be judged more accurately and the performance of person counting can be improved.
The structures of the second and third embodiments of the present invention may also be used in the person counting processing. With the identification range determination portion 160 of the second embodiment, the processing load of the object detector 1 can be reduced; with the object identification portion 140 (140c) of the third embodiment, the performance of person counting can be further improved.
Furthermore, if the monitoring environment information 230 of the second embodiment is used to limit the identification range to the gateway 80 shown in Figure 15 and the person counting processing is performed on that basis, the number of people passing through the gateway 80 can also be grasped.
" the 5th embodiment: personage's tracking process of object detector "
Next, the person tracking processing of the object detector 1 (1e) according to the fifth embodiment of the present invention will be described. The object detector 1 (1e) according to the fifth embodiment uses time-series image information 100 to track a prescribed person across frames, that is, to recognize the same person continuously.
Figure 17 is a functional block diagram showing the structure of the object detector 1 (1e) according to the fifth embodiment of the present invention. In the object detector 1 (1e) shown in Figure 17, in addition to the structure of the object detector 1 (1a) shown in Figure 1, the control part 10 further includes a tracking deformation detection zone generation portion 170 and a tracking object identification portion 180.
The tracking deformation detection zone generation portion 170 of the object detector 1 (1e) according to the fifth embodiment determines the position in the monitored space 1000 at which the person serving as the identifying object was recognized in the image information 100 of the previous frame (the person spatial position 92 of the previous frame, described later), and generates a plurality of trace detection zones 1101 at prescribed positions near this person spatial position 92 (see Figure 18(a)). The tracking deformation detection zone generation portion 170 then transforms each of the trace detection zones 1101 in the monitored space 1000 into a tracking deformation detection zone 2101 on the monitoring picture 2000. The tracking object identification portion 180 extracts characteristic quantities from the image information 100 of each tracking deformation detection zone 2101; when an extracted characteristic quantity matches the characteristic quantity of the person determined in the previous frame, it judges that the same person exists in the current frame, thereby detecting that this person is present at the position of that trace detection zone 1101 in the monitored space 1000. This processing is repeated until the last frame. Figure 18(b) shows that the deformation detection zone 2100 is determined on the monitoring picture 2000 of the initial frame (f1), and that from the next frame (f2) to the last frame (fn), the person 91 whose characteristic quantities are judged to match is treated as the same person and tracked by the person tracking processing.
Components having the same structure as in the object detector 1 (1a) of Figure 1 are given the same names and reference symbols below, and their description is omitted.
Returning to Figure 17, the tracking deformation detection zone generation portion 170 determines the spatial position of the person serving as the identifying object (the person spatial position 92 of the previous frame) from the position coordinates of the monitor area in the monitored space 1000 corresponding to the position on the monitoring picture 2000 at which the person was detected. Then, the tracking deformation detection zone generation portion 170 generates a plurality of trace detection zones 1101 at prescribed positions near the person spatial position 92 of the previous frame in the monitored space 1000. Then, using the video camera information 220, the tracking deformation detection zone generation portion 170 transforms each of the generated trace detection zones 1101 into a tracking deformation detection zone 2101 on the monitoring picture 2000.
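A minimal sketch of this candidate generation, assuming a square search grid on the ground plane around the previous position; the grid spacing and the project_person_box helper from the earlier sketch are illustrative assumptions, not details given in this embodiment:

```python
import numpy as np

def generate_trace_zones(prev_pos, K, R, t, step=0.3, radius=1.0):
    # Lay candidate foot positions on a grid around the person spatial
    # position 92 of the previous frame, then project each person-sized
    # box onto the monitoring picture (see project_person_box above).
    x0, y0, z0 = prev_pos
    zones = []
    for dx in np.arange(-radius, radius + step, step):
        for dy in np.arange(-radius, radius + step, step):
            zones.append(project_person_box((x0 + dx, y0 + dy, z0), K, R, t))
    return zones
```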
The tracking object identification portion 180 judges whether the extracted characteristic quantities match those of the person determined in the previous frame and, when it judges that the person is the same, determines the tracking deformation detection zone 2101 of the person serving as the identifying object on the monitoring picture 2000. The tracking object identification portion 180 then outputs this information to the output processing part 150.
Figure 19 is a functional block diagram showing the structure of the tracking object identification portion 180 according to the fifth embodiment of the present invention.
As shown in Figure 19, the tracking object identification portion 180 is configured to include a trace detection zone image extraction portion 181, a characteristic quantity extraction portion 142 (142e), and a person determination portion 182.
The trace detection zone image extraction portion 181 selects one zone from the plurality of tracking deformation detection zones 2101 generated by the tracking deformation detection zone generation portion 170, and extracts the image information 100 of the selected tracking deformation detection zone 2101 from the image information 100 of the current frame.
The characteristic quantity extraction portion 142 (142e) extracts characteristic quantities from the image information 100 of the tracking deformation detection zone 2101. For extracting these characteristic quantities, for example, color histograms, SIFT (Scale-Invariant Feature Transform), HOG, and similar characteristic quantities can be used.
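For illustration, a minimal sketch of two of the feature types named here, computed with OpenCV on a normalized zone patch; the patch size and bin counts are assumptions:

```python
import cv2
import numpy as np

def zone_features(patch):
    # Appearance features for a tracking deformation detection zone patch.
    # patch: BGR image of the (normalized) zone.
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    # Hue-saturation color histogram, normalized.
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    hist = cv2.normalize(hist, None).flatten()
    # HOG descriptor on a fixed-size resample of the patch.
    hog = cv2.HOGDescriptor((64, 128), (16, 16), (8, 8), (8, 8), 9)
    hog_vec = hog.compute(cv2.resize(patch, (64, 128))).flatten()
    return np.concatenate([hist, hog_vec])
```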
The person determination portion 182 compares the characteristic quantities of the person extracted in the previous frame with the characteristic quantities extracted in the current frame by the characteristic quantity extraction portion 142 (142e), and judges whether the two match. When it judges that the characteristic quantities of the previous frame and of the current frame match, the person determination portion 182 determines that it is the same person and outputs the information of that tracking deformation detection zone 2101 to the output processing part 150.
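A minimal sketch of such a comparison, assuming feature vectors like those above; the similarity measure and threshold are illustrative choices, since this embodiment only requires that the characteristic quantities match:

```python
import numpy as np

def same_person(prev_feat, cur_feat, thresh=0.25):
    # Judge whether two zone feature vectors describe the same person,
    # using cosine distance against an assumed threshold.
    cos = np.dot(prev_feat, cur_feat) / (
        np.linalg.norm(prev_feat) * np.linalg.norm(cur_feat) + 1e-9)
    return (1.0 - cos) < thresh
```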
(Person tracking processing of the object detector)
Next, the person tracking processing performed by the object detector 1 (1e) according to the fifth embodiment of the present invention will be described in detail.
Figures 20 and 21 are flowcharts showing the flow of the person tracking processing performed by the object detector 1 (1e) according to the fifth embodiment of the present invention.
First, as shown in Figure 20, the same processing as steps S101 to S105 shown in Figure 6 is performed to generate, in the monitored space 1000, the surveyed area (rectangular area) 1100 of the person to be tracked.
Next, the deformation detection zone generation portion 130 reads the image information 100 of the initial frame from the image information 100 acquired by the image acquiring unit 110 (step S501).
Next, the deformation detection zone generation portion 130 and the object identification portion 140 perform the object identification processing of the identifying object in steps S104 to S111 of Figure 6 (step S502).
After this, the identification handling part 143 of the object identification portion 140 judges whether a person serving as the identifying object has been detected (step S503). When the identification handling part 143 has not detected such a person (step S503 → No), the deformation detection zone generation portion 130 reads the image information 100 of the next frame (step S504) and the processing returns to step S502. When the identification handling part 143 has detected a person serving as the identifying object (step S503 → Yes), it sends this information to the tracking deformation detection zone generation portion 170.
Referring to Figure 21, in step S505, the tracking deformation detection zone generation portion 170 determines the spatial position of the person serving as the identifying object (the person spatial position 92 of the previous frame) from the position information of the monitor area, in the monitored space 1000, of the person detected by the identification handling part 143 in step S502 (step S505).
Next, the tracking deformation detection zone generation portion 170 reads the image information 100 of the next frame (step S506).
Next, the tracking deformation detection zone generation portion 170 generates a plurality of trace detection zones 1101 at prescribed positions near the person spatial position 92 of the previous frame in the monitored space 1000 (step S507). Then, using the video camera information 220, the tracking deformation detection zone generation portion 170 transforms each of the generated trace detection zones 1101 into a tracking deformation detection zone 2101 on the monitoring picture 2000 (step S508).
After this, the trace detection zone image extraction portion 181 of the tracking object identification portion 180 selects one zone from the plurality of tracking deformation detection zones 2101 generated by the tracking deformation detection zone generation portion 170 (step S509).
Next, the trace detection zone image extraction portion 181 extracts the image information 100 of the selected tracking deformation detection zone 2101 from the image information 100 of the current frame (step S510).
After this, the characteristic quantity extraction portion 142 (142e) of the tracking object identification portion 180 extracts characteristic quantities from the image information 100 of the tracking deformation detection zone 2101 extracted by the trace detection zone image extraction portion 181 (step S511).
Next, the person determination portion 182 compares the characteristic quantities extracted in the previous frame with the characteristic quantities of the current frame extracted in step S511, and judges whether the two match (step S512). If the two characteristic quantities are judged to match (step S512 → Yes), the processing proceeds to step S514; if they are judged not to match (step S512 → No), the processing proceeds to step S513.
In step S513, the person determination portion 182 judges whether all of the plurality of tracking deformation detection zones 2101 generated in step S508 have been processed. If an unprocessed tracking deformation detection zone 2101 remains (step S513 → No), the processing returns to step S509 and the next tracking deformation detection zone 2101 is selected. If all of the generated tracking deformation detection zones 2101 have been processed (step S513 → Yes), the processing proceeds to step S514.
In step S514, the output processing part 150 receives the processing result from the person determination portion 182 and outputs the recognition result to the monitoring picture 2000 displayed on a display device (not shown) (step S514). Here, when the characteristic quantities match those of the previous frame (step S512 → Yes), the output processing part 150 indicates that the same person as in the previous frame has been recognized by, for example, marking the tracking deformation detection zone 2101 of the monitoring picture 2000 with a red frame. In addition to the monitoring picture 2000, the output processing part 150 may also be configured to display, on a picture simulating the monitored space 1000, the trace detection zone 1101 containing the identifying object (person) surrounded by a frame. On the other hand, when there is no tracking deformation detection zone 2101 whose characteristic quantities match those of the previous frame (step S512 → No, step S513 → Yes), the output processing part 150 does not display a tracking deformation detection zone 2101 on the monitoring picture 2000.
After this, the tracking deformation detection zone generation portion 170 judges whether all frames of the image information 100 acquired by the image acquiring unit 110 have been processed (step S515). If unprocessed frames remain (step S515 → No), the processing returns to step S505 and continues. Here, if it was judged in step S512 that the characteristic quantities of the current frame match those of the previous frame, the tracking deformation detection zone generation portion 170 replaces the person spatial position 92 of the previous frame with the trace detection zone 1101, in the monitored space 1000, corresponding to the matching tracking deformation detection zone 2101 of the current frame on the monitoring picture 2000, and continues the processing from step S506. If there was no tracking deformation detection zone 2101 whose characteristic quantities match those of the previous frame (step S513 → Yes), the tracking deformation detection zone generation portion 170 continues the processing from step S506 without updating the person spatial position 92 determined in the previous frame. On the other hand, when all frames are judged to have been processed (step S515 → Yes), the tracking deformation detection zone generation portion 170 ends the processing.
As described above, a detection zone 1100 matched to the person's deformation is generated in each frame based on the monitored space information 200, so the characteristic quantities used for tracking can be extracted correctly, and person tracking performance can be improved.

Claims (6)

1. An object detector for judging whether an identifying object exists on a monitoring picture that displays image information captured of a monitored space, the object detector being characterized by comprising:
a storage part that stores monitor objects information and video camera information, the monitor objects information holding information on the size, in the monitored space, of the object serving as the identifying object, and the video camera information holding parameters representing the spatial position and characteristics of the camera system that captures the monitored space;
an image acquiring unit that acquires the image information;
a deformation detection zone generation portion that uses the monitor objects information to generate, in the monitored space, a surveyed area representing a zone containing the size of the identifying object, and transforms the generated surveyed area, based on the video camera information, into a deformation detection zone representing the zone of the identifying object on the monitoring picture;
an object identification portion that extracts characteristic quantities of the identifying object from the image information of the deformation detection zone of the monitoring picture, and judges whether the identifying object exists using the extracted characteristic quantities; and
an output processing part that outputs the deformation detection zone to a display device when it is judged that the identifying object exists.
2. The object detector according to claim 1, characterized in that
the storage part further stores monitoring environment information representing the arrangement of objects in the monitored space,
the object detector further comprises an identification range determination portion that, with reference to the monitoring environment information, excludes zones of the monitored space in which the identifying object cannot exist, so as to determine the zone of the monitoring picture in which the identifying object can exist, and
the deformation detection zone generation portion generates the surveyed area in the zone of the monitored space corresponding to the zone of the monitoring picture determined by the identification range determination portion.
3. The object detector according to claim 1 or 2, characterized in that
the object identification portion corrects the image information of the deformation detection zone by normalizing the deformation based on the parameters of the camera system, and extracts the characteristic quantities using the normalized image information.
4. The object detector according to claim 1 or 2, characterized in that
the object identification portion judges, using the extracted characteristic quantities, whether the identifying object is a person, and, when the identifying object is judged to be a person, judges whether the person is moving in a prescribed moving direction and counts the number of persons judged to be moving in the prescribed moving direction.
5. The object detector according to claim 1 or 2, characterized in that
the object detector comprises:
a tracking deformation detection zone generation portion that determines the surveyed area corresponding to the deformation detection zone of the monitoring picture in which the object identification portion judged, in the previous frame, that the identifying object exists, the surveyed area representing the person spatial position in the monitored space; generates, in a prescribed zone near the determined person spatial position of the previous frame, a plurality of trace detection zones containing the size of the identifying object in the same manner as the surveyed area; and transforms each of the generated trace detection zones, based on the video camera information, into a tracking deformation detection zone on the monitoring picture; and
a tracking object identification portion that extracts characteristic quantities of the identifying object from the image information of each of the tracking deformation detection zones of the monitoring picture, and judges that the identifying object is the same identifying object when an extracted characteristic quantity matches the characteristic quantity of the identifying object extracted in the previous frame in which the identifying object was judged to exist.
6. An object identification method of an object detector for judging whether an identifying object exists on a monitoring picture that displays image information captured of a monitored space, the object identification method being characterized in that
the object detector comprises a storage part that stores monitor objects information and video camera information, the monitor objects information holding information on the size, in the monitored space, of the object serving as the identifying object, and the video camera information holding parameters representing the spatial position and characteristics of the camera system that captures the monitored space, and
the object identification method executes the following steps:
a step of acquiring the image information;
a step of using the monitor objects information to generate, in the monitored space, a surveyed area representing a zone containing the size of the identifying object, and transforming the generated surveyed area, based on the video camera information, into a deformation detection zone representing the zone of the identifying object on the monitoring picture;
a step of extracting characteristic quantities of the identifying object from the image information of the deformation detection zone of the monitoring picture, and judging whether the identifying object exists using the extracted characteristic quantities; and
a step of outputting the deformation detection zone to a display device when it is judged that the identifying object exists.
CN201210034176.XA 2011-04-14 2012-02-15 Object identification device and object identification method Expired - Fee Related CN102737249B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011089729A JP5594672B2 (en) 2011-04-14 2011-04-14 Object recognition apparatus and object recognition method
JP2011-089729 2011-04-14

Publications (2)

Publication Number Publication Date
CN102737249A 2012-10-17
CN102737249B CN102737249B (en) 2015-07-08

Also Published As

Publication number Publication date
CN102737249B (en) 2015-07-08
JP2012221437A (en) 2012-11-12
JP5594672B2 (en) 2014-09-24
