CN102855459A - Method and system for detecting and verifying specific foreground objects - Google Patents

Method and system for detecting and verifying specific foreground objects

Info

Publication number
CN102855459A
CN102855459A · CN2011101815059A · CN201110181505A
Authority
CN
China
Prior art keywords
depth
depth map
background
current environment
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011101815059A
Other languages
Chinese (zh)
Other versions
CN102855459B (en)
Inventor
王鑫 (Wang Xin)
范圣印 (Fan Shengyin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN201110181505.9A priority Critical patent/CN102855459B/en
Publication of CN102855459A publication Critical patent/CN102855459A/en
Application granted granted Critical
Publication of CN102855459B publication Critical patent/CN102855459B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for detecting and verifying specific foreground objects in an image. The method comprises: building a background model according to dynamic changes in the distance of the pixel points in the image; obtaining a foreground region by subtracting the background model from the current image; segmenting the foreground region; and verifying whether each foreground object belongs to a specific class of object, such as a person or a chair. The method and system speed up detection, improve detection accuracy, and reduce the false detection rate.

Description

Method and system for detecting and verifying specific foreground objects
Technical field
The present invention relates to a method and system for detecting and verifying foreground objects in images, and more particularly to a method and system that uses captured depth images to detect and verify specific objects.
Background technology
In applications such as human-machine interaction, games, and intelligent systems, the participants usually have to be monitored by video, and an important part of video surveillance is detecting the participants in the video images, which requires image processing of the video frames containing them. Such image processing is usually performed on video captured by an ordinary camera. In practice, detection techniques based on ordinary-camera video face many problems: the detection rate is low, the false detection rate is high, and real-time operation is impossible. The causes include the complex behavior of the participants in the scene, poor scene illumination, and sudden lighting changes in the scene. Many solutions have been proposed to address these problems. For example, sensors can be attached to the participants in the scene, but this approach only works in certain specific scenes, and the participants' (users') experience is poor.
In addition, with the appearance of depth cameras, which capture distance information of the scene, people have tried to use distance information to detect participants, but algorithms based on depth cameras are still immature. Several detection techniques for people have been proposed.
U.S. patent application US20090210193A1 proposes a method of detecting and locating a person. The method uses the distance of objects in the output space of a TOF range image sensor, tracks the distance variation, and detects the region containing that variation; a segmentation module then segments the specific shape of the participant from the detected distance-variation region, thereby locating the person. This application clearly relies on concrete human features, such as the torso and legs, to segment the person's image from the distance-variation region: it detects objects from distance variation and verifies a person by torso, leg, and similar features.
In addition, in view of the three-dimensional features of common objects, European patent application EP1983484A1 proposes a method of detecting three-dimensional objects by capturing a three-dimensional body with an acquisition device and computing disparity component data. The method builds a model of the three-dimensional object in advance, computes a set of gray-scale maps from the two-dimensional projections of the three-dimensional model at different viewing angles, and defines this set of gray-scale maps as the object template; it then compares the object template with regions of the captured image, and when a region of the captured image has the highest correlation value with the object template, that region is considered to contain the three-dimensional object. Since the three-dimensional object model uses gray-scale maps, it cannot be normalized.
In view of the participants' facial features, U.S. patent application US20100158387A1 proposes a face detection method. An image processing module uses multiple images to compute distance information and segments the foreground and background regions according to that information; a face detection module then scales the foreground region according to distance and detects faces in the scaled image. However, this application can only detect faces; it cannot detect the other parts of a person or other objects appearing in the scene.
Summary of the invention
To solve the above problems in the prior art, the present invention proposes a method and system for detecting and verifying specific foreground objects.
Specifically, the invention provides a method of detecting and verifying specific foreground objects, comprising: obtaining depth information of the current environment with a depth camera, and creating a depth map of the current environment based on the obtained depth information; comparing the depth of each pixel of the created depth map of the current environment with that of an initial background depth map to update the background depth map model; subtracting the updated background depth map model from a newly captured depth map of the current environment, thereby obtaining a depth map of the foreground region of the current environment; numbering the one or more connected components in the resulting foreground-region depth map and, if there are multiple connected components, separating them as a plurality of candidate foreground objects; and verifying, by template matching, whether each segmented foreground object belongs to the specific object class of the matched template.
According to the method of the present invention, comparing the depth of each pixel of the created depth map of the current environment with that of the initial background depth map in order to update the background depth map model comprises: applying median filtering to several consecutive frames for noise reduction.
According to the method of the present invention, comparing the depth of each pixel of the created depth map of the current environment with that of the initial background depth map in order to update the background depth map model comprises the following repeatedly executed steps: comparing the depth of each pixel of the created depth map of the current environment with that of the background depth map that existed before this depth map was created, and, when the current depth of a pixel of the current-environment depth map is found to be greater than the depth of the corresponding pixel in the background depth map, updating the depth of that corresponding pixel in the background depth map to the current depth value; and repeating the above step until the number of pixels updated within a predetermined time threshold is less than a predetermined count threshold.
According to the method of the present invention, the method further comprises: establishing the specific-object template before verifying whether the segmented foreground objects belong to the specific object class of the matched template.
According to the method of the present invention, the specific-object template is a depth map of the specific object; it has a fixed size, and its depth value is the fixed distance from that kind of specific object to a designated camera.
According to the method of the present invention, the method further comprises: before the step of verifying by template matching whether the segmented foreground objects belong to the specific object class of the matched template, resizing the depth map of each foreground object to be verified according to the distance information contained in its depth map.
According to the method of the present invention, the step of resizing the depth map of a foreground object to be verified according to the distance information contained in its depth map comprises: computing the mean of the depth values of the pixels of the foreground-object depth map; computing the scaling ratio of the foreground-object depth map from the fixed depth value specified in the specific-object template and the computed mean depth; and resizing the foreground-object depth map according to the computed scaling ratio.
According to the method of the present invention, the step of verifying by template matching whether the segmented foreground objects belong to the specific object class of the matched template is performed by template matching between the specific-object template and the resized foreground-object depth map using the normalized correlation coefficient (NCC) algorithm.
According to another aspect of the present invention, a system for detecting and verifying specific foreground objects is provided, comprising: a depth map acquisition device that obtains depth information of the current environment and creates a depth map of the current environment based on the obtained depth information; a background modeling unit that compares the depth of each pixel of the created depth map of the current environment with that of an initial background depth map to update the background depth map model; a background subtraction unit that subtracts the updated background depth map model from a newly captured depth map of the current environment, thereby obtaining a depth map of the foreground region of the current environment; a foreground object segmentation unit that numbers the one or more connected components in the resulting foreground-region depth map and, if there are multiple connected components, separates them as a plurality of candidate foreground objects; and a foreground object verification unit that verifies, by template matching, whether each segmented foreground object belongs to the specific object class of the matched template.
The present invention uses only depth maps, and the detection feature it employs is the contour of the specific object, which makes it more robust.
Description of the drawings
Figure 1 is a schematic diagram of a scene in which the foreground object detection and verification method and system of the present invention are used.
Figure 2 is a flowchart of the foreground object detection and verification method according to the present invention.
Figure 3 is a flowchart of the background modeling step according to the present invention.
Figure 4 is a flowchart of the background subtraction step and the foreground object segmentation step according to the present invention.
Figure 5 is a schematic diagram explaining the background subtraction step and the foreground object segmentation step according to the present invention.
Figure 6 is a flowchart of the foreground object verification step according to the present invention.
Figure 7-1 is a flowchart of the depth map resizing step according to the present invention.
Figure 7-2 is a schematic diagram of the depth map resizing step according to the present invention.
Figure 8-1 is a schematic diagram of the head-shoulder template according to the present invention.
Figure 8-2 is a schematic diagram of a gray-scale template different from the head-shoulder template according to the present invention.
Figure 9 is a flowchart of the step of matching the object shape with the template according to the present invention.
Figure 10 is a schematic visualization of the NCC matching result according to the present invention.
Figure 11 is a block diagram of the system according to the present invention.
Embodiments
Hereinafter, specific embodiments of the present invention are described in detail with reference to the accompanying drawings.
Figure 1 is a schematic diagram of a scene in which the foreground object detection and verification method and system of the present invention are used. The scene is captured by a stereo camera while the present invention processes the captured data. The output can be shown on a display device.
Figure 2 is a flowchart of the foreground object detection and verification method according to the present invention. First, at step 11, the depth map acquisition unit U10 obtains depth information of the current environment and creates a depth map of the current environment based on the obtained depth information. Subsequently, at step 12, the background modeling unit U13 compares the depth of each pixel of the created depth map of the current environment with that of the initial background depth map and updates the background depth map model. Then, at step 13, the background subtraction unit U14 subtracts the updated background depth map model from a depth map of the current environment newly captured by the depth camera, thereby obtaining a depth map of the foreground region of the current environment. Then, at step 14, the foreground object segmentation unit U15 numbers the one or more connected components in the resulting foreground-region depth map and, if there are multiple connected components, separates them as a plurality of candidate foreground objects. Finally, the foreground object verification unit U16 verifies, by template matching, whether each segmented foreground object belongs to the specific object class of the matched template, and at step 15 the verified objects are output.
The depth map can be obtained with a stereo camera, such as one from PrimeSense. A depth map is produced by a depth camera that captures the environment in front of its lens and computes the distance of each pixel in the captured environment from the camera, recording the distance between the object at each pixel and the depth camera, for example as a 16-bit value; the 16-bit distance values attached to the pixels then form an image representing the distance between each pixel and the camera. A depth map 10 is an image in which the meaning of each pixel value is the distance of that position from the camera. The absolute distance values cannot be visualized directly, so the data must be processed to satisfy the value constraints of a digital image; hence the name depth map. The depth map 10 referred to in the following description means the original distance values rather than the processed, visualizable pixel values.
Figure 3 is a flowchart of the background modeling step according to the present invention. First, at step S110, the initial model is input. For the initial background model, the depth map of the first frame (or the mean of the first several frames) can be used; the background model is then continuously and dynamically updated. For the present invention to be usable in any scene, the background model must be updated in real time. To this end, at step S111 the depth camera continuously acquires N frames of depth maps of the scene in which the invention is applied. Since the depth of each frame may be unstable, the N acquired depth frames are noise-reduced at step S112. For instance, the noise-reduction method is: acquire N depth frames, and for the N depth values of the pixel at the same position in these N frames, apply a noise-reduction function. The noise-reduction function can be the median filter, whose expression is:
$$\bar{d}(x,y) = \operatorname*{median}_{1 \le i \le N} \big( d_i(x,y) \big) \qquad (1)$$
where $\bar{d}(x,y)$ denotes the filtered depth value at position $(x,y)$, $d_i(x,y)$ is the depth value at $(x,y)$ in frame $i$, and $N$ denotes the number of image frames.
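By way of illustration, a minimal Python/numpy sketch of the noise reduction of expression (1) might look as follows; the frame format (a list of float depth maps in meters) is an assumption, not part of the original disclosure:

```python
import numpy as np

def temporal_median(frames):
    """Noise reduction per expression (1): for each pixel position,
    take the median of its N depth values across the N frames."""
    stack = np.stack(frames, axis=0)   # assumed shape (N, H, W), depths in meters
    return np.median(stack, axis=0)    # filtered depth map, shape (H, W)
```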
After the above processing has been applied to the depth values at each position of the depth map, the depth map containing the noise-reduced depth values is output at step S113. Afterwards, at step S115, the initial background model is updated with the noise-reduced depth map. The update process is as follows: the background model that existed before the update is input at step S114, and then the depth values of the pre-update background-model depth map are compared with those of the noise-reduced depth map. If, for some pixel at corresponding positions in the two depth maps, the depth value of the noise-reduced depth map is greater than that of the pre-update background-model depth map, this shows that the pixel at this position in the noise-reduced depth map is farther from the depth camera; that is, it was occluded by some foreground object when the pre-update background model was formed, and that foreground object had moved away when the above N depth frames were captured. Therefore, this pixel of the noise-reduced depth map should become part of the background, and its depth value is used to update the depth value of the corresponding pixel in the background-model depth map. The concrete update expression is:
$$d_B(x,y) = \begin{cases} \bar{d}(x,y), & \bar{d}(x,y) > d_B(x,y) \\ d_B(x,y), & \text{otherwise} \end{cases} \qquad (2)$$
where $\bar{d}(x,y)$ denotes the depth value in the noise-reduced depth map and $d_B(x,y)$ denotes the depth value in the background model.
A scene in which the present invention is used is not always changing; on the contrary, after an initial period of change it usually settles into a steady state. For example, in a meeting room, once the people have settled down the scene rarely changes. To reduce the computation spent on updates, the present invention can further specify when updating stops. To this end, the background modeling process of the present invention also includes step S116. At step S116, a termination condition is used to stop the background model update process; the termination condition is defined as: within a specified time $T$, the number of pixels updated in the background-model update step S115 is less than a given threshold $Count_{Th}$.
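As an illustration only, a sketch of the update rule of expression (2) together with the termination check of step S116 follows; the threshold value COUNT_TH and the calling pattern are assumptions:

```python
import numpy as np

COUNT_TH = 50  # assumed value for the threshold Count_Th of step S116

def update_background(d_bg, d_filtered):
    """Update rule of expression (2): a pixel that is now farther away
    than the stored background value was previously occluded by a
    foreground object, so the farther depth replaces the background."""
    farther = d_filtered > d_bg
    d_bg = np.where(farther, d_filtered, d_bg)
    return d_bg, int(np.count_nonzero(farther))  # new model, updated-pixel count

# Step S116 (sketch): stop updating once fewer than COUNT_TH pixels
# change within the specified time T.
# d_bg, n_updated = update_background(d_bg, temporal_median(frames))
# stop = n_updated < COUNT_TH
```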
In this way the background-model depth map is kept dynamic, so that the present invention can be used in real time without being affected by environmental changes.
Figure 4 is a flowchart of the background subtraction step and the foreground object segmentation step according to the present invention. After (or while) the background model is updated, the depth map of the current background model is subtracted from a newly acquired depth map, thereby obtaining a depth map of the possible foreground objects in the newly acquired depth map. The detailed procedure is as follows:
First, at step S120, a depth frame is acquired from the depth camera. Then, at step S121, the input background model (which may be the initial one or a freshly updated one) is received, its depth map is subtracted from the acquired depth map, and the foreground depth map 122 is output.
The concrete subtraction strategy is given by expression (3):
$$d_F(x,y) = \begin{cases} 0, & |d(x,y) - d_B(x,y)| < Th_{Sub} \\ d(x,y), & \text{otherwise} \end{cases} \qquad (3)$$
where $d_B$ denotes the pixel value of the background-model depth map, $d$ denotes the pixel value of the input depth map, $d_F$ denotes the pixel value of the foreground depth map, and $Th_{Sub}$ is a predefined threshold.
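A minimal sketch of the subtraction of expression (3), assuming float depth maps in meters and an illustrative threshold value:

```python
import numpy as np

TH_SUB = 0.15  # assumed value of the threshold Th_Sub, in meters

def subtract_background(d, d_bg, th_sub=TH_SUB):
    """Expression (3): pixels whose depth is within th_sub of the
    background model are treated as background (set to 0); the rest
    keep their depth and form the foreground depth map d_F."""
    return np.where(np.abs(d - d_bg) < th_sub, 0.0, d)
```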
Then, at step S130, the foreground depth map is segmented into a plurality of foreground objects based on its depth connected components, and the result is output as the foreground object set 131. The segmentation algorithm used is depth-based connected component analysis (DCCA). The concrete algorithm can be found in U.S. patent application US20090183125A1 filed by PrimeSense, the content of which is hereby incorporated by reference.
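The DCCA algorithm itself is specified in US20090183125A1; the following is only a simplified stand-in that conveys the idea of depth-based connected components: 4-connected regions are grown over non-zero foreground pixels, joining neighbours whose depths differ by less than an assumed continuity threshold:

```python
import numpy as np
from collections import deque

def depth_connected_components(d_f, max_step=0.1):
    """Simplified depth-aware connected component labelling: flood-fill
    over non-zero pixels of the foreground depth map d_f, merging
    4-neighbours whose depth difference is below max_step (meters).
    Returns a label map and the number of components found."""
    h, w = d_f.shape
    labels = np.zeros((h, w), dtype=np.int32)
    count = 0
    for sy in range(h):
        for sx in range(w):
            if d_f[sy, sx] == 0 or labels[sy, sx] != 0:
                continue
            count += 1
            labels[sy, sx] = count
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and d_f[ny, nx] != 0 and labels[ny, nx] == 0
                            and abs(d_f[ny, nx] - d_f[y, x]) < max_step):
                        labels[ny, nx] = count
                        queue.append((ny, nx))
    return labels, count
```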
Figure 5 is a schematic diagram explaining the background subtraction step and the foreground object segmentation step according to the present invention. The data format of the objects in the foreground object set 131 is the same as that of the depth map, and the final result in the schematic diagram lists only some of the foreground objects.
Figure 6 is a flowchart of the foreground object verification step according to the present invention. The present invention verifies foreground objects by template matching. The detailed procedure is as follows.
First, at step S141, any foreground object is selected from the input foreground object set 131. The mean depth of the selected foreground object is then computed at step S142 by the following expression:
$$d_{avg} = \sum_{(x,y) \in Obj} d(x,y) \Big/ size \qquad (4)$$
where $Obj$ denotes the foreground object, $d$ denotes the depth value, and $size$ denotes the area of the foreground object.
After the mean depth value of the foreground object is obtained, the size of the selected foreground object's depth map is changed at step S143, based on the computed mean depth value. Figure 7-1 is a flowchart of the depth map resizing step according to the present invention.
As shown in Figure 7-1, the depth map resizing step S143 comprises: a depth value recomputation step S1430, a scaling ratio computation step S1431, and a foreground object scaling step S1432.
At step S1430 place, recomputate depth value.This step is the consistance of depth value before and after convergent-divergent in order to guarantee depth map.Be different from gray-scale map and carrying out convergent-divergent and do not need to change the gray-scale value of its pixel, need the depth value of foreground object is recomputated.Reason is the value representation range information of depth map pixel, and is relevant with its size.Expression formula (5) provides the method that recomputates:
$$d'(x,y) = D_{Norm} \cdot d(x,y) / d_{avg} \qquad (5)$$
where $d$ denotes the depth value of the foreground object, $d_{avg}$ is computed by expression (4), and $D_{Norm}$ is the normalization parameter, meaning that all foreground objects are scaled to this distance. In the following description, $D_{Norm} = 3$ is assumed as an example.
Then, at step S1431, the scaling ratio is computed. The scaling ratio is given by expression (6):
$$ratio = d_{avg} / D_{Norm} \qquad (6)$$
where $d_{avg}$ is the mean depth value computed by expression (4) and $D_{Norm}$ is the normalization parameter.
Finally, at step S1432, the selected foreground object is scaled based on the computed scaling ratio. The scaling parameters are computed by expression (7):
$$h = H / ratio, \qquad w = W / ratio \qquad (7)$$
where $H$ is the height of the original foreground object image and $h$ is its height after scaling; $W$ is the width of the original foreground object image and $w$ is its width after scaling; and $ratio$ is the scaling ratio computed by expression (6). Template matching is performed on the foreground object after the scaling step S1432; since the template size is fixed, this reduces the computation and time needed for matching.
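Putting expressions (4)-(7) together, a sketch of the normalization might look as follows; the use of OpenCV's resize with nearest-neighbour interpolation (to avoid blending depths across the silhouette edge) is an implementation assumption:

```python
import cv2
import numpy as np

D_NORM = 3.0  # normalization distance D_Norm in meters, as in the text

def normalize_object(obj_depth):
    """Rescale a foreground-object depth map as if seen from D_NORM
    meters: expression (4) gives the mean depth, (5) the recomputed
    depth values, (6) the scaling ratio, and (7) the new size."""
    mask = obj_depth > 0
    d_avg = obj_depth[mask].mean()                            # (4)
    d_new = np.where(mask, D_NORM * obj_depth / d_avg, 0.0)   # (5)
    ratio = d_avg / D_NORM                                    # (6)
    H, W = obj_depth.shape
    h, w = int(round(H / ratio)), int(round(W / ratio))       # (7)
    resized = cv2.resize(d_new.astype(np.float32), (w, h),
                         interpolation=cv2.INTER_NEAREST)
    return resized, ratio
```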
Figure 7-2 is a schematic diagram of the depth map resizing step according to the present invention.
Returning to Figure 6, at step S144 the existing shape template is received as input, and the shape of the scaled foreground object is matched against the template of a certain type.
A general template matching method finds the best-matching position in a given image by continually changing the template size. But because actual objects appear at different sizes in the image depending on their position, such template matching needs a large amount of computation time; specifically, all possible template sizes must be tried to complete one match. The present invention speeds up template matching by normalizing the foreground object and the template to the same scale according to depth values. Figure 8-1 is a schematic diagram of an example head-shoulder template according to the present invention. Figure 8-2 is a schematic diagram of a gray-scale template different from the head-shoulder template of Figure 8-1. We observe that the human head and shoulders have a stable and robust "Ω" shape, and this stability can serve as the feature for template matching. In the present invention the template is the normalized three-dimensional template shown in Figure 8-1, which has two main characteristics. First, it is a three-dimensional template: the template is a depth map, and its pixel values represent distance values. Second, it is a normalized template: the template size is related to the $D_{Norm}$ of expression (6) and is fixed. Concretely, the template is obtained by capturing the object when its distance from the camera is $D_{Norm}$ (for example, 3 meters). Compared with the gray-scale template shown in Figure 8-2, the head-shoulder template of Figure 8-1 has the advantage of improving matching accuracy and reducing noise: even if the outward appearance of some object has a similar "Ω" shape, if its three-dimensional surface does not have an ellipsoidal shape, the normalized three-dimensional head-shoulder template can tell that the object is not a person's head and shoulders, whereas the gray-scale template of Figure 8-2 cannot achieve this effect.
Figure 9 is a flowchart of the step of matching the object shape with the template according to the present invention. This matching and verification procedure comprises: step S1440, performing the NCC template matching; step S1441, thresholding the maximum matching value; and step S1442, computing the actual position of the head and shoulders.
At step S1440, the NCC template matching is performed, that is, the normalized correlation coefficient (NCC) is used for template matching. The correlation coefficient is in essence a process similar to convolution. Expression (8) is the formula for the correlation coefficient:
$$R_{ccoeff}(x,y) = \Big[ \sum_{x',y'} T(x',y') \cdot I(x+x',\, y+y') \Big]^2 \qquad (8)$$
where $T$ denotes the template image and $I$ denotes the target image, i.e., the foreground object in the present invention.
Expression formula (9) provides normalization coefficient; Expression formula (10) provides the NCC calculation expression:
$$Z(x,y) = \sum_{x',y'} T(x',y')^2 \cdot \sum_{x',y'} I(x+x',\, y+y')^2 \qquad (9)$$

$$R_{ccoeff\_normed}(x,y) = \frac{R_{ccoeff}(x,y)}{Z(x,y)} \qquad (10)$$
where $Z(x,y)$ is computed by expression (9).
Supposing the width and height of the template are $w$ and $h$ respectively, and the width and height of the foreground object are $W$ and $H$ respectively, the NCC result is a two-dimensional array of size $(W-w+1) \times (H-h+1)$. The NCC value represents the matching degree; results range from 0 to 1, where 1 denotes a perfect match.
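A direct (unoptimized) sketch of expressions (8)-(10); a perfect match yields 1 because the squared correlation then equals the product of the energies:

```python
import numpy as np

def ncc_match(template, image):
    """Slide the template over the image and evaluate expressions
    (8)-(10) at every offset; returns an (H-h+1) x (W-w+1) array of
    matching values in [0, 1]."""
    h, w = template.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    t_energy = np.sum(template ** 2)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = image[y:y + h, x:x + w]
            r = np.sum(template * patch) ** 2    # expression (8)
            z = t_energy * np.sum(patch ** 2)    # expression (9)
            out[y, x] = r / z if z > 0 else 0.0  # expression (10)
    return out
```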
Figure 10 is a schematic visualization of the NCC matching result according to the present invention.
Then, at step S1441, the maximum matching value is thresholded: the maximum matching value $V_{Max}(x_0, y_0)$ is found in the NCC result, and whether it is a head-shoulder is judged by the following strategy:
$$Location = \begin{cases} (x_0, y_0), & V_{Max} > Match_{th} \\ \mathrm{NULL}, & \text{otherwise} \end{cases} \qquad (11)$$
where $Match_{th}$ is a predefined matching threshold.
Subsequently, at step S1442, the actual position of the head and shoulders is computed, i.e., its position in the input image is computed from the matched head-shoulder result.
First, according to the NCC result and expression (11), the position of the head-shoulder region in the scaled foreground object is a rectangular region, denoted $RECT(x_0, y_0, w, h)$, where $x_0, y_0$ are the coordinates at which the maximum matching value was obtained, and $w$ and $h$ are the width and height of the template. The position of the head-shoulder region in the original input image is then $RECT(x_0 \cdot ratio + \Delta x,\; y_0 \cdot ratio + \Delta y,\; w \cdot ratio,\; h \cdot ratio)$, where $ratio$ is the scaling ratio computed by expression (6), and $\Delta x$ and $\Delta y$ give the position of the foreground object relative to the whole depth map.
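A sketch of the thresholding of expression (11) and the mapping back of step S1442; the threshold value is an assumption:

```python
import numpy as np

MATCH_TH = 0.8  # assumed value of the matching threshold Match_th

def locate_head_shoulder(ncc, ratio, dx, dy, w, h, match_th=MATCH_TH):
    """Find the maximum NCC response V_Max, reject it if it does not
    exceed match_th (the NULL case of expression (11)), and otherwise
    map RECT(x0, y0, w, h) back into the original input image."""
    y0, x0 = np.unravel_index(np.argmax(ncc), ncc.shape)
    if ncc[y0, x0] <= match_th:
        return None
    return (int(x0 * ratio + dx), int(y0 * ratio + dy),
            int(w * ratio), int(h * ratio))
```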
Finally, as shown in Figure 2, the foreground objects whose head-shoulder regions have been verified are output at step 15 for subsequent processing.
Although the present invention has been described using only the human head and shoulders as the specific object, those skilled in the art will understand that the invention can be applied to any specific object, for example various animals such as tigers and lions, or vehicles; the only difference is that a different matching template must be built.
Figure 11 is a block diagram of the system according to the present invention. As shown in the figure, the system according to the present invention comprises: a central processing unit U11, a storage device U12, a display device U17, and a depth map acquisition device U10 that obtains depth information of the current environment and creates a depth map of the current environment based on the obtained depth information; a background modeling unit U13 that compares the depth of each pixel of the created depth map of the current environment with that of the initial background depth map and updates the background depth map model; a background subtraction unit U14 that subtracts the updated background depth map model from a depth map of the current environment newly captured by the depth camera, thereby obtaining a depth map of the foreground region of the current environment; a foreground object segmentation unit U15 that numbers the one or more connected components in the resulting foreground-region depth map and, if there are multiple connected components, separates them as a plurality of candidate foreground objects; and a foreground object verification unit U16 that verifies, by template matching, whether each segmented foreground object belongs to the specific object class of the matched template.
The method of the present invention can be executed on one computer (processor), or it can be executed by multiple computers in a distributed manner. In addition, a program can be transferred to a remote computer and executed there.
Those skilled in the art will understand that various modifications, combinations, sub-combinations, and alternatives may occur depending on design requirements and other factors, insofar as they fall within the scope of the appended claims or their equivalents.

Claims (9)

1. A method of detecting and verifying specific foreground objects, comprising:
obtaining depth information of the current environment with a depth camera, and creating a depth map of the current environment based on the obtained depth information;
comparing the depth of each pixel of the created depth map of the current environment with that of an initial background depth map to update the background depth map model;
subtracting the updated background depth map model from a newly captured depth map of the current environment, thereby obtaining a depth map of the foreground region of the current environment;
numbering the one or more connected components in the resulting foreground-region depth map and, if there are multiple connected components, separating them as a plurality of candidate foreground objects; and
verifying, by template matching, whether each segmented foreground object belongs to the specific object class of the matched template.
2. The method of claim 1, wherein comparing the depth of each pixel of the created depth map of the current environment with that of the initial background depth map in order to update the background depth map model comprises:
applying median filtering to several consecutive frames for noise reduction.
3. The method of claim 1, wherein comparing the depth of each pixel of the created depth map of the current environment with that of the initial background depth map in order to update the background depth map model comprises the following repeatedly executed steps:
comparing the depth of each pixel of the created depth map of the current environment with that of the background depth map that existed before this depth map was created, and, when the current depth of a pixel of the current-environment depth map is found to be greater than the depth of the corresponding pixel in the background depth map, updating the depth of that corresponding pixel in the background depth map to the current depth value of the pixel of the current-environment depth map; and
repeating the above step until the number of pixels updated within a predetermined time threshold is less than a predetermined count threshold.
4. The method of claim 1, further comprising:
establishing the specific-object template before verifying whether the segmented foreground objects belong to the specific object class of the matched template.
5. The method of claim 4, wherein the specific-object template is a depth map of the specific object, having a fixed size, and its depth value is the fixed distance from that kind of specific object to a designated camera.
6. The method of claim 5, further comprising: before the step of verifying by template matching whether the segmented foreground objects belong to the specific object class of the matched template, resizing the depth map of each foreground object to be verified according to the distance information contained in its depth map.
7. The method of claim 6, wherein the step of resizing the depth map of a foreground object to be verified according to the distance information contained in its depth map comprises:
computing the mean of the depth values of the pixels of the foreground-object depth map;
computing the scaling ratio of the foreground-object depth map from the fixed depth value specified in the specific-object template and the computed mean depth of the foreground-object depth map; and
resizing the foreground-object depth map according to the computed scaling ratio.
8. The method of claim 7, wherein the step of verifying by template matching whether the segmented foreground objects belong to the specific object class of the matched template is performed by template matching between the specific-object template and the resized foreground-object depth map using the normalized correlation coefficient (NCC) algorithm.
9. A system for detecting and verifying specific foreground objects, comprising:
a depth map acquisition device that obtains depth information of the current environment and creates a depth map of the current environment based on the obtained depth information;
a background modeling unit that compares the depth of each pixel of the created depth map of the current environment with that of an initial background depth map to update the background depth map model;
a background subtraction unit that subtracts the updated background depth map model from a newly captured depth map of the current environment, thereby obtaining a depth map of the foreground region of the current environment;
a foreground object segmentation unit that numbers the one or more connected components in the resulting foreground-region depth map and, if there are multiple connected components, separates them as a plurality of candidate foreground objects; and
a foreground object verification unit that verifies, by template matching, whether each segmented foreground object belongs to the specific object class of the matched template.
CN201110181505.9A 2011-06-30 2011-06-30 Method and system for detecting and verifying specific foreground objects Active CN102855459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110181505.9A CN102855459B (en) 2011-06-30 2011-06-30 For the method and system of the detection validation of particular prospect object


Publications (2)

Publication Number Publication Date
CN102855459A true CN102855459A (en) 2013-01-02
CN102855459B CN102855459B (en) 2015-11-25

Family

ID=47402038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110181505.9A Active CN102855459B (en) 2011-06-30 2011-06-30 For the method and system of the detection validation of particular prospect object

Country Status (1)

Country Link
CN (1) CN102855459B (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705552B (en) * 2019-10-11 2022-05-06 沈阳民航东北凯亚有限公司 Luggage tray identification method and device


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101657825A (en) * 2006-05-11 2010-02-24 普莱姆传感有限公司 Modeling of humanoid forms from depth maps
CN101236599A (en) * 2007-12-29 2008-08-06 浙江工业大学 Human face recognition detection device based on multi-video camera information integration
US20110081044A1 (en) * 2009-10-07 2011-04-07 Microsoft Corporation Systems And Methods For Removing A Background Of An Image

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105225217A (en) * 2014-06-23 2016-01-06 株式会社理光 Based on background model update method and the system of the degree of depth
CN105225217B (en) * 2014-06-23 2018-04-10 株式会社理光 Background model update method and system based on depth
CN105447895A (en) * 2014-09-22 2016-03-30 酷派软件技术(深圳)有限公司 Hierarchical picture pasting method, device and terminal equipment
CN105678696A (en) * 2014-11-20 2016-06-15 联想(北京)有限公司 Image acquisition method and electronic equipment
CN105744151A (en) * 2014-12-24 2016-07-06 三星电子株式会社 Method Of Face Detection, Method Of Image Processing, Face Detection Device And Electronic System Including The Same
CN105744151B (en) * 2014-12-24 2020-09-04 三星电子株式会社 Face detection method, face detection device, and image pickup apparatus
CN107111764B (en) * 2015-01-16 2021-07-16 高通股份有限公司 Events triggered by the depth of an object in the field of view of an imaging device
CN107111764A (en) * 2015-01-16 2017-08-29 高通股份有限公司 By the event of depth triggering of the object in the visual field of imaging device
CN105760846B (en) * 2016-03-01 2019-02-15 北京正安维视科技股份有限公司 Target detection and localization method and system based on depth data
CN105760846A (en) * 2016-03-01 2016-07-13 北京正安维视科技股份有限公司 Object detection and location method and system based on depth data
WO2018120038A1 (en) * 2016-12-30 2018-07-05 深圳前海达闼云端智能科技有限公司 Method and device for target detection
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation
CN112020725A (en) * 2018-05-03 2020-12-01 罗伯特·博世有限公司 Method and apparatus for determining depth information image from input image
CN109165339A (en) * 2018-07-12 2019-01-08 西安艾润物联网技术服务有限责任公司 Service push method and Related product
CN113536129A (en) * 2018-07-12 2021-10-22 西安艾润物联网技术服务有限责任公司 Service push method and related product
CN109658433A (en) * 2018-12-05 2019-04-19 青岛小鸟看看科技有限公司 Image background modeling and foreground extracting method, device and electronic equipment
CN110135382A (en) * 2019-05-22 2019-08-16 北京华捷艾米科技有限公司 Human body detection method and device
CN110136174A (en) * 2019-05-22 2019-08-16 北京华捷艾米科技有限公司 Target object tracking method and device
CN110136174B (en) * 2019-05-22 2021-06-22 北京华捷艾米科技有限公司 Target object tracking method and device
CN110135382B (en) * 2019-05-22 2021-07-27 北京华捷艾米科技有限公司 Human body detection method and device

Also Published As

Publication number Publication date
CN102855459B (en) 2015-11-25


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant