CN102194128A - Method and device for detecting object based on two-value depth difference - Google Patents

Method and device for detecting object based on two-value depth difference

Info

Publication number
CN102194128A
Authority
CN
China
Prior art keywords
depth difference
value
depth
difference
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110126220
Other languages
Chinese (zh)
Other versions
CN102194128B (en)
Inventor
于仕琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN 201110126220 priority Critical patent/CN102194128B/en
Publication of CN102194128A publication Critical patent/CN102194128A/en
Application granted granted Critical
Publication of CN102194128B publication Critical patent/CN102194128B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method for detecting an object based on a two-value depth difference. The method comprises the steps of: obtaining two-value depth difference features corresponding to the depth differences of a captured depth image; and inputting the two-value depth difference features into a preset classification model to determine whether the depth image contains the object. The invention also provides a corresponding device. In the method and device, object detection is performed on depth images (three-dimensional data), and the classification model is built on the two-value depth difference features of the pixels of the depth image. Because the pixel values of a depth image depend only on distance and not on the brightness or color of the object's surface, interference from illumination changes and complex backgrounds is removed, so the detection accuracy is high and the false detection rate is low.

Description

Method and apparatus for object detection based on the two-value depth difference
Technical field
The present invention relates to the field of image processing, and in particular to a method and apparatus for object detection based on the two-value depth difference.
Background technology
Existing object detection algorithms operate on ordinary images (two-dimensional data), in which the value of each pixel represents the brightness of the object; for example, white clothing is brighter than yellow-toned skin. The pixel values of an ordinary image therefore depend only on the color of the object's surface and the intensity of the reflected or emitted light, and have no direct relation to the distance between the object and the camera. As a result, such algorithms have difficulty overcoming interference from illumination changes and complex backgrounds: shadows cast on people by lighting, or complex textures in the background (such as a human shape drawn on a wall), interfere with detection, causing non-object regions to be misidentified as objects and yielding a high false detection rate.
Summary of the invention
The main purpose of the present invention is to propose a method and apparatus for object detection based on the two-value depth difference, which extract two-value depth difference features from the pixels of a depth image to perform object detection and reduce the false detection rate.
The present invention proposes a method for object detection based on the two-value depth difference, comprising:
obtaining the two-value depth difference features corresponding to the depth differences of a captured depth image;
inputting the two-value depth difference features into a preset classification model to determine whether the depth image contains the object.
Preferably, obtaining the two-value depth difference features corresponding to the depth differences of the captured depth image comprises:
computing the two-value depth difference of each pixel in the depth image according to the following formulas:
BD_x(x, y) = 1 if G_x(x, y) > M;  0 if -M < G_x(x, y) < M;  -1 if G_x(x, y) < -M
BD_y(x, y) = 1 if G_y(x, y) > M;  0 if -M < G_y(x, y) < M;  -1 if G_y(x, y) < -M
where BD_x(x, y) is the X-direction two-value depth difference at position (x, y), BD_y(x, y) is the Y-direction two-value depth difference at position (x, y), G_x(x, y) is the X-direction depth difference at position (x, y), G_y(x, y) is the Y-direction depth difference at position (x, y), D(x, y) is the depth value at position (x, y), and M is a natural number;
accumulating the two-value depth differences of all pixels to form the two-value depth difference feature.
Preferably, G_x(x, y) and G_y(x, y) are obtained by the following formulas:
G_x(x, y) = D(x+1, y) - D(x-1, y), G_y(x, y) = D(x, y+1) - D(x, y-1), where D(x+1, y) is the depth value at position (x+1, y).
Preferably, the two-value depth difference feature is represented by a two-value depth difference statistical histogram.
Preferably, before obtaining the two-value depth difference feature of the captured depth image, the method further comprises:
establishing the classification model from depth images that contain the object image.
The present invention also proposes a device for object detection based on the two-value depth difference, comprising:
an acquisition module, used to obtain the two-value depth difference features corresponding to the depth differences of a captured depth image; and an input module, used to input the two-value depth difference features into a preset classification model to determine whether the depth image contains the object.
Preferably, the acquisition module comprises:
a computing unit, used to compute the two-value depth difference of each pixel in the depth image according to the following formulas:
BD_x(x, y) = 1 if G_x(x, y) > M;  0 if -M < G_x(x, y) < M;  -1 if G_x(x, y) < -M
BD_y(x, y) = 1 if G_y(x, y) > M;  0 if -M < G_y(x, y) < M;  -1 if G_y(x, y) < -M
where BD_x(x, y) is the X-direction two-value depth difference at position (x, y), BD_y(x, y) is the Y-direction two-value depth difference at position (x, y), G_x(x, y) is the X-direction depth difference at position (x, y), G_y(x, y) is the Y-direction depth difference at position (x, y), D(x, y) is the depth value at position (x, y), and M is a natural number;
a statistics unit, used to accumulate the two-value depth differences of all pixels to form the two-value depth difference feature.
Preferably, the computing unit obtains G_x(x, y) and G_y(x, y) by the following formulas:
G_x(x, y) = D(x+1, y) - D(x-1, y), G_y(x, y) = D(x, y+1) - D(x, y-1), where D(x+1, y) is the depth value at position (x+1, y).
Preferably, the statistics unit represents the two-value depth difference feature by a two-value depth difference statistical histogram.
Preferably, the device for object detection based on the two-value depth difference further comprises:
a modeling module, used to establish the classification model from depth images that contain the object image.
In the method and apparatus for object detection based on the two-value depth difference proposed by the present invention, object detection is performed on depth images (three-dimensional data), and the classification model is built on the two-value depth difference features of the pixels of the depth image. Because the pixel values of a depth image depend only on distance and not on the brightness or color of the object's surface, the present invention removes interference from illumination changes and complex backgrounds, so the object detection accuracy is high and the false detection rate is low.
Description of drawings
Fig. 1 is a flow diagram of a first embodiment of the method for object detection based on the two-value depth difference according to the present invention;
Fig. 2 is a flow diagram of the obtaining step in the first embodiment of the method for object detection based on the two-value depth difference according to the present invention;
Fig. 3 is a schematic diagram of the two-value depth difference in the first embodiment of the method for object detection based on the two-value depth difference according to the present invention;
Fig. 4 is an example depth image in the first embodiment of the method for object detection based on the two-value depth difference according to the present invention;
Fig. 5 is a flow diagram of another embodiment of the method for object detection based on the two-value depth difference according to the present invention;
Fig. 6 is a structural diagram of a first embodiment of the device for object detection based on the two-value depth difference according to the present invention;
Fig. 7 is a structural diagram of the acquisition module in the first embodiment of the device for object detection based on the two-value depth difference according to the present invention;
Fig. 8 is a structural diagram of another embodiment of the device for object detection based on the two-value depth difference according to the present invention.
The realization of the objects, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiment
It should be appreciated that the specific embodiments described herein are intended only to explain the present invention and not to limit it.
With reference to Fig. 1, a first embodiment of the method for object detection based on the two-value depth difference according to the present invention comprises:
Step S10: obtaining the two-value depth difference features corresponding to the depth differences of the captured depth image.
A depth image is captured with equipment such as a depth camera or a laser ranging scanner, which records the three-dimensional data of the environment, i.e., the depth information; the depth information is stored as a depth image (three-dimensional data). The value of each pixel in a depth image represents the distance from the object to the camera: the larger the pixel value, the farther the object is from the camera. The pixel values of a depth image depend only on the distance from the object to the camera and not on the brightness or color of the object's surface. The depth differences of the depth image are obtained from the pixels of the captured depth image, and the two-value depth difference features are then derived from them. When the object is at different distances from the background, its shape is the same, but the computed depth differences differ. So that the object has the same depth difference regardless of its distance from the background, this embodiment introduces the two-value depth difference: the depth differences computed at different object-to-background distances are quantized to two or three values, called the two-value depth difference.
Step S11: inputting the two-value depth difference features into a preset classification model to determine whether the depth image contains the object.
The extracted two-value depth difference features are input into a preset classification model, which may be a support vector machine model, an AdaBoost model, or the like, to judge whether the depth image contains the object. Taking the human body as an example, the object may be the whole body or a part such as the head, the shoulders, or the upper half of the body. The classification model is set in advance: depth images that contain both the object and the environment are typically collected, image features are extracted from them, and the model is obtained by training.
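As an illustrative sketch of step S11 (the function name, weights, bias, and feature values below are hypothetical, not from the patent), a trained linear classification model such as a linear support vector machine reduces at detection time to a weighted sum over the feature vector plus a bias, with the sign of the score deciding object versus non-object:

```python
def contains_object(feature, weights, bias):
    """Linear decision rule: sign of (w . f + b).

    A trained linear SVM applies exactly this rule at detection time;
    `weights` and `bias` are assumed to come from training (step S12).
    """
    score = sum(w * f for w, f in zip(weights, feature))
    return score + bias > 0

# Hypothetical 4-dimensional example (real features are e.g. 288-dimensional):
feature = [3.0, 0.0, 1.0, 2.0]
weights = [0.5, -0.2, 0.1, 0.3]
bias = -1.0
print(contains_object(feature, weights, bias))  # True (score 2.2 + bias -1.0 > 0)
```

A nonlinear model such as AdaBoost would replace this rule with a weighted vote of weak classifiers, but the input, the feature vector, and the binary output are the same.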
In the proposed method for object detection based on the two-value depth difference, object detection is performed on depth images (three-dimensional data), and a model is built on the two-value depth differences of the pixels of the depth image. Because the pixel values of a depth image depend only on distance and not on the brightness or color of the object's surface, the invention removes interference from illumination changes and complex backgrounds, so the detection accuracy is high and the false detection rate is low. The invention can automatically judge whether an object is present in the surrounding environment, and can be applied in many fields such as automobiles, robots, and surveillance systems to improve the intelligence of those systems.
With reference to Fig. 2, in one embodiment, step S10 may comprise:
Step S101: computing the two-value depth difference of each pixel in the depth image according to the following formulas:
BD_x(x, y) = 1 if G_x(x, y) > M;  0 if -M < G_x(x, y) < M;  -1 if G_x(x, y) < -M
BD_y(x, y) = 1 if G_y(x, y) > M;  0 if -M < G_y(x, y) < M;  -1 if G_y(x, y) < -M
where BD_x(x, y) is the X-direction two-value depth difference at position (x, y), BD_y(x, y) is the Y-direction two-value depth difference at position (x, y), G_x(x, y) is the X-direction depth difference at position (x, y), G_y(x, y) is the Y-direction depth difference at position (x, y), D(x, y) is the depth value at position (x, y), and M is a natural number; for example, M may correspond to 15 centimeters. The two-value depth difference is expressed as the pair (BD_x, BD_y), which yields the 9 different two-value depth difference patterns shown in Fig. 3: (0,0), (1,0), (1,1), (0,1), (-1,1), (-1,0), (-1,-1), (0,-1) and (1,-1). The above formulas admit other variants, such as:
BD_x(x, y) = N if G_x(x, y) > M;  0 if -M < G_x(x, y) < M;  -N if G_x(x, y) < -M
BD_y(x, y) = N if G_y(x, y) > M;  0 if -M < G_y(x, y) < M;  -N if G_y(x, y) < -M
It can also be:
BD_x(x, y) = N if G_x(x, y) > M;  -N if G_x(x, y) <= -M
BD_y(x, y) = N if G_y(x, y) > M;  -N if G_y(x, y) <= -M
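The three-value quantization of step S101 can be sketched directly (a minimal illustration; the function name is ours, and the boundary cases |G| = M, which the formulas leave open, are here folded into the 0 band):

```python
def binarize(g, m):
    """Quantize a depth difference g to {-1, 0, 1} with threshold m.

    Implements BD = 1 if g > m, -1 if g < -m, else 0
    (values with |g| == m fall into the 0 band in this sketch).
    """
    if g > m:
        return 1
    if g < -m:
        return -1
    return 0

# With M = 15 (e.g. centimeters), per the example in the text:
M = 15
print([binarize(g, M) for g in (40, 15, 3, -8, -15, -40)])  # [1, 0, 0, 0, 0, -1]
```

Applying `binarize` to G_x and G_y at every pixel yields the per-pixel pairs (BD_x, BD_y) that are accumulated in step S102.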
Step S102: accumulating the two-value depth differences of all pixels to form the two-value depth difference feature.
In the depth image shown in Fig. 4, suppose the selected region is 64x128 pixels and is divided into a 4x8 grid of 16x16-pixel cells. The two-value depth differences of all pixels in a cell are accumulated into a histogram, which can be expressed as a 9-dimensional vector. Concatenating the vectors of all cells in the region (32 cells in this example) forms a long feature vector (9x32 = 288 dimensions). This vector is the feature vector of the whole region.
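The cell-histogram construction above can be sketched as follows (a minimal sketch under our own naming; it assumes the per-pixel pairs (BD_x, BD_y) have already been computed, and indexes the 9 patterns as (BD_x + 1) * 3 + (BD_y + 1)):

```python
def cell_histogram(bd_pairs):
    """9-bin histogram of (BD_x, BD_y) in {-1, 0, 1}^2 for one cell."""
    hist = [0] * 9
    for bx, by in bd_pairs:
        hist[(bx + 1) * 3 + (by + 1)] += 1
    return hist

def region_feature(cells):
    """Concatenate the 9-dim histograms of all cells into one vector.

    For a 64x128 region split into 4x8 cells of 16x16 pixels this
    yields a 9 * 32 = 288-dimensional feature vector.
    """
    feature = []
    for cell in cells:
        feature.extend(cell_histogram(cell))
    return feature

# Tiny hypothetical example: 2 cells of 4 pixels each.
cells = [
    [(0, 0), (0, 0), (1, 0), (-1, -1)],
    [(1, 1), (1, 1), (0, -1), (0, 0)],
]
f = region_feature(cells)
print(len(f))      # 18 (= 9 bins x 2 cells)
print(f[4], f[8])  # 2 0  (cell 0 has two (0,0) pixels and no (1,1) pixel)
```

The fixed bin ordering makes histograms from different cells directly comparable, which is what allows them to be concatenated into one region-level feature vector.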
This embodiment ensures that the object has the same two-value depth difference regardless of its distance from the background, which reduces the false detection rate.
In the above embodiment, G_x(x, y) and G_y(x, y) are obtained by the following formulas:
G_x(x, y) = D(x+1, y) - D(x-1, y), G_y(x, y) = D(x, y+1) - D(x, y-1), where D(x+1, y) is the depth value at position (x+1, y).
The depth image may be divided into M*N regions, where M and N are natural numbers greater than or equal to 1. Depth differences are computed for all pixels in each region. The depth difference has two directions; the depth differences in the X direction (horizontal) and the Y direction (vertical) are respectively:
G x(x,y)=D(x+1,y)-D(x-1,y)
G y(x,y)=D(x,y+1)-D(x,y-1)
where G_x(x, y) is the X-direction depth difference at position (x, y), G_y(x, y) is the Y-direction depth difference at position (x, y), and D(x, y) is the depth value at position (x, y). The computed depth differences can be represented as vectors with direction and magnitude, or visualized as depth difference histograms. The depth differences of all pixels in each region are accumulated, and the depth difference histograms of the M*N regions are concatenated into one large vector (array) to form the depth differences of the depth image.
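The two central differences above can be sketched over the interior of a depth map stored as a list of rows (a minimal sketch under our own naming; border pixels, which lack both neighbors, are simply skipped here, the patent does not specify their handling):

```python
def depth_gradients(depth):
    """Central differences G_x, G_y over the interior pixels.

    depth[y][x] is the depth value D(x, y).
    """
    h, w = len(depth), len(depth[0])
    gx = [[depth[y][x + 1] - depth[y][x - 1] for x in range(1, w - 1)]
          for y in range(1, h - 1)]
    gy = [[depth[y + 1][x] - depth[y - 1][x] for x in range(1, w - 1)]
          for y in range(1, h - 1)]
    return gx, gy

# A 3x3 depth patch with a left-to-right ramp: G_x = 20, G_y = 0 at the center.
patch = [[10, 20, 30],
         [10, 20, 30],
         [10, 20, 30]]
gx, gy = depth_gradients(patch)
print(gx[0][0], gy[0][0])  # 20 0
```

The central difference spans two pixels in each direction, which is why the formulas reference D(x+1, y) and D(x-1, y) rather than adjacent pairs.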
In the above embodiment, the two-value depth difference feature is represented by a two-value depth difference statistical histogram.
With reference to Fig. 5, another embodiment of the method for object detection based on the two-value depth difference according to the present invention further comprises, before step S10:
Step S12: establishing the classification model from depth images that contain the object image.
Depth images containing the object image are collected for training. Object regions are marked in the depth images and cropped out; many object-region depth images can be cropped in this way to serve as positive training samples. Non-object (environment) regions are also marked and cropped out in large numbers to serve as negative training samples. All positive and negative samples are normalized to the same width and height. The two-value depth difference computation is performed on every positive and negative sample, and each sample yields a two-value depth difference statistical histogram. The two-value depth difference statistical histograms of all samples are input into a machine learning classification model (such as a support vector machine model) for training, and a classification model is finally obtained, ready for object detection.
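The training pipeline of step S12 can be sketched end-to-end. The patent trains a support vector machine or AdaBoost model; purely for illustration, and to keep the sketch self-contained, the classifier below is replaced with a simple nearest-centroid rule over the histogram features (all names and sample data are hypothetical):

```python
def train_centroids(pos_feats, neg_feats):
    """Mean feature vector of the positive and of the negative samples."""
    def mean(feats):
        n = len(feats)
        return [sum(col) / n for col in zip(*feats)]
    return mean(pos_feats), mean(neg_feats)

def classify(feature, pos_c, neg_c):
    """True if the feature is closer to the positive centroid."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return d2(feature, pos_c) < d2(feature, neg_c)

# Hypothetical 3-dimensional histogram features for cropped, normalized samples:
pos = [[8, 1, 0], [7, 2, 0], [9, 0, 1]]   # object regions (positive samples)
neg = [[1, 1, 8], [0, 2, 7], [1, 0, 9]]   # environment regions (negative samples)
pc, nc = train_centroids(pos, neg)
print(classify([8, 1, 1], pc, nc))  # True  (object-like histogram)
print(classify([0, 1, 8], pc, nc))  # False (background-like histogram)
```

A real implementation would substitute an SVM or AdaBoost trainer at the `train_centroids` step; the surrounding pipeline of cropping, normalizing, and computing one histogram per sample is unchanged.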
In this embodiment, a classification model of the surface of the three-dimensional object is established in preparation for object detection; when an object enters the captured scene, it can be separated from the environment automatically.
With reference to Fig. 6, a first embodiment of the device for object detection based on the two-value depth difference according to the present invention comprises:
an acquisition module 10, used to obtain the two-value depth difference features corresponding to the depth differences of a captured depth image; and an input module 20, used to input the two-value depth difference features into a preset classification model to determine whether the depth image contains the object.
A depth image is captured with equipment such as a depth camera or a laser ranging scanner, which records the three-dimensional data of the environment, i.e., the depth information; the depth information is stored as a depth image (three-dimensional data). The value of each pixel in a depth image represents the distance from the object to the camera: the larger the pixel value, the farther the object is from the camera. The pixel values of a depth image depend only on the distance from the object to the camera and not on the brightness or color of the object's surface. The acquisition module 10 obtains the depth differences of the depth image from the pixels of the captured depth image, and then derives the two-value depth difference features from them. When the object is at different distances from the background, its shape is the same, but the computed depth differences differ. So that the object has the same depth difference regardless of its distance from the background, this embodiment introduces the two-value depth difference: the depth differences computed at different object-to-background distances are quantized to two or three values, called the two-value depth difference.
The input module 20 inputs the extracted two-value depth difference features into a preset classification model, which may be a support vector machine model, an AdaBoost model, or the like, to judge whether the depth image contains the object. Taking the human body as an example, the object may be the whole body or a part such as the head, the shoulders, or the upper half of the body. The classification model is set in advance: depth images that contain both the object and the environment are typically collected, image features are extracted from them, and the model is obtained by training.
In this embodiment, object detection is performed on depth images (three-dimensional data), and a model is built on the two-value depth differences of the pixels of the depth image. Because the pixel values of a depth image depend only on distance and not on the brightness or color of the object's surface, the invention removes interference from illumination changes and complex backgrounds, so the detection accuracy is high and the false detection rate is low. The invention can automatically judge whether an object is present in the surrounding environment, and can be applied in many fields such as automobiles, robots, and surveillance systems to improve the intelligence of those systems.
With reference to Fig. 7, in one embodiment, the acquisition module 10 comprises:
a computing unit 11, used to compute the two-value depth difference of each pixel in the depth image according to the following formulas:
BD_x(x, y) = 1 if G_x(x, y) > M;  0 if -M < G_x(x, y) < M;  -1 if G_x(x, y) < -M
BD_y(x, y) = 1 if G_y(x, y) > M;  0 if -M < G_y(x, y) < M;  -1 if G_y(x, y) < -M
where BD_x(x, y) is the X-direction two-value depth difference at position (x, y), BD_y(x, y) is the Y-direction two-value depth difference at position (x, y), G_x(x, y) is the X-direction depth difference at position (x, y), G_y(x, y) is the Y-direction depth difference at position (x, y), D(x, y) is the depth value at position (x, y), and M is a natural number. The two-value depth difference is expressed as the pair (BD_x, BD_y), which yields the 9 different two-value depth difference patterns shown in Fig. 3: (0,0), (1,0), (1,1), (0,1), (-1,1), (-1,0), (-1,-1), (0,-1) and (1,-1). The above formulas admit other variants, such as:
BD_x(x, y) = N if G_x(x, y) > M;  0 if -M < G_x(x, y) < M;  -N if G_x(x, y) < -M
BD_y(x, y) = N if G_y(x, y) > M;  0 if -M < G_y(x, y) < M;  -N if G_y(x, y) < -M
It can also be:
BD_x(x, y) = N if G_x(x, y) > M;  -N if G_x(x, y) <= -M
BD_y(x, y) = N if G_y(x, y) > M;  -N if G_y(x, y) <= -M
Statistics unit 12 is used to accumulate the two-value depth differences of all pixels to form the two-value depth difference feature.
In the depth image shown in Fig. 4, suppose the selected region is 64x128 pixels and is divided into a 4x8 grid of 16x16-pixel cells. The two-value depth differences of all pixels in a cell are accumulated into a histogram, which can be expressed as a 9-dimensional vector. Concatenating the vectors of all cells in the region (32 cells in this example) forms a long feature vector (9x32 = 288 dimensions). This vector is the feature vector of the whole region.
This embodiment ensures that the object has the same two-value depth difference regardless of its distance from the background, which reduces the false detection rate.
In the above embodiment, the computing unit 11 obtains G_x(x, y) and G_y(x, y) by the following formulas:
G_x(x, y) = D(x+1, y) - D(x-1, y), G_y(x, y) = D(x, y+1) - D(x, y-1), where D(x+1, y) is the depth value at position (x+1, y).
The depth image may be divided into M*N regions, where M and N are natural numbers greater than or equal to 1. Depth differences are computed for all pixels in each region. The depth difference has two directions; the depth differences in the X direction (horizontal) and the Y direction (vertical) are respectively:
G x(x,y)=D(x+1,y)-D(x-1,y)
G y(x,y)=D(x,y+1)-D(x,y-1)
where G_x(x, y) is the X-direction depth difference at position (x, y), G_y(x, y) is the Y-direction depth difference at position (x, y), and D(x, y) is the depth value at position (x, y). The depth differences computed by the computing unit 11 can be represented as vectors with direction and magnitude, or visualized as depth difference histograms. The computing unit 11 accumulates the depth differences of all pixels in each region, and the depth difference histograms of the M*N regions are concatenated into one large vector (array) to form the depth differences of the depth image.
In the above embodiment, the statistics unit 12 represents the two-value depth difference feature by a two-value depth difference statistical histogram.
With reference to Fig. 8, in another embodiment of the device for object detection based on the two-value depth difference according to the present invention, the device further comprises:
a modeling module 30, used to establish the classification model from depth images that contain the object image.
The modeling module 30 collects depth images containing the object image for training. Object regions are marked in the depth images and cropped out; many object-region depth images can be cropped in this way to serve as positive training samples. Non-object (environment) regions are also marked and cropped out in large numbers to serve as negative training samples. All positive and negative samples are normalized to the same width and height. The two-value depth difference computation is performed on every positive and negative sample, and each sample yields a two-value depth difference statistical histogram. The two-value depth difference statistical histograms of all samples are input into a machine learning classification model (such as a support vector machine model) for training, and a classification model is finally obtained, ready for object detection.
In this embodiment, a classification model of the surface of the three-dimensional object is established in preparation for object detection; when an object enters the captured scene, it can be separated from the environment automatically.
The above are only preferred embodiments of the present invention and do not limit the scope of its claims. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, whether used directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A method for object detection based on a two-value depth difference, characterized by comprising:
obtaining the two-value depth difference features corresponding to the depth differences of a captured depth image;
inputting the two-value depth difference features into a preset classification model to determine whether the depth image contains the object.
2. The method for object detection based on the two-value depth difference as claimed in claim 1, characterized in that obtaining the two-value depth difference features corresponding to the depth differences of the captured depth image comprises:
computing the two-value depth difference of each pixel in the depth image according to the following formulas:
BD_x(x, y) = 1 if G_x(x, y) > M;  0 if -M < G_x(x, y) < M;  -1 if G_x(x, y) < -M
BD_y(x, y) = 1 if G_y(x, y) > M;  0 if -M < G_y(x, y) < M;  -1 if G_y(x, y) < -M
where BD_x(x, y) is the X-direction two-value depth difference at position (x, y), BD_y(x, y) is the Y-direction two-value depth difference at position (x, y), G_x(x, y) is the X-direction depth difference at position (x, y), G_y(x, y) is the Y-direction depth difference at position (x, y), D(x, y) is the depth value at position (x, y), and M is a natural number;
accumulating the two-value depth differences of all pixels to form the two-value depth difference feature.
3. The method for object detection based on the two-value depth difference as claimed in claim 2, characterized in that G_x(x, y) and G_y(x, y) are obtained by the following formulas:
G_x(x, y) = D(x+1, y) - D(x-1, y), G_y(x, y) = D(x, y+1) - D(x, y-1), where D(x+1, y) is the depth value at position (x+1, y).
4. The method for object detection based on the two-value depth difference as claimed in claim 2 or 3, characterized in that the two-value depth difference feature is represented by a two-value depth difference statistical histogram.
5. The method for object detection based on the two-value depth difference as claimed in any one of claims 1 to 3, characterized by further comprising, before obtaining the two-value depth difference feature of the captured depth image:
establishing the classification model from depth images that contain the object image.
6. A device for object detection based on a two-value depth difference, characterized by comprising:
an acquisition module, used to obtain the two-value depth difference features corresponding to the depth differences of a captured depth image; and an input module, used to input the two-value depth difference features into a preset classification model to determine whether the depth image contains the object.
7. The device for object detection based on the two-value depth difference as claimed in claim 6, characterized in that the acquisition module comprises:
a computing unit, used to compute the two-value depth difference of each pixel in the depth image according to the following formulas:
BD_x(x, y) = 1 if G_x(x, y) > M;  0 if -M < G_x(x, y) < M;  -1 if G_x(x, y) < -M
BD_y(x, y) = 1 if G_y(x, y) > M;  0 if -M < G_y(x, y) < M;  -1 if G_y(x, y) < -M
Described BD x(x y) is (x, y) the directions X two-value depth difference of position, BD y(x y) is (x, y) the Y direction two-value depth difference of position, described G x(x y) is (x, y) the directions X depth difference of position, G y(x, y) be (x, y) the Y direction depth difference of position, D (x, y) be (x, the y) depth value of position, described M are a natural number;
Statistic unit is used to add up the two-value depth difference of all pixels, forms two-value depth difference feature.
8. as claimed in claim 7ly carry out the system of object detection, it is characterized in that described computing unit obtains G by following formula based on the two-value depth difference x(x, y) and G y(x, y):
G x(x, y)=D (x+1, y)-D (x-1, y), G y(x, y)=D (x, y+1)-(x, y-1), wherein (x+1 y) is (x+1, y) depth value of position to D to D.
9. describedly carry out the device of object detection as claim 7 or 8, it is characterized in that described statistic unit is represented two-value depth difference feature by two-value depth difference statistic histogram based on the two-value depth difference.
10. The device for object detection based on the two-value depth difference as claimed in any one of claims 6 to 8, characterized in that it further comprises:
a modeling module, configured to establish the classification model from depth images that contain the object image.
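Taken together, claims 6 to 10 describe a pipeline: compute per-pixel two-value depth differences by central differencing and ternary thresholding, histogram them over all pixels to form the feature, and feed the feature to a preset classification model. A minimal NumPy sketch of that pipeline follows; the threshold value M=10 and the nearest-centroid classifier are illustrative assumptions only (the claims leave M unspecified beyond "a natural number" and do not name a classifier type):

```python
import numpy as np

def binary_depth_difference(depth, M=10):
    """Computing unit (claims 7 and 8): per-pixel two-value depth differences.
    depth is indexed as depth[x, y]; M is an assumed threshold value."""
    d = depth.astype(np.int32)
    gx = np.zeros_like(d)
    gy = np.zeros_like(d)
    # Central differences: G_x(x,y) = D(x+1,y) - D(x-1,y), likewise for Y.
    gx[1:-1, :] = d[2:, :] - d[:-2, :]
    gy[:, 1:-1] = d[:, 2:] - d[:, :-2]
    # Quantize to {-1, 0, 1} with dead zone (-M, M).
    bdx = np.where(gx > M, 1, np.where(gx < -M, -1, 0))
    bdy = np.where(gy > M, 1, np.where(gy < -M, -1, 0))
    return bdx, bdy

def bdd_histogram(bdx, bdy):
    """Statistic unit (claims 7 and 9): statistical histogram of the
    two-value depth differences of all pixels, X and Y concatenated."""
    hx = [int(np.sum(bdx == v)) for v in (-1, 0, 1)]
    hy = [int(np.sum(bdy == v)) for v in (-1, 0, 1)]
    return np.array(hx + hy)

class NearestCentroid:
    """Toy stand-in for the preset classification model (claims 6 and 10);
    in practice an SVM or boosted classifier would be a typical choice."""
    def fit(self, feats, labels):
        # Modeling module (claim 10): build the model from labeled depth images.
        self.centroids = {c: np.mean([f for f, l in zip(feats, labels) if l == c], axis=0)
                          for c in set(labels)}
        return self
    def predict(self, feat):
        # Input module (claim 6): feature in, object / no-object label out.
        return min(self.centroids, key=lambda c: np.linalg.norm(feat - self.centroids[c]))
```

Because the feature depends only on depth values, not on surface brightness or color, the pipeline is unaffected by illumination changes, which is the advantage the abstract claims.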
CN 201110126220 2011-05-16 2011-05-16 Method and device for detecting object based on two-value depth difference Expired - Fee Related CN102194128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110126220 CN102194128B (en) 2011-05-16 2011-05-16 Method and device for detecting object based on two-value depth difference

Publications (2)

Publication Number Publication Date
CN102194128A true CN102194128A (en) 2011-09-21
CN102194128B CN102194128B (en) 2013-05-01

Family

ID=44602166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110126220 Expired - Fee Related CN102194128B (en) 2011-05-16 2011-05-16 Method and device for detecting object based on two-value depth difference

Country Status (1)

Country Link
CN (1) CN102194128B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009008864A1 (en) * 2007-07-12 2009-01-15 Thomson Licensing System and method for three-dimensional object reconstruction from two-dimensional images

Non-Patent Citations (1)

Title
Acta Photonica Sinica (光子学报), Vol. 39, No. 3, 2010-03-31, Wang Ani et al., "Automatic Extraction Method of Moving Targets in Image Sequences", pp. 555-570, relevant to claims 1-10 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN104838337A (en) * 2012-10-12 2015-08-12 微软技术许可有限责任公司 Touchless input for a user interface
CN104838337B (en) * 2012-10-12 2018-05-25 微软技术许可有限责任公司 It is inputted for the no touch of user interface
US10019074B2 (en) 2012-10-12 2018-07-10 Microsoft Technology Licensing, Llc Touchless input

Also Published As

Publication number Publication date
CN102194128B (en) 2013-05-01

Similar Documents

Publication Publication Date Title
CN102122390B (en) Method for detecting human body based on range image
CN110807353B (en) Substation foreign matter identification method, device and system based on deep learning
CN101430195B (en) Method for computing electric power line ice-covering thickness by using video image processing technology
CN105740910A (en) Vehicle object detection method and device
CN108009591A (en) A kind of contact network key component identification method based on deep learning
CN102622584B (en) Method for detecting mask faces in video monitor
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN110263665A (en) Safety cap recognition methods and system based on deep learning
CN109559310A (en) Power transmission and transformation inspection image quality evaluating method and system based on conspicuousness detection
CN108960067A (en) Real-time train driver motion recognition system and method based on deep learning
CN106096603A (en) A kind of dynamic flame detection method merging multiple features and device
CN104504381B (en) Non-rigid object detection method and its system
CN106297492A (en) A kind of Educational toy external member and utilize color and the method for outline identification programming module
CN107818303A (en) Unmanned plane oil-gas pipeline image automatic comparative analysis method, system and software memory
CN106504262A (en) A kind of small tiles intelligent locating method of multiple features fusion
CN106874913A (en) A kind of vegetable detection method
CN112819068A (en) Deep learning-based real-time detection method for ship operation violation behaviors
CN104700417A (en) Computer image based automatic identification method of timber knot flaws
CN114140665A (en) Dense small target detection method based on improved YOLOv5
CN106097833A (en) A kind of Educational toy external member and digit recognition method thereof
CN103914829B (en) Method for detecting edge of noisy image
CN112685812A (en) Dynamic supervision method, device, equipment and storage medium
CN114332004A (en) Method and device for detecting surface defects of ceramic tiles, electronic equipment and storage medium
CN103903265A (en) Method for detecting industrial product package breakage
CN101866422A (en) Method for extracting image attention by image based multi-characteristic integration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130501

Termination date: 20200516