CN103700163A - Target determining method and three-dimensional sensor

Info

Publication number
CN103700163A
CN103700163A (application CN201310635024.XA; granted as CN103700163B)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310635024.XA
Other languages
Chinese (zh)
Other versions
CN103700163B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING YUEDONG SHUANGCHENG TECHNOLOGY Co Ltd
Original Assignee
BEIJING YUEDONG SHUANGCHENG TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING YUEDONG SHUANGCHENG TECHNOLOGY Co Ltd
Priority to CN201310635024.XA
Priority claimed from CN201310635024.XA
Publication of CN103700163A
Application granted
Publication of CN103700163B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides a method for determining a target. The method comprises the following steps: a three-dimensional sensor obtains first information, where the first information is an image of at least one object in a first space and the depth of each pixel in the image; the three-dimensional sensor determines, according to the first information, that a first part of the at least one object is a convex hull; the three-dimensional sensor determines, according to the first information, that the value of a first distance falls within a first threshold range; the three-dimensional sensor determines, according to the first information, that the position of the at least one object has changed; and the three-dimensional sensor determines that the at least one object is at least one person. An embodiment of the invention further provides a three-dimensional sensor. The technical scheme reduces the cost of identifying objects in a particular space.

Description

Method for determining an object, and three-dimensional sensor
Technical field
Embodiments of the present invention relate to the field of electronic technology, and in particular to a method for determining an object and to a three-dimensional sensor.
Background technology
A particular space can hold a plurality of people, and the number of people the space holds may change from moment to moment. For example, a classroom, a church, a supermarket, an airport, a railway station, a subway station, or a company's office building can each hold a plurality of people, and that number may change over time. For instance, the number of people a supermarket holds at the weekend may be greater than the number it holds on a working day.
In the prior art, the people in a space can be identified, and the number of people in the space can then be counted. For example, an access control system can be installed at the entrances and exits of a company's office building. The access control system is associated with a plurality of cards, each corresponding to one of the company's employees. A person who wishes to enter the office building must swipe a card to enter, and a person who wishes to leave must swipe a card to leave. In general, the people who enter or leave the office building by swiping a card are all employees of the company, so this scheme identifies the objects entering and leaving the office building. In addition, the access control system can count the people entering and leaving the office building separately, and thereby determine the number of people in the office building at a given moment.
In the above technical scheme, to identify the objects in a particular space, each person entering the space must carry and use a card bound to the access control system, which makes the cost comparatively high.
Summary of the invention
Embodiments of the present invention provide a method for determining an object and a three-dimensional sensor, which help reduce the cost of identifying objects in a particular space.
An embodiment of the present invention provides a method for determining an object, comprising:
a three-dimensional sensor obtains first information, the first information being an image of at least one object in a first space and the depth of each pixel in the image;
the three-dimensional sensor determines, according to the first information, that a first part of the at least one object is a convex hull, the first part being the part of the at least one object farthest from the ground;
the three-dimensional sensor determines, according to the first information, that the value of a first distance falls within a first threshold range, the first distance being the distance between the first part and the ground;
the three-dimensional sensor determines, according to the first information, that the position of the at least one object has changed; and
the three-dimensional sensor determines that the at least one object is at least one person, the at least one object corresponding one-to-one to the at least one person.
Optionally, in the above technical scheme, the method further comprises:
the three-dimensional sensor determines that the number of people in the first space is N, where N equals the number of the at least one object.
Optionally, in the above technical scheme, the three-dimensional sensor obtaining the first information comprises:
the three-dimensional sensor obtains a plurality of pieces of sub-information within a first time interval, where the first information comprises the plurality of pieces of sub-information; the plurality of pieces of sub-information correspond one-to-one to a plurality of moments, the plurality of moments being moments within the first time interval; each piece of sub-information comprises a sub-image of the at least one object and the depth of each pixel in the sub-image; the plurality of pieces of sub-information comprise first sub-information and second sub-information; the moment corresponding to the first sub-information is a first moment, the first moment being the earliest of the plurality of moments; and the moment corresponding to the second sub-information is a second moment, the second moment being the latest of the plurality of moments;
the method further comprises:
the three-dimensional sensor determines a second distance according to the first information, the second distance being the distance between the position of the at least one object at the first moment and its position at the second moment; and
the three-dimensional sensor determines that the speed of the at least one object equals a first speed, the first speed being the second distance divided by the first time interval.
Optionally, in the above technical scheme,
the three-dimensional sensor obtaining the first information comprises: the three-dimensional sensor obtains a plurality of pieces of sub-information within a first time interval, where the first information comprises the plurality of pieces of sub-information; the plurality of pieces of sub-information correspond one-to-one to a plurality of moments, the plurality of moments being moments within the first time interval; each piece of sub-information comprises a sub-image of the at least one object and the depth of each pixel in the sub-image; the plurality of pieces of sub-information comprise first sub-information and second sub-information; the moment corresponding to the first sub-information is a first moment, the first moment being the earliest of the plurality of moments; and the moment corresponding to the second sub-information is a second moment, the second moment being the latest of the plurality of moments;
the method further comprises:
the three-dimensional sensor determines a second distance according to the first information, the second distance being the sum of N-1 distances, the N-1 distances comprising the distances between the positions of the at least one object at every two adjacent moments of the plurality of moments; and
the three-dimensional sensor determines that the speed of the at least one object equals a first speed, the first speed being the second distance divided by the first time interval.
Optionally, in the above technical scheme,
the first part is the head of the at least one person, or the head and shoulders of the at least one person.
An embodiment of the present invention provides a three-dimensional sensor, comprising:
a first obtaining unit, configured to obtain first information, the first information being an image of at least one object in a first space and the depth of each pixel in the image;
a first determining unit, configured to determine, according to the first information, that a first part of the at least one object is a convex hull, the first part being the part of the at least one object farthest from the ground;
a second determining unit, configured to determine, according to the first information, that the value of a first distance falls within a first threshold range, the first distance being the distance between the first part and the ground;
a third determining unit, configured to determine, according to the first information, that the position of the at least one object has changed; and
a fourth determining unit, configured to determine that the at least one object is at least one person, the at least one object corresponding one-to-one to the at least one person.
Optionally, the above technical scheme further comprises:
a fifth determining unit, configured to determine that the number of people in the first space is N, where N equals the number of the at least one object.
Optionally, in the above technical scheme, the first obtaining unit is specifically configured to:
obtain a plurality of pieces of sub-information within a first time interval, where the first information comprises the plurality of pieces of sub-information; the plurality of pieces of sub-information correspond one-to-one to a plurality of moments, the plurality of moments being moments within the first time interval; each piece of sub-information comprises a sub-image of the at least one object and the depth of each pixel in the sub-image; the plurality of pieces of sub-information comprise first sub-information and second sub-information; the moment corresponding to the first sub-information is a first moment, the first moment being the earliest of the plurality of moments; and the moment corresponding to the second sub-information is a second moment, the second moment being the latest of the plurality of moments;
the three-dimensional sensor further comprises:
a sixth determining unit, configured to determine a second distance according to the first information, the second distance being the distance between the position of the at least one object at the first moment and its position at the second moment; and
a seventh determining unit, configured to determine that the speed of the at least one object equals a first speed, the first speed being the second distance divided by the first time interval.
Optionally, in the above technical scheme, the first obtaining unit is specifically configured to:
obtain a plurality of pieces of sub-information within a first time interval, where the first information comprises the plurality of pieces of sub-information; the plurality of pieces of sub-information correspond one-to-one to a plurality of moments, the plurality of moments being moments within the first time interval; each piece of sub-information comprises a sub-image of the at least one object and the depth of each pixel in the sub-image; the plurality of pieces of sub-information comprise first sub-information and second sub-information; the moment corresponding to the first sub-information is a first moment, the first moment being the earliest of the plurality of moments; and the moment corresponding to the second sub-information is a second moment, the second moment being the latest of the plurality of moments;
the three-dimensional sensor further comprises:
a sixth determining unit, configured to determine a second distance according to the first information, the second distance being the sum of N-1 distances, the N-1 distances comprising the distances between the positions of the at least one object at every two adjacent moments of the plurality of moments; and
a seventh determining unit, configured to determine that the speed of the at least one object equals a first speed, the first speed being the second distance divided by the first time interval.
Optionally, in the above technical scheme,
the first part is the head of the at least one person, or the head and shoulders of the at least one person.
In the above technical scheme, the three-dimensional sensor determines that the first part of the at least one object is a convex hull. For a person standing on the ground, seen from above, the three-dimensional sensor perceives the person's head and shoulders, and the head and shoulders correspond to a convex hull; the at least one object is therefore consistent with a person standing on the ground. In addition, the three-dimensional sensor determines, according to the first information, that the position of the at least one object has changed; a person standing on the ground will usually move, so the at least one object is again consistent with a person standing on the ground. The above technical scheme therefore helps reduce the cost of identifying objects in a particular space, and also helps improve the accuracy of the identification.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a method for determining an object provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a three-dimensional sensor provided by an embodiment of the present invention.
Embodiments
Fig. 1 is a schematic flowchart of a method for determining an object provided by an embodiment of the present invention. Referring to Fig. 1, the method comprises:
S101: a three-dimensional sensor obtains first information, the first information being an image of at least one object in a first space and the depth of each pixel in the image.
For instance, the three-dimensional sensor can comprise an image sensor and a depth sensor. The image sensor can be a video camera. The depth sensor can be a depth-sensing camera.
S102: the three-dimensional sensor determines, according to the first information, that a first part of the at least one object is a convex hull, the first part being the part of the at least one object farthest from the ground.
For instance, the ground is a boundary of the first space.
For instance, the three-dimensional sensor can hold predefined objects, stored in an object library. The predefined objects can include a convex hull, whose shape is consistent with a person's head and shoulders as perceived by the three-dimensional sensor from above. The three-dimensional sensor can compare the information of the at least one object it detects with the predefined objects in the object library, and thereby identify that the at least one object is consistent with the convex hull.
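One way to read this comparison step is as a convexity test on the top of a detected blob in a top-down height map. The sketch below is an illustrative assumption, not the patent's actual matching procedure: the function name, the 0.3 m "head band", and the 0.9 solidity ratio are all made up for the example.

```python
import numpy as np

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone-chain algorithm; points is a list of (x, y)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = build(pts), build(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

def hull_area(hull):
    """Shoelace formula over the hull vertices."""
    return 0.5 * abs(sum(hull[i][0] * hull[i - 1][1] - hull[i - 1][0] * hull[i][1]
                         for i in range(len(hull))))

def top_part_is_convex(height_map, band=0.3, min_solidity=0.9):
    """Take the pixels within `band` metres of the highest point (the part
    farthest from the ground) and accept when they fill most of their own
    convex hull, as a head-and-shoulders region would."""
    ys, xs = np.nonzero(height_map >= height_map.max() - band)
    pts = list(zip(xs.tolist(), ys.tolist()))
    hull = convex_hull(pts)
    if len(hull) < 3:
        return False
    return len(pts) / hull_area(hull) >= min_solidity
```

On this test a filled, roughly circular head region passes, while a hollow ring of the same height (no filled interior) fails, which is the distinction the object-library comparison relies on.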
S103: the three-dimensional sensor determines, according to the first information, that the value of a first distance falls within a first threshold range, the first distance being the distance between the first part and the ground.
For instance, obtaining the first distance can comprise:
the three-dimensional sensor obtains the distance between itself and the first part;
the three-dimensional sensor obtains the distance between itself and the ground; and
the three-dimensional sensor obtains the first distance, the first distance being the absolute value of the difference between the two distances above.
For instance, the three-dimensional sensor can obtain its distance to the first part through the following processing:
the three-dimensional sensor sends infrared light toward the first part and receives the light backscattered from the surface of the first part. By measuring the time interval between the moment the infrared light is sent and the moment the backscattered light is received, the three-dimensional sensor determines the physical distance between itself and the surface of the first part.
For instance, the three-dimensional sensor can obtain its distance to the ground through the following processing:
the three-dimensional sensor sends infrared light toward the ground and receives the light backscattered from the ground. By measuring the time interval between the moment the infrared light is sent and the moment the backscattered light is received, the three-dimensional sensor determines the physical distance between itself and the ground.
For instance, the first threshold range can be 1.3 metres to 2.2 metres, or 1.5 metres to 2.0 metres.
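The arithmetic of S103 can be sketched as follows. Only the round-trip relation d = c·t/2, the absolute-difference step, and the 1.3–2.2 m example range come from the text; the ceiling-mounted sensor height and the round-trip times are made-up illustrations.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_s):
    """The infrared pulse travels out and back, hence the division by 2."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def first_distance(d_to_part, d_to_ground):
    """Absolute difference of the two sensor-to-surface distances."""
    return abs(d_to_ground - d_to_part)

def in_first_threshold(d, lo=1.3, hi=2.2):
    """The broader example range given in the text: 1.3 m to 2.2 m."""
    return lo <= d <= hi

# A ceiling-mounted sensor 3.0 m above the ground measures a head 1.25 m away:
d_part = tof_distance(2 * 1.25 / SPEED_OF_LIGHT)    # 1.25 m to the first part
d_ground = tof_distance(2 * 3.0 / SPEED_OF_LIGHT)   # 3.00 m to the ground
height = first_distance(d_part, d_ground)           # 1.75 m above the ground
```

Here 1.75 m falls within the 1.3–2.2 m threshold range, so the object is height-consistent with a standing person.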
S104: the three-dimensional sensor determines, according to the first information, that the position of the at least one object has changed.
For instance, the depth sensor can determine that the depth of the at least one object has changed, and thereby determine that the position of the at least one object has changed. For example, the depth of the at least one object at a first moment is not equal to its depth at a second moment. Specifically, the depth sensor can determine that the depth of a certain pixel of the image of the at least one object has changed, and thereby determine that the position of the at least one object has changed.
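The per-pixel comparison above can be sketched minimally as a frame difference. The 0.05 m noise margin is an illustrative assumption, not a value from the text.

```python
import numpy as np

def position_changed(depth_first_moment, depth_second_moment, margin=0.05):
    """True when any pixel's depth differs between the two moments by more
    than the noise margin."""
    diff = np.abs(depth_second_moment - depth_first_moment)
    return bool(np.any(diff > margin))

frame_t1 = np.full((4, 4), 2.0)      # everything 2.0 m from the sensor
frame_t2 = frame_t1.copy()
frame_t2[1, 2] = 1.6                 # one pixel moved 0.4 m closer
moved = position_changed(frame_t1, frame_t2)          # position has changed
still = position_changed(frame_t1, frame_t1.copy())   # no change
```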
S105: the three-dimensional sensor determines that the at least one object is at least one person, the at least one object corresponding one-to-one to the at least one person.
For instance, the at least one person can be a single person or a plurality of people.
In the above technical scheme, the three-dimensional sensor determines that the first part of the at least one object is a convex hull. For a person standing on the ground, seen from above, the three-dimensional sensor perceives the person's head and shoulders, and the head and shoulders correspond to a convex hull; the at least one object is therefore consistent with a person standing on the ground. In addition, the three-dimensional sensor determines, according to the first information, that the position of the at least one object has changed; a person standing on the ground will usually move, so the at least one object is again consistent with a person standing on the ground. The above technical scheme therefore helps reduce the cost of identifying objects in a particular space, and also helps improve the accuracy of the identification.
Optionally, in the method shown in Fig. 1, the method further comprises:
the three-dimensional sensor determines that the number of people in the first space is N, where N equals the number of the at least one object.
The above technical scheme helps count the number of people in the first space.
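Since the objects correspond one-to-one to people, N can be read as the number of separate above-threshold blobs in the height map. The sketch below is an illustrative assumption (4-connected flood fill over the 1.3–2.2 m band), not the patent's actual counting procedure.

```python
import numpy as np
from collections import deque

def count_people(height_map, lo=1.3, hi=2.2):
    """Count connected blobs whose height lies in the first threshold range;
    each blob is taken to be one object, i.e. one person."""
    mask = (height_map >= lo) & (height_map <= hi)
    seen = np.zeros_like(mask, dtype=bool)
    n = 0
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        n += 1                              # a new blob: one more person
        q = deque([(y, x)])
        seen[y, x] = True
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return n
```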
Optionally, in the above technical scheme, the three-dimensional sensor obtaining the first information comprises:
the three-dimensional sensor obtains a plurality of pieces of sub-information within a first time interval, where the first information comprises the plurality of pieces of sub-information; the plurality of pieces of sub-information correspond one-to-one to a plurality of moments, the plurality of moments being moments within the first time interval; each piece of sub-information comprises a sub-image of the at least one object and the depth of each pixel in the sub-image; the plurality of pieces of sub-information comprise first sub-information and second sub-information; the moment corresponding to the first sub-information is a first moment, the first moment being the earliest of the plurality of moments; and the moment corresponding to the second sub-information is a second moment, the second moment being the latest of the plurality of moments;
the method further comprises:
the three-dimensional sensor determines a second distance according to the first information, the second distance being the distance between the position of the at least one object at the first moment and its position at the second moment; and
the three-dimensional sensor determines that the speed of the at least one object equals a first speed, the first speed being the second distance divided by the first time interval.
When the at least one object passes through the first space, its trajectory in the first space may be a straight line. The above technical scheme helps measure the speed of the at least one object.
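The straight-line case above can be sketched directly: the second distance is the displacement between the positions at the earliest and latest moments, and the first speed divides it by the first time interval. The positions and moments below are illustrative.

```python
import math

def first_speed_straight(positions, moments):
    """positions: (x, y) in metres, one per moment; moments: seconds.
    Uses only the earliest and latest samples, as in the straight-line case."""
    (x1, y1), (x2, y2) = positions[0], positions[-1]
    second_distance = math.hypot(x2 - x1, y2 - y1)
    first_time_interval = moments[-1] - moments[0]
    return second_distance / first_time_interval

# 4 m along a straight corridor in 2 s:
v = first_speed_straight([(0, 0), (2, 0), (4, 0)], [0.0, 1.0, 2.0])  # 2.0 m/s
```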
Optionally, in the above technical scheme,
the three-dimensional sensor obtaining the first information comprises: the three-dimensional sensor obtains a plurality of pieces of sub-information within a first time interval, where the first information comprises the plurality of pieces of sub-information; the plurality of pieces of sub-information correspond one-to-one to a plurality of moments, the plurality of moments being moments within the first time interval; each piece of sub-information comprises a sub-image of the at least one object and the depth of each pixel in the sub-image; the plurality of pieces of sub-information comprise first sub-information and second sub-information; the moment corresponding to the first sub-information is a first moment, the first moment being the earliest of the plurality of moments; and the moment corresponding to the second sub-information is a second moment, the second moment being the latest of the plurality of moments;
the method further comprises:
the three-dimensional sensor determines a second distance according to the first information, the second distance being the sum of N-1 distances, the N-1 distances comprising the distances between the positions of the at least one object at every two adjacent moments of the plurality of moments; and
the three-dimensional sensor determines that the speed of the at least one object equals a first speed, the first speed being the second distance divided by the first time interval.
When the at least one object passes through the first space, its trajectory in the first space may not be a straight line; for example, the trajectory may be a curve or a broken line. The above technical scheme helps measure the speed of the at least one object.
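The curved or broken-line case can be sketched the same way, except that the second distance sums the N-1 distances between positions at adjacent moments, tracking the actual path length rather than the end-to-end displacement. The L-shaped path below is illustrative.

```python
import math

def first_speed_path(positions, moments):
    """Sum the N-1 adjacent-moment distances, then divide by the interval."""
    second_distance = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(positions, positions[1:]))
    return second_distance / (moments[-1] - moments[0])

# A broken-line path: 3 m along x, then 4 m along y, covered in 2 s.
path = [(0, 0), (3, 0), (3, 4)]
v = first_speed_path(path, [0.0, 1.0, 2.0])  # 7 m / 2 s = 3.5 m/s
```

Note that the end-to-end displacement of this path is only 5 m, so the straight-line formula would understate the speed; summing adjacent segments is what makes the scheme valid for curves and broken lines.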
Optionally, in the above technical scheme,
the first part is the head of the at least one person, or the head and shoulders of the at least one person.
Fig. 2 is a schematic structural diagram of a three-dimensional sensor provided by an embodiment of the present invention. The three-dimensional sensor can be used to perform the method shown in Fig. 1. Referring to Fig. 2, the three-dimensional sensor comprises:
a first obtaining unit 201, configured to obtain first information, the first information being an image of at least one object in a first space and the depth of each pixel in the image.
For instance, the three-dimensional sensor can comprise an image sensor and a depth sensor. The image sensor can be a video camera. The depth sensor can be a depth-sensing camera.
For instance, the first obtaining unit 201 can be the image sensor and the depth sensor.
a first determining unit 202, configured to determine, according to the first information, that a first part of the at least one object is a convex hull, the first part being the part of the at least one object farthest from the ground.
For instance, the ground is a boundary of the first space.
For instance, the three-dimensional sensor can hold predefined objects, stored in an object library. The predefined objects can include a convex hull, whose shape is consistent with a person's head and shoulders as perceived by the three-dimensional sensor from above. The three-dimensional sensor can compare the information of the at least one object it detects with the predefined objects in the object library, and thereby identify that the at least one object is consistent with the convex hull.
For instance, the first determining unit 202 can be a processor. The processor can be a CPU or an ASIC.
a second determining unit 203, configured to determine, according to the first information, that the value of a first distance falls within a first threshold range, the first distance being the distance between the first part and the ground.
For instance, obtaining the first distance can comprise:
the three-dimensional sensor obtains the distance between itself and the first part;
the three-dimensional sensor obtains the distance between itself and the ground; and
the three-dimensional sensor obtains the first distance, the first distance being the absolute value of the difference between the two distances above.
For instance, the three-dimensional sensor can obtain its distance to the first part through the following processing:
the three-dimensional sensor sends infrared light toward the first part and receives the light backscattered from the surface of the first part. By measuring the time interval between the moment the infrared light is sent and the moment the backscattered light is received, the three-dimensional sensor determines the physical distance between itself and the surface of the first part.
For instance, the three-dimensional sensor can obtain its distance to the ground through the following processing:
the three-dimensional sensor sends infrared light toward the ground and receives the light backscattered from the ground. By measuring the time interval between the moment the infrared light is sent and the moment the backscattered light is received, the three-dimensional sensor determines the physical distance between itself and the ground.
For instance, the first threshold range can be 1.3 metres to 2.2 metres, or 1.5 metres to 2.0 metres.
For instance, the second determining unit 203 can be the processor.
a third determining unit 204, configured to determine, according to the first information, that the position of the at least one object has changed.
For instance, the depth sensor can determine that the depth of the at least one object has changed, and thereby determine that the position of the at least one object has changed. For example, the depth of the at least one object at a first moment is not equal to its depth at a second moment. Specifically, the depth sensor can determine that the depth of a certain pixel of the image of the at least one object has changed, and thereby determine that the position of the at least one object has changed.
For instance, the third determining unit 204 can be the processor.
a fourth determining unit 205, configured to determine that the at least one object is at least one person, the at least one object corresponding one-to-one to the at least one person.
For instance, the at least one person can be a single person or a plurality of people.
For instance, the fourth determining unit 205 can be the processor.
In the above technical scheme, the three-dimensional sensor determines that the first part of the at least one object is a convex hull. For a person standing on the ground, seen from above, the three-dimensional sensor perceives the person's head and shoulders, and the head and shoulders correspond to a convex hull; the at least one object is therefore consistent with a person standing on the ground. In addition, the three-dimensional sensor determines, according to the first information, that the position of the at least one object has changed; a person standing on the ground will usually move, so the at least one object is again consistent with a person standing on the ground. The above technical scheme therefore helps reduce the cost of identifying objects in a particular space, and also helps improve the accuracy of the identification.
Alternatively, the three-dimensional sensor shown in Fig. 2 also comprises:
A fifth determining unit, configured to determine that the number of persons in the first space is N, where N equals the number of the at least one object.
Alternatively, in the above technical scheme, the first acquiring unit is specifically configured to:
obtain a plurality of pieces of sub-information within a first time interval, the first information comprising the plurality of pieces of sub-information, the plurality of pieces of sub-information being in one-to-one correspondence with a plurality of moments, the plurality of moments being moments within the first time interval, each piece of sub-information comprising a sub-image of the at least one object and the depth of each pixel in the sub-image, the plurality of pieces of sub-information comprising first sub-information and second sub-information, the moment corresponding to the first sub-information being a first moment, the first moment being the earliest of the plurality of moments, the moment corresponding to the second sub-information being a second moment, and the second moment being the latest of the plurality of moments;
The three-dimensional sensor also comprises:
A sixth determining unit, configured to determine a second distance according to the first information, the second distance being the distance between the position of the at least one object at the first moment and the position of the at least one object at the second moment; and
A seventh determining unit, configured to determine that the speed of the at least one object equals a first speed, the first speed being equal to the second distance divided by the first time interval.
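A minimal sketch of this endpoint-based speed estimate, assuming positions are 2-D ground-plane coordinates (the track data and function name are invented for the example):

```python
import math

def endpoint_speed(positions, interval):
    """Speed from the straight-line distance between the earliest and latest positions."""
    (x1, y1), (x2, y2) = positions[0], positions[-1]
    return math.hypot(x2 - x1, y2 - y1) / interval

track = [(0.0, 0.0), (1.0, 1.0), (3.0, 4.0)]  # positions over the first time interval
print(endpoint_speed(track, 5.0))  # 5.0 m over 5.0 s -> 1.0
```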
Alternatively, in the above technical scheme, the first acquiring unit is specifically configured to:
obtain a plurality of pieces of sub-information within a first time interval, the first information comprising the plurality of pieces of sub-information, the plurality of pieces of sub-information being in one-to-one correspondence with a plurality of moments, the plurality of moments being moments within the first time interval, each piece of sub-information comprising a sub-image of the at least one object and the depth of each pixel in the sub-image, the plurality of pieces of sub-information comprising first sub-information and second sub-information, the moment corresponding to the first sub-information being a first moment, the first moment being the earliest of the plurality of moments, the moment corresponding to the second sub-information being a second moment, and the second moment being the latest of the plurality of moments;
The three-dimensional sensor also comprises:
A sixth determining unit, configured to determine a second distance according to the first information, the second distance being the sum of N-1 distances, the N-1 distances comprising the distances between the positions of the at least one object at any two adjacent moments of the plurality of moments; and
A seventh determining unit, configured to determine that the speed of the at least one object equals a first speed, the first speed being equal to the second distance divided by the first time interval.
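This alternative sums the N-1 distances between adjacent moments, giving a path length rather than a straight-line displacement. A minimal sketch under the same assumptions as before (track data and function name invented for the example):

```python
import math

def path_speed(positions, interval):
    """Speed from the summed distances between every pair of adjacent positions."""
    total = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    )
    return total / interval

track = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)]  # segment lengths 5.0 and 6.0
print(path_speed(track, 11.0))  # 11.0 m over 11.0 s -> 1.0
```

For a winding trajectory this estimate exceeds the endpoint-based one, since the path length is never shorter than the displacement.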
Alternatively, in the above technical scheme,
The first part is the head of the at least one person, or the head and shoulders of the at least one person.
The technical schemes provided by the embodiments of the present invention, and/or parts thereof, can be realized by hardware or by a combination of a computer program and hardware. The hardware can be a processor. The computer program can be stored in a non-transitory storage medium. The processor can access the computer program stored in the non-transitory storage medium to execute the technical schemes provided by the embodiments of the present invention and/or parts thereof. The non-transitory storage medium can be a DVD, a CD, a hard disk or Flash memory. The processor can be a CPU, an ASIC or an FPGA.
The above are only preferred embodiments of the present invention. It should be pointed out that those skilled in the art can make further improvements and modifications without departing from the principles of the invention, and such improvements and modifications should also be considered within the protection scope of the present invention.

Claims (10)

1. A method for determining an object, characterized in that it comprises:
A three-dimensional sensor obtaining first information, the first information being an image of at least one object in a first space and the depth of each pixel in the image;
The three-dimensional sensor determining, according to the first information, that a first part of the at least one object is a convex hull, the first part being the part of the at least one object farthest from the ground;
The three-dimensional sensor determining, according to the first information, that the value of a first distance is a value within a first threshold, the first distance being the distance between the first part and the ground;
The three-dimensional sensor determining, according to the first information, that the position of the at least one object has changed; and
The three-dimensional sensor determining that the at least one object is at least one person, the at least one object and the at least one person being in one-to-one correspondence.
2. The method according to claim 1, characterized in that the method also comprises:
The three-dimensional sensor determining that the number of persons in the first space is N, N equaling the number of the at least one object.
3. The method according to claim 1 or 2, characterized in that the three-dimensional sensor obtaining the first information comprises:
The three-dimensional sensor obtaining a plurality of pieces of sub-information within a first time interval, the first information comprising the plurality of pieces of sub-information, the plurality of pieces of sub-information being in one-to-one correspondence with a plurality of moments, the plurality of moments being moments within the first time interval, each piece of sub-information comprising a sub-image of the at least one object and the depth of each pixel in the sub-image, the plurality of pieces of sub-information comprising first sub-information and second sub-information, the moment corresponding to the first sub-information being a first moment, the first moment being the earliest of the plurality of moments, the moment corresponding to the second sub-information being a second moment, and the second moment being the latest of the plurality of moments;
The method also comprising:
The three-dimensional sensor determining a second distance according to the first information, the second distance being the distance between the position of the at least one object at the first moment and the position of the at least one object at the second moment; and
The three-dimensional sensor determining that the speed of the at least one object equals a first speed, the first speed being equal to the second distance divided by the first time interval.
4. The method according to claim 1 or 2, characterized in that,
The three-dimensional sensor obtaining the first information comprises: the three-dimensional sensor obtaining a plurality of pieces of sub-information within a first time interval, the first information comprising the plurality of pieces of sub-information, the plurality of pieces of sub-information being in one-to-one correspondence with a plurality of moments, the plurality of moments being moments within the first time interval, each piece of sub-information comprising a sub-image of the at least one object and the depth of each pixel in the sub-image, the plurality of pieces of sub-information comprising first sub-information and second sub-information, the moment corresponding to the first sub-information being a first moment, the first moment being the earliest of the plurality of moments, the moment corresponding to the second sub-information being a second moment, and the second moment being the latest of the plurality of moments;
The method also comprising:
The three-dimensional sensor determining a second distance according to the first information, the second distance being the sum of N-1 distances, the N-1 distances comprising the distances between the positions of the at least one object at any two adjacent moments of the plurality of moments; and
The three-dimensional sensor determining that the speed of the at least one object equals a first speed, the first speed being equal to the second distance divided by the first time interval.
5. The method according to any one of claims 1 to 4, characterized in that,
The first part is the head of the at least one person, or the head and shoulders of the at least one person.
6. A three-dimensional sensor, characterized in that it comprises:
A first acquiring unit, configured to obtain first information, the first information being an image of at least one object in a first space and the depth of each pixel in the image;
A first determining unit, configured to determine, according to the first information, that a first part of the at least one object is a convex hull, the first part being the part of the at least one object farthest from the ground;
A second determining unit, configured to determine, according to the first information, that the value of a first distance is a value within a first threshold, the first distance being the distance between the first part and the ground;
A third determining unit, configured to determine, according to the first information, that the position of the at least one object has changed; and
A fourth determining unit, configured to determine that the at least one object is at least one person, the at least one object and the at least one person being in one-to-one correspondence.
7. The three-dimensional sensor according to claim 6, characterized in that it also comprises:
A fifth determining unit, configured to determine that the number of persons in the first space is N, N equaling the number of the at least one object.
8. The three-dimensional sensor according to claim 6 or 7, characterized in that the first acquiring unit is specifically configured to:
obtain a plurality of pieces of sub-information within a first time interval, the first information comprising the plurality of pieces of sub-information, the plurality of pieces of sub-information being in one-to-one correspondence with a plurality of moments, the plurality of moments being moments within the first time interval, each piece of sub-information comprising a sub-image of the at least one object and the depth of each pixel in the sub-image, the plurality of pieces of sub-information comprising first sub-information and second sub-information, the moment corresponding to the first sub-information being a first moment, the first moment being the earliest of the plurality of moments, the moment corresponding to the second sub-information being a second moment, and the second moment being the latest of the plurality of moments;
The three-dimensional sensor also comprising:
A sixth determining unit, configured to determine a second distance according to the first information, the second distance being the distance between the position of the at least one object at the first moment and the position of the at least one object at the second moment; and
A seventh determining unit, configured to determine that the speed of the at least one object equals a first speed, the first speed being equal to the second distance divided by the first time interval.
9. The three-dimensional sensor according to claim 6 or 7, characterized in that the first acquiring unit is specifically configured to:
obtain a plurality of pieces of sub-information within a first time interval, the first information comprising the plurality of pieces of sub-information, the plurality of pieces of sub-information being in one-to-one correspondence with a plurality of moments, the plurality of moments being moments within the first time interval, each piece of sub-information comprising a sub-image of the at least one object and the depth of each pixel in the sub-image, the plurality of pieces of sub-information comprising first sub-information and second sub-information, the moment corresponding to the first sub-information being a first moment, the first moment being the earliest of the plurality of moments, the moment corresponding to the second sub-information being a second moment, and the second moment being the latest of the plurality of moments;
The three-dimensional sensor also comprising:
A sixth determining unit, configured to determine a second distance according to the first information, the second distance being the sum of N-1 distances, the N-1 distances comprising the distances between the positions of the at least one object at any two adjacent moments of the plurality of moments; and
A seventh determining unit, configured to determine that the speed of the at least one object equals a first speed, the first speed being equal to the second distance divided by the first time interval.
10. The three-dimensional sensor according to any one of claims 6 to 9, characterized in that,
The first part is the head of the at least one person, or the head and shoulders of the at least one person.
CN201310635024.XA 2013-12-03 Target determining method and three-dimensional sensor Expired - Fee Related CN103700163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310635024.XA CN103700163B (en) 2013-12-03 Target determining method and three-dimensional sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310635024.XA CN103700163B (en) 2013-12-03 Target determining method and three-dimensional sensor

Publications (2)

Publication Number Publication Date
CN103700163A true CN103700163A (en) 2014-04-02
CN103700163B CN103700163B (en) 2016-11-30


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104333748A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Method, device and terminal for obtaining image main object

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101657841A (en) * 2007-02-26 2010-02-24 索尼株式会社 Information extracting method, registering device, collating device and program
JP2011039792A (en) * 2009-08-11 2011-02-24 Hitachi Kokusai Electric Inc Counter for the number of persons
CN102204259A (en) * 2007-11-15 2011-09-28 微软国际控股私有有限公司 Dual mode depth imaging
CN102779359A (en) * 2012-07-13 2012-11-14 南京大学 Automatic ticket checking device for performing passage detection based on depth image
US20130123015A1 (en) * 2011-11-15 2013-05-16 Samsung Electronics Co., Ltd. Image sensor, operation method thereof and apparatuses incuding the same



Similar Documents

Publication Publication Date Title
US9514574B2 (en) System and method for determining the extent of a plane in an augmented reality environment
JP5603403B2 (en) Object counting method, object counting apparatus, and object counting program
CN104954736B (en) Retention analytical equipment, Retention analysis system and analysis method
US9690998B2 (en) Facial spoofing detection in image based biometrics
US20190205903A1 (en) Information-processing device, data analysis method, and recording medium
US9727791B2 (en) Person detection system, method, and non-transitory computer readable medium
US20150339531A1 (en) Identifying an obstacle in a route
CN105659200A (en) Method, apparatus, and system for displaying graphical user interface
JP2013232181A5 (en)
JP2011505610A (en) Method and apparatus for mapping distance sensor data to image sensor data
US9396553B2 (en) Vehicle dimension estimation from vehicle images
EP2672426A3 (en) Security by z-face detection
KR102096230B1 (en) Determining source lane of moving item merging into destination lane
EP2645290A3 (en) Devices and methods for unlocking a lock mode
CN109961472B (en) Method, system, storage medium and electronic device for generating 3D thermodynamic diagram
US20140214481A1 (en) Determining The Position Of A Consumer In A Retail Store Using Location Markers
EP3043539B1 (en) Incoming call processing method and mobile terminal
US20160063732A1 (en) Method and apparatus for determining a building location based on a building image
EP2790129A3 (en) Image recognition device, recording medium, and image recognition method
DE102012221921A1 (en) Locate a mobile device using optically detectable landmarks
EP2849030A3 (en) Information processing apparatus and information processing method
Liciotti et al. An intelligent RGB-D video system for bus passenger counting
KR102550673B1 (en) Method for visitor access statistics analysis and apparatus for the same
KR101641251B1 (en) Apparatus for detecting lane and method thereof
CN103700163B (en) Target determining method and three-dimensional sensor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
DD01 Delivery of document by public notice

Addressee: BEIJING YUEDONG SHUANGCHENG TECHNOLOGY CO., LTD.

Document name: Notification of Passing Preliminary Examination of the Application for Invention

DD01 Delivery of document by public notice

Addressee: BEIJING YUEDONG SHUANGCHENG TECHNOLOGY CO., LTD.

Document name: Notification of Publication and of Entering the Substantive Examination Stage of the Application for Invention

DD01 Delivery of document by public notice

Addressee: BEIJING YUEDONG SHUANGCHENG TECHNOLOGY CO., LTD.

Document name: Notification of an Office Action

DD01 Delivery of document by public notice

Addressee: BEIJING YUEDONG SHUANGCHENG TECHNOLOGY CO., LTD.

Document name: Notification to Go Through Formalities of Registration

C14 Grant of patent or utility model
GR01 Patent grant
DD01 Delivery of document by public notice

Addressee: Patent director of Beijing Yuedong Shuangcheng Technology Co.,Ltd.

Document name: payment instructions

DD01 Delivery of document by public notice
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20161130

Termination date: 20201203