JP6296043B2 - Measuring device, measuring method and measuring program


Info

Publication number
JP6296043B2
Authority
JP
Japan
Prior art keywords
head
calculation
calculated
shoulder
calculating
Prior art date
Legal status
Active
Application number
JP2015232565A
Other languages
Japanese (ja)
Other versions
JP2016075693A (en)
Inventor
ドラジェン ブルシュチッチ
徹志 池田
神田 崇行
敬宏 宮下
Original Assignee
株式会社国際電気通信基礎技術研究所
Priority date
Filing date
Publication date
Priority to JP2011074368
Application filed by 株式会社国際電気通信基礎技術研究所
Publication of JP2016075693A
Application granted
Publication of JP6296043B2
Legal status: Active


Description

  The present invention relates to a measuring apparatus, a measuring method, and a measuring program, and more particularly to a measuring apparatus, measuring method, and measuring program for measuring objects with a three-dimensional distance measurement sensor (for example, a laser range finder, laser range scanner, 3D scanner, or range sensor).

As a conventional apparatus of this type, for example, the one disclosed in Patent Document 1 is known. In this background art, a subject is measured with a plurality of laser range finders (LRF), the position and moving speed of the subject are estimated from the measurement results, and the body orientation and arm movements of the subject are further estimated using a human shape model.
JP 2009-168578 A

  However, in the background art of Patent Document 1, every LRF is installed such that its scan surface is horizontal at waist level. Therefore, since the measurement data from the LRFs include only distance information at waist height, the three-dimensional shape and posture cannot be estimated.

  In addition, the background art of Patent Document 1 has the problem that it is difficult to separate individual people in the measurement data from the LRFs when a plurality of people are crowded together.

  Therefore, a main object of the present invention is to provide a novel measuring apparatus, measuring method, and measuring program.

  Another object of the present invention is to provide a measuring apparatus, a measuring method, and a measuring program capable of obtaining a variety of information by measuring a plurality of objects with a three-dimensional distance measuring sensor.

  Another object of the present invention is to provide a measuring device, a measuring method, and a measuring program capable of measuring the direction of a human body and the direction of a head with a three-dimensional distance measuring sensor.

  The present invention employs the following configuration in order to solve the above problems. The reference numerals in parentheses, supplementary explanations, and the like indicate correspondence relationships with embodiments described later to help understanding of the present invention, and do not limit the present invention in any way.

A first invention is a measuring apparatus (10) that measures a plurality of objects (T1, T2, ...) including persons while changing the inclination angle (α) of the scan surface (Scn) of a three-dimensional distance measurement sensor (14) with respect to the horizontal plane, and comprises detection means (S115 to S119) for detecting a person's head and shoulders based on measurement data from the three-dimensional distance measurement sensor, calculation means (S121 to S135) for calculating the direction of the person's body (θb) and the direction of the head (θh) based on the positional relationship between the head and shoulders detected by the detection means, and estimation means (S29) for estimating the person's posture based on the calculation results of the calculation means. The three-dimensional distance measurement sensor is installed at a position higher than the person's head apex (HTP). The detection means includes stratification means (S115) for stratifying, in the height direction, the measurement data corresponding to the person among the measurement data from the three-dimensional distance measurement sensor, and extraction means (S117) for extracting, from the stratification result of the stratification means, an overhead layer (HLY) that includes the head apex and a shoulder layer (SLY) located a predetermined number of layers below the overhead layer. The calculation means includes first calculation means (S121) for calculating the center point of the head based on the overhead layer extracted by the extraction means, second calculation means (S127, S129) for calculating, based on the shoulder layer extracted by the extraction means, a vector that is perpendicular to the line (SLN) connecting both shoulders and that has the head center point calculated by the first calculation means on its front side, third calculation means (S131) for calculating, from the overhead layer extracted by the extraction means, the occipital point (HRP) at the rearmost part (HRA) with respect to the direction indicated by the vector calculated by the second calculation means, fourth calculation means (S133) for calculating, from the overhead layer extracted by the extraction means, the frontal point (HFP) at the foremost part (HFA), with respect to the direction indicated by the vector calculated by the second calculation means, relative to the occipital point calculated by the third calculation means, and fifth calculation means (S135) for calculating the direction from the occipital point calculated by the third calculation means toward the frontal point calculated by the fourth calculation means. The estimation means estimates the direction indicated by the vector calculated by the second calculation means as the body direction, and estimates the direction calculated by the fifth calculation means as the head direction.

A second invention is a measuring apparatus according to the first invention, wherein the three-dimensional distance measurement sensor is installed at a position higher than the person's head apex (HTP), and the detection means includes stratification means (S115) for stratifying, in the height direction, the measurement data corresponding to the person among the measurement data from the three-dimensional distance measurement sensor, and extraction means (S117) for extracting, from the stratification result of the stratification means, an overhead layer (HLY) including the head apex and a shoulder layer (SLY) located a predetermined number of layers below the overhead layer. The calculation means includes first calculation means (S127, S129) for calculating, based on the shoulder layer extracted by the extraction means, a direction perpendicular to the line (SLN) connecting both shoulders, second calculation means (S131) for calculating, from the overhead layer extracted by the extraction means, the occipital point (HRP) at the rearmost part (HRA) with respect to the direction calculated by the first calculation means, third calculation means (S133) for calculating, from the overhead layer extracted by the extraction means, the frontal point (HFP) at the foremost part (HFA) relative to the occipital point calculated by the second calculation means, and fourth calculation means (S135) for calculating the direction from the occipital point calculated by the second calculation means toward the frontal point calculated by the third calculation means. The estimation means estimates the direction calculated by the first calculation means as the body direction, and estimates the direction calculated by the fourth calculation means as the head direction.

According to the first invention, the direction of the body and the direction of the head are measured as the person's posture by the three-dimensional distance measurement sensor, and the posture can be estimated based on the measurement results. Specifically, the head and shoulders can be detected by stratifying the measurement data corresponding to a person and extracting the uppermost (overhead) layer (HLY) and the shoulder layer (SLY) located a predetermined number of layers below it. The measurement data corresponding to a person can be obtained, for example, by clustering the measurement data from the three-dimensional distance measurement sensor. The body direction can then be calculated by obtaining, from the shoulder layer, a vector perpendicular to the line connecting both shoulders. Furthermore, the head direction can be calculated by locating the back of the head in the overhead layer based on the body direction and then obtaining the direction from the back of the head toward the front of the head, as in the sketch below.
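
The following is a minimal sketch (not the patented implementation) of the layering-based processing just described. It assumes `person_points` is an (N, 3) array of world-coordinate measurement points already clustered for one person; the layer thickness and the head-to-shoulder layer offset are illustrative values.

```python
# Minimal sketch (not the patented implementation) of the layering-based
# head/shoulder processing described above. `person_points` is assumed to be an
# (N, 3) array of world-coordinate points already clustered for one person;
# the layer thickness and the head-to-shoulder layer offset are illustrative.
import numpy as np

LAYER_H = 0.05          # assumed layer thickness in metres
SHOULDER_OFFSET = 5     # assumed number of layers from the head apex down to the shoulders

def body_and_head_direction(person_points: np.ndarray):
    z = person_points[:, 2]
    layer_idx = np.floor((z.max() - z) / LAYER_H).astype(int)      # S115: stratify by height

    head_layer = person_points[layer_idx == 0]                      # S117: overhead layer (HLY)
    shoulder_layer = person_points[layer_idx == SHOULDER_OFFSET]    # S117: shoulder layer (SLY)

    head_center = head_layer[:, :2].mean(axis=0)                    # S121: head center point

    # S127/S129: vector perpendicular to the shoulder line (SLN), oriented so
    # that the head center lies on its front side -> body direction.
    shoulders = shoulder_layer[:, :2]
    centred = shoulders - shoulders.mean(axis=0)
    shoulder_axis = np.linalg.svd(centred, full_matrices=False)[2][0]
    body_dir = np.array([-shoulder_axis[1], shoulder_axis[0]])
    if np.dot(body_dir, head_center - shoulders.mean(axis=0)) < 0:
        body_dir = -body_dir

    # S131/S133/S135: rearmost (occipital, HRP) and foremost (frontal, HFP)
    # head points along the body direction give the head direction.
    proj = head_layer[:, :2] @ body_dir
    head_rear = head_layer[np.argmin(proj), :2]
    head_front = head_layer[np.argmax(proj), :2]
    head_dir = (head_front - head_rear) / np.linalg.norm(head_front - head_rear)

    theta_b = np.arctan2(body_dir[1], body_dir[0])                  # body direction θb
    theta_h = np.arctan2(head_dir[1], head_dir[0])                  # head direction θh
    return head_center, theta_b, theta_h
```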

The second invention is a measuring apparatus according to the first invention, wherein the calculation means further includes discrimination means (S123) for determining whether the shoulder layer includes only one shoulder or both shoulders, and sixth calculation means (S125) for calculating, when the discrimination means determines that the shoulder layer includes only one shoulder, the position of the other shoulder based on that one shoulder and the head center point calculated by the first calculation means, so as to satisfy the conditions that both shoulders are positioned symmetrically to the left and right of the head and that the head is ahead of the line connecting both shoulders. The second calculation means calculates a vector perpendicular to the line connecting both shoulders when the discrimination means determines that the shoulder layer includes both shoulders, and calculates a vector perpendicular to the line connecting the one shoulder and the other shoulder calculated by the sixth calculation means when the discrimination means determines that only one shoulder is included.

According to the second invention, the center point of the head can be calculated from the overhead layer. In addition, when the shoulder layer includes only one shoulder, the other shoulder can be calculated, based on that one shoulder and the head center point, so as to satisfy the conditions that "the shoulders are symmetric with respect to the head" and that "the head is ahead of the line connecting both shoulders". If the shoulder layer includes both shoulders, the vector perpendicular to the line connecting them is obtained; if only one shoulder is included, the vector perpendicular to the line connecting that shoulder and the calculated other shoulder is obtained. The body direction can therefore be calculated as long as the head and at least one shoulder are detected.

A third aspect of the invention is a measuring apparatus according to the second aspect of the invention, wherein the determination unit performs determination based on a distribution of a point group constituting the shoulder layer with respect to the center point of the head.

  According to the third invention, whether the shoulder layer includes both shoulders or only one shoulder can be determined by checking whether the point group constituting the shoulder layer is distributed evenly on both sides of the head center point or is concentrated on one side.

A fourth invention is a measuring apparatus according to the second or third invention, wherein the estimating means estimates the position of the center point of the head calculated by the first calculating means as the position of the person.

According to the fourth invention, the position of the person can be estimated from the center point of the head. In other words, by using the center point of the head obtained for position estimation, the body direction can be calculated even if only the head and one shoulder are detected.

A fifth invention is a measurement method for measuring a plurality of objects including persons while changing the inclination angle of the scan surface of a three-dimensional distance measurement sensor with respect to the horizontal plane, and includes a detection step of detecting a person's head and shoulders based on measurement data from the three-dimensional distance measurement sensor, a calculation step of calculating the direction of the person's body and the direction of the head based on the positional relationship between the head and shoulders detected in the detection step, and an estimation step of estimating the person's posture based on the calculation results of the calculation step. The three-dimensional distance measurement sensor is installed at a position higher than the person's head apex. The detection step includes a stratification step of stratifying, in the height direction, the measurement data corresponding to the person among the measurement data from the three-dimensional distance measurement sensor, and an extraction step of extracting, from the result of the stratification step, an overhead layer including the head apex and a shoulder layer located a predetermined number of layers below the overhead layer. The calculation step includes a first calculation step of calculating the center point of the head based on the overhead layer extracted in the extraction step, a second calculation step of calculating, based on the shoulder layer extracted in the extraction step, a vector that is perpendicular to the line connecting both shoulders and that has the head center point calculated in the first calculation step on its front side, a third calculation step of calculating, from the overhead layer extracted in the extraction step, the occipital point at the rearmost part with respect to the direction of the vector calculated in the second calculation step, a fourth calculation step of calculating, from the overhead layer extracted in the extraction step, the frontal point at the foremost part, with respect to the direction of the vector calculated in the second calculation step, relative to the occipital point calculated in the third calculation step, and a fifth calculation step of calculating the direction from the occipital point calculated in the third calculation step toward the frontal point calculated in the fourth calculation step. The estimation step estimates the direction of the vector calculated in the second calculation step as the body direction, and estimates the direction calculated in the fifth calculation step as the head direction.

A sixth invention is a measurement program executed by the computer of a measuring apparatus that measures a plurality of objects including persons while changing the inclination angle of the scan surface of a three-dimensional distance measurement sensor with respect to the horizontal plane. The program causes the computer to function as detection means for detecting a person's head and shoulders based on measurement data from the three-dimensional distance measurement sensor, calculation means for calculating the direction of the person's body and the direction of the head based on the positional relationship between the head and shoulders detected by the detection means, and estimation means for estimating the person's posture based on the calculation results of the calculation means. The three-dimensional distance measurement sensor is installed at a position higher than the person's head apex. The detection means includes stratification means for stratifying, in the height direction, the measurement data corresponding to the person among the measurement data from the three-dimensional distance measurement sensor, and extraction means for extracting, from the stratification result of the stratification means, an overhead layer including the head apex and a shoulder layer located a predetermined number of layers below the overhead layer. The calculation means includes first calculation means for calculating the center point of the head based on the overhead layer extracted by the extraction means, second calculation means for calculating, based on the shoulder layer extracted by the extraction means, a vector that is perpendicular to the line connecting both shoulders and that has the head center point calculated by the first calculation means on its front side, third calculation means for calculating, from the overhead layer extracted by the extraction means, the occipital point at the rearmost part with respect to the direction of the vector calculated by the second calculation means, fourth calculation means for calculating, from the overhead layer extracted by the extraction means, the frontal point at the foremost part, with respect to the direction of the vector calculated by the second calculation means, relative to the occipital point calculated by the third calculation means, and fifth calculation means for calculating the direction from the occipital point calculated by the third calculation means toward the frontal point calculated by the fourth calculation means. The estimation means estimates the direction of the vector calculated by the second calculation means as the body direction, and estimates the direction calculated by the fifth calculation means as the head direction.

  According to the present invention, a measuring device, a measuring method, and a measuring program capable of obtaining a variety of information by measuring a plurality of objects with a three-dimensional distance measuring sensor are realized. In addition, a measurement device, a measurement method, and a measurement program that can measure the direction of the human body and the direction of the head with a three-dimensional distance measurement sensor are realized.

  The above object, other objects, features and advantages of the present invention will become more apparent from the following detailed description of embodiments with reference to the drawings.

FIG. 1 is a block diagram showing the configuration of a measuring apparatus according to one embodiment of the present invention.
FIG. 2 is an illustrative view for explaining the measurement process by an LRF, in which (A) shows the measurement field (scan plane) and (B) shows the relationship between a subject and the measurement field.
FIG. 3 is an illustrative view for explaining the coordinate transformation applied to measurement data from an inclined LRF, in which (A) shows the relationship between a horizontal scan plane and an inclined scan plane and (B) shows the parameters used for the coordinate transformation.
FIG. 4 is an illustrative view (top view) showing an example arrangement of 16 LRFs in a shopping center.
FIG. 5 is an illustrative view showing an example of the measurement area of one LRF.
FIG. 6 is an illustrative view showing an example of a scan by one LRF.
FIG. 7 is an illustrative view showing a memory map.
FIG. 8 is a flowchart showing part of the CPU operation.
FIG. 9 is a flowchart showing another part of the CPU operation.
FIG. 10 is a flowchart showing still another part of the CPU operation.
FIG. 11 is a flowchart showing yet another part of the CPU operation.
FIG. 12 is a flowchart showing a further part of the CPU operation.
FIG. 13 is a flowchart showing a still further part of the CPU operation.
FIG. 14 is an illustrative view showing the body shape model above the shoulders.
FIG. 15 is an illustrative view showing a whole-body skeleton model; the part of this skeleton model above the shoulders corresponds to the shape model of FIG. 14.
FIG. 16 is an illustrative view showing the data structures of various information stored in the memory, in which (A) shows an example of the data structure of the target information, (B) the group information, (C) the individual action information, (D) the group action information, and (E) the group action pattern information.
FIG. 17 is an illustrative view showing the relationship between the moving direction of the whole target and the direction of the head (line-of-sight direction).
FIG. 18 is an illustrative view showing the variables relating to a person's position and posture.
FIG. 19 is a flowchart showing the method of measuring a person's position and posture (body direction and head direction) based on the output (3D scan data) of the LRF when the head and both shoulders are detected.
FIG. 20 is an illustrative view showing the stratification of a 3D scan corresponding to a human body.
FIG. 21 is an illustrative view showing one shape of a 3D scan corresponding to a human body (when the LRF measurement angle is 90 degrees and the person is sideways with respect to the LRF).
FIG. 22 is an illustrative view showing another such shape (when the LRF measurement angle is 90 degrees and the person faces the LRF from the front).
FIG. 23 is an illustrative view showing another shape of a 3D scan corresponding to a human body (when the LRF measurement angle is 0 degrees; the person's direction is arbitrary).
FIG. 24 is an illustrative view showing still another shape of a 3D scan corresponding to a human body (when the LRF measurement angle is 30 to 40 degrees and the person is sideways or oblique with respect to the LRF).
FIG. 25 is a flowchart showing the method of measuring a person's position and posture (body direction and head direction) based on the output (3D scan data) of the LRF when the head and one shoulder are detected.
FIG. 26 is an illustrative view showing the possible placements of the other shoulder, derived from the characteristic that "the head is ahead of the shoulders", for the detection result (head and one shoulder) of FIG. 25, in which (A) shows the placement when the person faces the left side of the LRF (the LRF is scanning the person's right side) and (B) shows the placement when the person faces the right side of the LRF (the LRF is scanning the person's left side).
FIG. 27 is a flowchart showing another part of the CPU operation.
FIG. 28 is an illustrative view showing a comparison between the results of estimating a person's posture based on the LRF output (estimation without tracking) and the results measured by a motion tracker, in which (A) shows the comparison for the body direction and (B) for the head direction.
FIG. 29 is an illustrative view showing part of a comparison between the results estimated while tracking a person's position and posture with a particle filter based on the LRF output and the results actually measured with a motion tracker, in which (A) shows the comparison for the position x and (B) for the position y.
FIG. 30 is an illustrative view showing another part of the comparison between the results estimated while tracking a person's position and posture with the particle filter based on the LRF output and the results measured with the motion tracker, in which (A) shows the comparison for the body direction θb and (B) for the head direction θh.

(First embodiment)
Referring to FIG. 1, the measuring apparatus 10 of this embodiment includes a computer 12, and a plurality of (here, 16) laser range finders (hereinafter "LRF") 14 are connected to the computer 12. In the following description, a "laser range finder" is used as an example of the "three-dimensional distance measurement sensor"; however, the present invention is not limited to this case, and another device may be used as long as it is a sensor capable of measuring the three-dimensional distance from the sensor to the target. For example, a Microsoft Kinect (registered trademark) sensor, a Panasonic three-dimensional distance image sensor D-IMAGEr, or the like can also be used. This type of sensor is sometimes called a laser range scanner, 3D scanner, range sensor, or the like.

  The computer 12 may be a general personal computer (PC), and includes an input device 12a such as a keyboard, a display device 12b such as a monitor, a CPU 12c, a memory 12d, and the like. In some cases, each LRF 14 may be connected to a secondary computer (not shown), and the secondary computer may be connected to the computer 12.

  In general, an LRF emits a laser beam and measures the distance from the time it takes for the beam to be reflected by a target (an object, a human body, etc.) and return. The LRF 14 of this embodiment includes a mirror (not shown) that rotates about an axis over a range of, for example, ±45 degrees, and the direction of the laser beam (angle β; see FIG. 2) can be changed by this rotating mirror in steps of, for example, 0.6 degrees, so that measurement (scanning with the laser beam) can be performed while changing β. Hereinafter, the plane scanned with the laser beam by the LRF 14 is referred to as the scan plane. The distance measurable by the LRF 14 is limited to a predetermined distance R (for example, 15 m) or less so that the laser beam does not affect human eyes. Therefore, the measurement region (scan plane Scn) of the LRF 14 has a fan shape as shown in FIG. 2(A), for example a fan shape with radius R and a central angle of 90 degrees. Note that the central angle of the scan plane Scn is not limited to 90 degrees, and may be, for example, 180 degrees or 360 degrees.

Furthermore, the LRF 14 also changes the inclination angle of the scan plane Scn mentioned above (that is, the angle α of the scan plane Scn with respect to the horizontal plane) in steps of 0.6 degrees within a range of, for example, 24 degrees to 90 degrees. That is, the distance d can be measured while changing the two angles α and β. The distance information from the LRF 14 therefore involves the three variables d, α, and β, and such distance information is referred to as "three-dimensional distance information".

  The computer 12 acquires three-dimensional distance information on the target through the LRFs 14. As shown in FIG. 2(B), the three-dimensional distance information from an LRF 14 includes the distance d and the angles α and β (local coordinate system) for each point P on the contour line Ln of the cut surface Crs obtained when the object Obj is cut by the scan plane Scn. However, distance information cannot be obtained from the LRF 14 for the shadowed portion that the laser beam does not reach, so if necessary, one or more additional LRFs 14 must be arranged on the opposite side of the target or so as to surround the target.

  Therefore, if a plurality of LRFs 14 are installed at predetermined (known) positions, the computer 12 can acquire three-dimensional distance information on the target from each of them and calculate the target's position in three-dimensional space (world coordinate system) (for example, the position coordinates (x, y, z) of a feature point such as the center of gravity) and moving speed, as well as its three-dimensional shape and direction, and its posture (for example, the direction of a characteristic part such as the head, an arm, or a leg).

  However, when measurement is performed with the LRF 14, the scan plane Scn is inclined from the horizontal plane as shown in FIG. 3(A), so the measured distance is apparently longer than it would be with a horizontal scan plane. Further, the height (z) at which the target is measured changes according to the distance to the target. Therefore, in addition to the conversion from the local coordinate system to the world coordinate system, a coordinate conversion that corrects the inclination is also required.

  Specifically, referring to FIGS. 3(A) and 3(B), the position vector x′ = SP′ of each measured point after conversion, that is, the vector that would be obtained if the inclined LRF 14 were installed horizontally, is obtained from the measured position vector x = SP by the following equation (Equation 1).

  Here, n is a unit vector parallel to the rotation axis of the sensor, and α is the inclination angle of the sensor from the horizontal plane. This coordinate transformation can also be written as the following equation (Equation 2) using a rotation matrix.

Here, I₃ is the 3×3 identity matrix, and the elements of n are defined as n = (n_x, n_y, n_z) (ᵗn denotes the transpose of n).
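
Equations 1 and 2 themselves are not reproduced in this text. The description (a rotation of each measured point about the sensor's rotation axis n by the tilt angle α, written with I₃ and n ᵗn) matches the standard axis-angle (Rodrigues) rotation, so the following sketch assumes that form; the sign convention of the angle is likewise an assumption.

```python
# Hedged sketch of the tilt correction described above. The rotation below is
# the standard Rodrigues / axis-angle form
#   R = cos(a)*I3 + sin(a)*[n]x + (1 - cos(a))*n*n^T,
# used here as an assumption consistent with the surrounding description
# (n: unit vector along the sensor's rotation axis, a: tilt angle of the scan plane).
import numpy as np

def tilt_correction_matrix(n: np.ndarray, alpha_deg: float) -> np.ndarray:
    a = np.radians(alpha_deg)
    n = n / np.linalg.norm(n)
    n_cross = np.array([[0.0, -n[2],  n[1]],
                        [n[2],  0.0, -n[0]],
                        [-n[1], n[0],  0.0]])     # [n]x, the cross-product matrix
    return np.cos(a) * np.eye(3) + np.sin(a) * n_cross + (1 - np.cos(a)) * np.outer(n, n)

# x' = R x for every point x measured by the inclined LRF (the direction of the
# rotation depends on how alpha is defined and is an assumption here).
R = tilt_correction_matrix(np.array([0.0, 1.0, 0.0]), 30.0)
x_corrected = R @ np.array([1.0, 0.0, 2.0])
```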

  As described above, the measuring apparatus 10 performs measurement while changing the inclination angle α of the scan plane Scn. Therefore, the three-dimensional shape of a target can be calculated from a single measurement cycle (for example, the period in which the inclination angle α changes from the minimum of 24 degrees to the maximum of 90 degrees). In addition, the 3D shape can be calculated even if the target is stationary (however, if the target is moving, measurements can be made under various positional relationships, which is preferable because it improves the calculation accuracy of the 3D shape).

  In addition, changing the inclination angle α in this way expands the measurement area, and even when a plurality of targets are densely packed, they can easily be separated into individual targets based on measurement data taken from directly above or nearly directly above them. Then, based on the measurement data taken at the various inclination angles, the three-dimensional shape of each separated target can be calculated with high accuracy.

  In the measuring apparatus 10, the type (for example, human body or object), attributes (for example, sex or adult/child distinction for a human body, or shopping cart/baggage distinction for an object), and profile (for example, height) can be estimated from the three-dimensional shape calculated as described above, and an action (for example, walking along a passage while looking at a store or a guide board) can be estimated from the posture (for example, the direction of the head).

  In particular, by performing the above calculation and estimation for a plurality of targets at the same time, it is possible to estimate the relationships between targets (for example, whether they are friends, family members, or strangers), and it is also possible to analyze the behavior patterns of groups such as friends or families (for example, a pattern in which groups of friends pay attention to a specific store or information board, or families travel through a specific passage).

  An example of the installation of the 16 LRFs 14 in a shopping center is shown in FIG. 4, which is a top view of the shopping center as seen from directly above. In the shopping center, as the world coordinate system, an X axis and a Y axis are defined along the long side and the short side of the floor surface, for example, with a point O on the floor surface as the origin, and a Z axis (pointing vertically upward) is defined perpendicular to both the X axis and the Y axis.

  In this example, as can be seen from FIG. 4, the 16 LRFs 14 are arranged so as to span the passage that runs laterally (in the X-axis direction) through the shopping center and the plazas and open-style restaurants located at both ends of it (here, 10 units along the passage and 6 in the plazas, etc.), covering the passage and the plazas as the measurement area. The measurement area indicated by the black frame has an area of about 1220 square meters, and various restaurants and merchandise stores line its periphery.

  Of the 16 LRFs 14, the 10 along the passage (black circles) are installed at a low position, for example 3.5 m, and the 6 in the plazas and similar areas (white circles) are installed at a high position, for example 10 m. In this way, efficient measurement can be performed by varying the installation height of each LRF 14 according to the size of the measurement region that the LRF 14 is responsible for.

  For example, as shown in FIG. 5, each LRF 14 can measure a range of 66 degrees (24° ≤ α ≤ 90°) in the forward direction and 90 degrees (−45° ≤ β ≤ 45°) from left to right. Therefore, if a given LRF 14 is installed at a height of 8 m, the measurable range for a pedestrian 1.8 m tall is 12.4 m from left to right immediately below the LRF 14, and 30.5 m from left to right at the forward limit of the 66° range (10.7 m in front of the LRF 14).

  An example of the scan by one LRF 14 is shown in FIG. 6. Referring to FIG. 6, each black dot represents a position where the laser beam, emitted periodically from one LRF 14 while the angles α and β are changed, reaches the target human body or the floor. The cross mark "+" indicates the position immediately below the LRF 14. By such laser scanning, the distance from the LRF 14 to each black dot is measured, and information such as the position and orientation of the target can be obtained from the measurement results.

When the angular resolution of the LRF 14 is 0.6 degrees × 0.6 degrees (that is, the angles α and β change in increments of 0.6 degrees) and the distance from the LRF 14 to the target is 8 m, the interval between the black dots is about 8 cm. As a result, sufficient distance resolution for obtaining the position and posture of a human body is obtained.
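
As a rough arc-length check of this figure (an approximation added here, not part of the original text):

$$\Delta s \approx d\,\Delta\theta = 8\,\mathrm{m} \times 0.6^{\circ} \times \frac{\pi}{180^{\circ}} \approx 0.084\,\mathrm{m} \approx 8\,\mathrm{cm}$$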

  By installing a plurality of such LRFs 14 and measuring from different directions, detailed measurements can be performed on isolated targets, adjacent targets become easy to separate from each other, and measurement omissions due to occlusion (being hidden behind another object) can be avoided, so that stable measurement can be performed.

  The configurations and numerical values shown in FIGS. 1 to 6 are merely examples, and the number of LRFs 14, their arrangement, the installation height of each LRF 14, the angular range, the angular resolution, and the like may be changed as necessary.

  Next, the processing performed by the computer 12 of the measuring apparatus 10 will be described, starting with an outline. First, the computer 12 executes state estimation processing for estimating each target's state (for example, position, moving direction, three-dimensional shape, and posture) in real time using particle filters, based on the measurement data from the 16 LRFs 14.

  A particle filter is a type of time-series filter that estimates the current state of a target by repeating prediction and observation. Specifically, the possible next states that can arise from the current state are represented as a large number of particles, the likelihood (similarity) between each particle and the observed state is obtained, and the weighted average of all particles according to their likelihoods is taken as the estimate of the current state of the target. Then, by generating new particles according to the weights and repeating the same processing, the state of the target can be estimated sequentially.
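
A generic particle-filter loop of the kind described above might look as follows; this is a sketch with placeholder likelihood and motion models, not the patent's implementation, and the state layout is assumed.

```python
# Minimal particle-filter loop (a generic sketch, not the patent's code).
# `predict`, `likelihood`, and the state layout stand in for the hypotheses
# about position, moving direction, shape and posture described above.
import numpy as np

rng = np.random.default_rng(0)

def likelihood(particles, z):
    # Placeholder: Gaussian likelihood of the observed 2D position z
    d = np.linalg.norm(particles[:, :2] - z, axis=1)
    return np.exp(-0.5 * (d / 0.3) ** 2) + 1e-12

def predict(particles):
    # Placeholder: constant-position motion model with process noise
    return particles + rng.normal(scale=0.05, size=particles.shape)

def run_particle_filter(measurements, n_particles=500, state_dim=4):
    particles = rng.normal(size=(n_particles, state_dim))     # S21/S23/S25: init + hypotheses
    for z in measurements:
        w = likelihood(particles, z)                           # S29: likelihood of each particle
        w = w / w.sum()
        estimate = (w[:, None] * particles).sum(axis=0)        # S31/S33: weighted average
        yield estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)   # S35: resample by weight
        particles = predict(particles[idx])                    # S37/S39: predict next state
```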

  In the state estimation process, the state of each target is estimated by a dedicated particle filter. Therefore, for example, in the state where ten targets are detected, ten particle filters operate in parallel, and when another target is detected, an eleventh particle filter is newly generated.

  In parallel with the state estimation processing described above, the computer 12 also executes group estimation processing for estimating, based on the state of each target, whether the target is "one person" or belongs to a group, and individual action estimation processing for estimating the action that each target performs individually (for example, looking at a store or a guide board).

  Then, after the various estimation processes are completed, the computer 12 further analyzes group behavior based on the estimation results. In this group behavior analysis processing, each group is classified into a category such as "friends", "family", or "couple", group action information is created by analyzing the individual action information for each group, and group action pattern information is created by analyzing the group action information for each category.

  Next, a specific processing procedure will be described using the memory map of FIG. 7 and the flowcharts of FIGS. 8 to 13, with reference to the illustrations in the remaining figures. The series of processes described above is realized by the CPU 12c of the computer 12 executing processing according to the flows shown in FIGS. 8 to 13, based on the programs and data shown in FIG. 7 and stored in the memory 12d.

  Referring to FIG. 7, a program area 20 and a data area 22 are formed in the memory 12d, and the measurement program 32 is stored in the program area 20. The measurement program 32 is a software program that causes the CPU 12c to realize the measurement processing using the LRFs 14, and corresponds in particular to the main flow of FIG. 8 among the flows of FIGS. 8 to 13.

  The measurement program 32 includes an estimation program 32a and an analysis program 32b, which are software subprograms. The estimation program 32a is in charge of the state estimation processing using particle filters (FIG. 9), the attribute/profile estimation processing (FIG. 10), the individual action estimation processing (FIG. 11), and the group estimation processing (FIG. 12), while the analysis program 32b is in charge of the group analysis processing (FIG. 13). Although not shown, the program area 20 also stores an input/output control program for controlling the input device 12a and the display device 12b to realize key input, image output, and the like.

  The data area 22 stores measurement data 34, conversion data 36, particle filters 38, target information 40, group information 42, individual action information 44, group action information 46, group action pattern information 48, a three-dimensional shape model database (DB) 50, and map data 52. The measurement data 34 are data indicating the results of measurement by each LRF 14 (including the three-dimensional distance information described above). The conversion data 36 are data obtained by applying the two coordinate conversions described later to the measurement data 34. The particle filters 38 are time-series filters for estimating the state of each target (T1, T2, ...) based on the conversion data 36, and include a particle filter 38a for the target T1, a particle filter 38b for the target T2, and so on.

  The target information 40 is information indicating the state and attributes/profile of each target (T1, T2, ...), and is created based on the estimation results of the particle filters 38. Specifically, as shown in FIG. 16(A), the position, moving direction, three-dimensional shape, and posture are described as the state, and the height, gender, and adult/child distinction are described as the attributes/profile.

  The group information 42 is information indicating the groups estimated from the target information 40. Specifically, as shown in FIG. 16(B), the elements and category of each group (G1, G2, ...) are described. From this group information 42, it can be seen that the targets T1 and T2 constitute a group G1 and that the two are friends, and that the group G2 is composed of the targets T5 to T7, the three being a family. A target that is not included in any group is regarded as being "one person".

  The individual action information 44 is information indicating the individual actions of each target (T1, T2, ...) and is estimated from the target information 40. In the individual action information 44, as shown in FIG. 16(C), the trajectory of each target (T1, T2, ...) on the map is described together with its actions, for example "(target T1) saw stores A, B, ... and guide board P" and "(target T2) saw stores A, C, ...".

  The group action information 46 is information indicating the actions of each group (G1, G2, ...), and is created based on the group information 42 and the individual action information 44. In the group action information 46, as shown in FIG. 16(D), a trajectory and actions are described for each group (G1, G2, ...). For example, the trajectory of the group G1 is the average of the trajectories of the targets T1 and T2 belonging to the group G1, and the action of the group G1 is the union of the actions of the targets T1 and T2; specifically, based on "(target T1) saw stores A, B, ... and guide board P" and "(target T2) saw stores A, C, ...", it is described as "(group G1) saw stores A, B, C, ... and guide board P".

The group action pattern information 48 is information indicating the pattern common to the actions of groups belonging to the same category, and is created based on the group information 42 and the group action information 46. In the group action pattern information 48, as shown in FIG. 16(E), the elements and action pattern of each category (friends, family, couple) are described. For example, the action pattern of "friends" is the intersection of the actions of the groups G1, G5, ... belonging to "friends", and is described, for example, as "(groups of friends) look at store A and information board P".
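
For illustration only, the information of FIG. 16 could be held in structures like the following; all field names are assumptions, not taken from the patent.

```python
# Illustrative data layout mirroring FIG. 16 (field names are assumptions).
from dataclasses import dataclass, field
from typing import List, Tuple, Set

@dataclass
class TargetInfo:                 # FIG. 16(A): state + attributes/profile
    position: Tuple[float, float, float]
    moving_direction: float
    body_direction: float
    head_direction: float
    height: float = 0.0
    gender: str = ""
    adult: bool = True

@dataclass
class GroupInfo:                  # FIG. 16(B): elements and category
    elements: Set[str] = field(default_factory=set)    # e.g. {"T1", "T2"}
    category: str = ""                                  # "friends", "family", "couple"

@dataclass
class IndividualAction:           # FIG. 16(C): trajectory + actions
    trajectory: List[Tuple[float, float]] = field(default_factory=list)
    actions: Set[str] = field(default_factory=set)      # e.g. {"store A", "guide board P"}

@dataclass
class GroupAction:                # FIG. 16(D): averaged trajectory + union of actions
    trajectory: List[Tuple[float, float]] = field(default_factory=list)
    actions: Set[str] = field(default_factory=set)
```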

  The three-dimensional shape model DB 50 is a database in which two models M1 and M2 for estimating a target's three-dimensional shape and posture are registered. The model M1 is a model representing the body shape above the shoulders, as shown in FIG. 14; when the height z and the radius r in the direction θ are defined with the center position O of the head as the origin, it is expressed as r(z, θ). The model M2, on the other hand, is a model representing the whole-body skeleton, as shown in FIG. 15, and is expressed by a head bone shown as an ellipse, a torso bone shown as a quadrilateral, limb bones shown as eight line segments, and joints shown as eight black circles connecting these bones. In the model M2, the part above the shoulders corresponds to the model M1, and the lengths R1 to R4 and L1 to L4 of the bones are known (values calculated based on the measurement data 34 are set).

  Referring to FIG. 8, when the measurement process is started, the CPU 12c first performs an initial process in step S1. In the initial processing, for example, various information (40 to 49) in the data area 22 is initialized, the models M1 and M2 are registered in the three-dimensional shape model DB 50, and the map data 52 is taken in.

  When the initial processing is completed, the process proceeds to step S3, where the inclination angle (α) of each LRF 14 is controlled. At the first execution, an initial value (for example, 24 degrees) is set in the variable α. At the second and subsequent executions, α is increased by 0.6 degrees, and when α reaches the upper limit (for example, 90 degrees), α is returned to the initial value and the same operation is repeated. Each LRF 14 changes the tilt angle of its scan surface according to the variable α designated by the CPU 12c.

  Next, in step S5, measurement data 34 is acquired from each LRF 14, and in step S7, coordinate conversion (see Equations 1 and 2) is performed on the acquired measurement data 34 to correct the inclination of the scan plane Scn. . Then, the coordinate-converted measurement data is spatially integrated in step S9. In other words, coordinate conversion from the local coordinate system to the world coordinate system is further performed on the measurement data subjected to coordinate conversion in step S7.

Next, in step S11, targets are detected based on the measurement data after the two coordinate conversions, that is, the conversion data 36. As one specific detection method, the position of the floor surface is input in advance and a set of a predetermined number or more of measurement points detected at positions higher than the floor surface is regarded as a target. Another method is to measure in the same direction for a certain period of time, register the maximum distance obtained as the position of the background, and regard as a target a set of measurement points lying more than a certain distance in front of the background. When targets are not so close to each other as to be touching, sets of measurement points gathered relatively close together in three-dimensional space can also be separated and extracted as individual targets by clustering. In any case, three-dimensional measurement using the LRFs 14 has the advantage that individual targets can be separated more easily than with a camera-based three-dimensional measurement method. In particular, when measurement is performed while changing the inclination angle α of the scan plane Scn, the measurement area is expanded, and each target can be separated and its three-dimensional shape calculated accurately.
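
As one concrete (assumed) realization of the clustering-based separation, the following sketch uses DBSCAN from scikit-learn; the patent does not prescribe a particular clustering algorithm, and all thresholds are illustrative.

```python
# Sketch of clustering-based target detection (one possible realization, not
# the patent's implementation). Thresholds are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

FLOOR_Z = 0.0
MIN_POINTS = 20          # assumed minimum cluster size to count as a target

def detect_targets(points: np.ndarray):
    """points: (N, 3) world-coordinate measurements after tilt correction."""
    above_floor = points[points[:, 2] > FLOOR_Z + 0.1]          # discard floor hits
    labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(above_floor)
    targets = []
    for k in set(labels) - {-1}:                                 # -1 = noise
        cluster = above_floor[labels == k]
        if len(cluster) >= MIN_POINTS:
            targets.append(cluster)
    return targets
```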

  The process then proceeds to step S13, where it is determined whether a new target has been detected. If NO, the process moves to step S15 and further determines whether an already detected target has disappeared. If NO here as well, the process returns to step S3 and repeats the same processing as described above (for example, every 0.1 seconds; the same applies hereinafter).

If “YES” in the step S13, the process proceeds to a step S17 to generate the target particle filter 38, and thereafter, the process returns to the step S3 to repeat the same processing as described above. If “YES” in the step S15, the process proceeds to a step S19 to delete the target particle filter 38, and then returns to the step S3 to repeat the same processing as described above.

  Therefore, for example, in a state where two targets T1 and T2 are detected, the two particle filters 38a and 38b for these targets are stored in the data area 22, and the two state estimation processes they perform run in parallel. In this state, when a new target T3 is detected, a particle filter 38c for the target T3 is added; conversely, when an already detected target (for example, T1) disappears, the particle filter for that target (for example, the particle filter 38a for the target T1) is deleted.

  The state estimation processing of the targets T1, T2, ... by the particle filters 38a, 38b, ... is executed according to the flow of FIG. 9. That is, as many instances of the flow of FIG. 9 as there are currently detected targets are executed in parallel. Among these, the state estimation process of the target T1 by the particle filter 38a is as follows.

  Referring to FIG. 9, first, in step S21, each particle included in the particle filter 38a is initialized, and then the process proceeds to step S23 to generate a plurality of hypotheses relating to the state of the target T1. Here, the state includes a position, a moving direction, a three-dimensional shape, a posture (for example, a head direction), and the like. In step S25, any one hypothesis is assigned to each particle.

  Next, in step S27, it is determined whether or not new measurement data 34 has been acquired. When the measurement data 34 in the data area 22 and thus the conversion data 36 are updated by the above-described measurement processing (see FIG. 8), YES is determined in step S27, and a series of processing in steps S29 to S39 is executed.

  That is, in step S29, the likelihood of each particle with respect to the current measurement value is calculated based on the model M1, and in step S31, a weighted average is performed according to the calculated likelihood of all particles. In step S33, the weighted average value is output as an estimated value related to the state of the target T1. The target information 40 stored in the data area 22 is updated based on the estimated value output in this way.

  Further, in step S35, particles are selected based on the previously calculated likelihoods, and after the set of particles has been updated, the process proceeds to step S37 to predict the state of each particle at the next measurement time point. Then, after updating the hypotheses based on the prediction results in step S39, the process returns to step S25 and repeats the same processing as above.

  Thus, the position and orientation of the target T1 are estimated concretely as follows. In short, this estimation process is a procedure for obtaining the position and the front direction Α of the model M1 that best fit X, where X is the set of measurement points corresponding to the target T1 (the detection result of step S5). Note that the front direction Α of the model M1 may be regarded as the line-of-sight direction of the target T1.

That is, first, the set of measurement points (x, y, z) corresponding to the target T1 is taken as X. Next, a certain position and direction, that is, (X_c, Α), is given to the model M1 (see FIG. 14) expressed by the aforementioned r(z, θ). Then, the operation of the following equation (Equation 3) is performed on each point X_i of X, and the set of results is taken as X′.

  For this X′, an error from the model M1, that is, from r = r(z, θ), is defined as in the following equation (Equation 4).

Here, X′ = (x′, y′, z′)ᵗ and r′ = √(x′² + y′²), where the superscript t denotes the transpose of a matrix.

After the above preparation, the error is calculated for various values of (X_c, Α) using the particle filter, and the (X_c*, Α*) giving the minimum error is taken as the estimated position and direction of the target T1.

Note that (X_c*, Α*) can also be obtained, for example, by a method of searching values in the vicinity of the center position <X_c> of the measurement points, given by the following equation (Equation 5), used as the initial value.

Here, <x> and <y> are the averages of the x and y coordinates of the measurement points included in X, respectively, and z_max is the maximum z coordinate of the measurement points included in X. Specifically, various values in the vicinity of <X_c> are tried as X_c and directions of various angles are tried as Α, the error of the above equation (Equation 4) is calculated for each, and the combination of X_c and Α that minimizes the error may be output as (X_c*, Α*).
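
Since Equations 3 to 5 are not reproduced in this text, the following sketch only assumes their general form: points are expressed in the model frame, a radial error against r(z, θ) is accumulated, and the error is minimized over candidate positions near <X_c> and candidate directions Α. It uses a simple grid search rather than the particle-filter search of the embodiment.

```python
# Hedged sketch of fitting the shoulder-up shape model M1. The transform into
# model coordinates, the error definition, and the search grid are assumptions
# consistent with the description; r_model is a user-supplied, vectorized
# function r(z, theta) standing in for the tabulated model M1.
import numpy as np

def fit_model_m1(X: np.ndarray, r_model, search_radius=0.2, step=0.05):
    """X: (N, 3) measurement points of one target; returns (X_c*, A*)."""
    xc0 = np.array([X[:, 0].mean(), X[:, 1].mean(), X[:, 2].max()])   # Eq. 5 analogue
    best = (None, None, np.inf)
    offsets = np.arange(-search_radius, search_radius + 1e-9, step)
    for dx in offsets:
        for dy in offsets:
            xc = xc0 + np.array([dx, dy, 0.0])
            for A in np.arange(0.0, 2 * np.pi, np.radians(10)):
                # Eq. 3 analogue: express points in the model frame
                p = X - xc
                c, s = np.cos(-A), np.sin(-A)
                xp = c * p[:, 0] - s * p[:, 1]
                yp = s * p[:, 0] + c * p[:, 1]
                rp = np.hypot(xp, yp)
                theta = np.arctan2(yp, xp)
                # Eq. 4 analogue: squared radial error against the model
                err = np.sum((rp - r_model(p[:, 2], theta)) ** 2)
                if err < best[2]:
                    best = (xc, A, err)
    return best[0], best[1]        # estimated position X_c* and front direction A*
```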

  Here, the position and orientation of the part above the shoulders are estimated as the position and orientation, but more specifically the position and orientation of the head may be estimated. In this case, for example, a model (not shown) having the rotation angle of the head with respect to the shoulders as a parameter may be used. Although there are individual differences in the shape of a person's head, the orientation of the head can be estimated by using a model that reflects the general characteristics of head shape. Further, such a model may be prepared for each target T1, T2, ..., and each model may be dynamically adapted based on the head shape measured for that target, so that the position and orientation of the head can be estimated accurately for each target. In that case, if it is assumed that the average direction of the head coincides with the moving direction, it is also possible to estimate which store or signboard each target looked at while walking.

  Further, when the density of the targets T1, T2, ... is relatively low (the targets are spaced so as not to overlap), the postures of the arms and legs can be further estimated by applying the model M2 representing the whole-body skeleton, using the position and direction of the part above the shoulders (head) determined with the model M1 as a reference.

  One advantage of applying the model M2 in addition to the model M1 is that, since the height is known from the model M1, the height of each part such as an arm or a leg can be accurately estimated with the model M2. Another advantage is that, since the position and orientation of the part above the shoulders (head) are known from the model M1, the number of possible posture candidates for parts such as the arms and legs is reduced, so the posture estimation with the model M2 can be performed efficiently.

  In the posture estimation using the model M2, first, the posture of the head and shoulders (torso) of the model M2 is determined so as to match the position and direction of the part above the shoulders (head) obtained using the model M1. Next, candidates for possible postures (here, sets of eight joint angles) are calculated for the parts such as the arms and legs, in the same manner as when the model M1 is used. Then, the set of joint angles that minimizes the error from the measurement points may be obtained and output as the posture of each part.

  For the moving direction and the three-dimensional shape, likewise, possible candidates are first calculated, and then, using the particle filter 38a, the moving direction and the three-dimensional shape with the smallest error from the measurement points are obtained and described in the target information 40 as the estimation result.

  The state estimation processing of the targets T2, T3, ... by the particle filters 38b, 38c, ... is executed in the same manner as described above, and the estimation results are reflected in the target information 40.

  The attribute/profile estimation processing based on the states (target information 40) of the targets T1, T2, T3, ... estimated as described above is executed according to the flow of FIG. 10. That is, as many instances of the flow of FIG. 10 as there are currently detected targets are executed in parallel. Among these, the attribute/profile estimation process of the target T1 is as follows.

  First, in step S51, an attribute or profile of the target T1, here, height, sex, and adult / child are specified based on the repeatedly estimated three-dimensional shape. Then, it is determined in step S53 whether or not the identification has succeeded. If NO, the process returns to step S51 to repeat the same processing as described above. If “YES” in the step S53, the specifying result is described in the target information 40, and then the process returns to the step S51 to repeat the same processing as described above.

  Moreover, the individual action estimation processing based on the states (target information 40) of the targets T1, T2, T3, ... estimated as described above is executed according to the flow of FIG. 11. That is, as many instances of the flow of FIG. 11 as there are currently detected targets are executed in parallel. Among these, the individual action estimation process of the target T1 is as follows.

  Referring to FIG. 11, first, in step S61, the repeatedly estimated positions (the current positions of the target T1) are accumulated, and the accumulated result is described in the individual action information 44 as the trajectory of the target T1. Next, in step S63, the two repeatedly estimated directions, that is, the moving direction and the direction of the part above the shoulders (head), are compared, and whether the part above the shoulders (head) is turned in a direction different from the moving direction is determined in the next step S65.

  When the part above the shoulders (head) is rotated by at least a certain angle with respect to the moving direction and that state continues for at least a certain time, YES is determined in step S65 and the process proceeds to step S67. On the other hand, if the rotation angle of the part above the shoulders (head) is less than the threshold angle, or if the duration of that state is less than the threshold time even when the rotation angle is sufficient, NO is determined in step S65, and the process returns to step S61 to repeat the same processing as above.

  In step S67, based on the map data 52, the store or guide board present in the direction of the part above the shoulders (head), that is, in the line-of-sight direction, is identified (here, the store A and the guide board P, as shown in FIG. 17). In step S69, based on the identification result, the action of the target T1, for example "saw store A and guide board P", is described in the individual action information 44. The process then returns to step S61 and repeats the same processing as above.
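
A sketch of this check and lookup might look as follows; the rotation and duration thresholds, the field-of-view cone, and the map format are all assumptions.

```python
# Sketch of steps S63-S67 (all thresholds and the map format are assumptions).
import numpy as np

ANGLE_THRESH = np.radians(30)      # assumed minimum head rotation vs. moving direction
MIN_DURATION = 1.0                 # assumed minimum duration of that state, in seconds

def angle_diff(a, b):
    return np.abs((a - b + np.pi) % (2 * np.pi) - np.pi)

def head_turned(move_dir, head_dir, duration):
    """S63/S65: the head counts as turned away from the moving direction only
    if the deviation exceeds the threshold for long enough."""
    return angle_diff(move_dir, head_dir) >= ANGLE_THRESH and duration >= MIN_DURATION

def looked_at(position, head_dir, facilities, max_range=15.0, fov=np.radians(20)):
    """S67: facilities is a list of (name, (x, y)) taken from the map data;
    returns the names lying within a narrow cone around the head direction."""
    seen = []
    for name, (fx, fy) in facilities:
        v = np.array([fx, fy]) - np.asarray(position, dtype=float)
        if np.linalg.norm(v) <= max_range and angle_diff(np.arctan2(v[1], v[0]), head_dir) <= fov:
            seen.append(name)
    return seen
```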

In this way, the trajectory and actions of the target T1 described in the individual action information 44 are updated periodically. For example, the description of the action changes as "saw store A and guide board P" → "saw stores A, B and guide board P" → "saw stores A, B, ... and guide board P". The individual action estimation processes for the targets T2, T3, ... are executed in the same manner as described above, and the estimation results are reflected in the individual action information 44 (see FIG. 16(C)).

  Further, based on the states of the targets T1, T2, T3, ... estimated as described above, a group estimation process is executed according to the flow of FIG. 12. Referring to FIG. 12, first, in step S81, the group information 42 is initialized; at this point, every target is regarded as being "one person" for the time being. Next, in step S83, the positions and moving directions of the targets T1, T2, ... are compared based on the target information 40, and in step S85 it is determined whether there are targets whose positions are close to each other and whose moving directions are the same. If NO here, the process returns to step S83 after a predetermined waiting time and repeats the same processing as described above.

  If "YES" in step S85, the process proceeds to step S87 to compare the postures (the direction of the region from the shoulders to the head) between these targets, and in step S89 it is determined whether or not the regions above the shoulders (heads) face each other at a certain frequency or more. If "NO" here, the process returns to step S83 after a standby period of a predetermined time, and the same processing as above is repeated.

  If "YES" in step S89, the process proceeds to step S91 to register the targets in the group information 42 (see FIG. 16B) as one group. The process then returns to step S83 and repeats the same processing as above.
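  A rough sketch of the pairwise checks of steps S83 to S89 is shown below; two targets become a group candidate when they stay close, move in the same direction, and frequently face each other. The distance, direction, and frequency thresholds are illustrative assumptions, not values from the text.

    import math

    def ang(a, b):
        return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

    def facing_each_other(p1, f1, p2, f2, tol=math.radians(45)):
        # each target's above-shoulder (head) direction points roughly toward the other
        b12 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
        b21 = math.atan2(p1[1] - p2[1], p1[0] - p2[0])
        return ang(f1, b12) < tol and ang(f2, b21) < tol

    def maybe_same_group(track1, track2, dist_thresh=1.5,
                         dir_thresh=math.radians(20), face_freq=0.3):
        # track: list of (position(x, y), moving_dir, facing_dir) sampled at the same times
        close, facing = 0, 0
        for (p1, m1, f1), (p2, m2, f2) in zip(track1, track2):
            if math.dist(p1, p2) < dist_thresh and ang(m1, m2) < dir_thresh:
                close += 1
            if facing_each_other(p1, f1, p2, f2):
                facing += 1
        n = min(len(track1), len(track2))
        return n > 0 and close / n > 0.5 and facing / n > face_freq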

  Thus, for example, when it is estimated that the targets T1 and T2 form one group, the group G1 having the targets T1 and T2 as elements is registered in the group information 42. Thereafter, if, for example, the targets T5 to T7 are estimated to form one group, the group G2 having the targets T5 to T7 as elements is additionally registered in the group information 42. Although illustration is omitted, when targets estimated to be one group up to a certain point in time (for example, T1 and T2) disappear or disperse, the registration of that group (for example, G1) is deleted from the group information 42.

  On the other hand, the group behavior analysis processing for analyzing the behavior of the groups G1, G2,... is executed according to the flow of FIG. 13 based on the individual actions (individual action information 44) of the targets T1, T2, T3,....

  Referring to FIG. 13, in the first step S101, each group is classified into one of the categories of friends, families, and couples based on the height, sex, and adult/child distinction described in the target information 40 and on the number of elements described in the group information 42 (the total number of currently detected targets T1, T2,... belonging to each group), and the result is added to the group information 42.

  In the next step S93, the individual action information 44 is analyzed for each group, and the group action information 46 is created. Specifically, the average of the trajectories of the targets T1 and T2 belonging to the group G1 is obtained, and the average trajectory is described in the group action information 46. Further, the union of the actions of the targets T1 and T2 is obtained, and the result is also described in the group action information 46.

  In the next step S95, the group action information 46 is analyzed for each category, and the group action pattern information 48 is created. Specifically, the intersection (product set) of the actions of the groups classified as friends (for example, G1, G5,...) is obtained, and the result is described in the group action pattern information 48 as the action pattern of groups of friends (for example, "see the store A and the guide board P"). Similarly, the intersection of actions is obtained for families and for couples, and the results are described in the group action pattern information 48 as the action patterns of families and couples.
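  The aggregation described for steps S93 and S95 can be pictured with the following minimal helpers, assuming (as this sketch does) that member trajectories are sampled at common times and that actions are represented as sets of strings.

    def average_trajectory(trajectories):
        # trajectories: one list of (x, y) samples per group member, aligned in time
        return [
            (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
            for pts in zip(*trajectories)
        ]

    def group_actions(member_actions):
        # union of the members' actions, e.g. {"saw store A", "saw guide board P"}
        out = set()
        for actions in member_actions:
            out |= set(actions)
        return out

    def category_pattern(group_action_sets):
        # intersection of the actions of all groups in one category (friends, families, couples)
        groups = [set(a) for a in group_action_sets]
        return set.intersection(*groups) if groups else set()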

  In this way, when the analysis on the classification, behavior, and behavior pattern of each group is completed, this group analysis processing is terminated.

In the above flow, the state of a target is estimated by the particle filter 38, but it may instead be estimated by another time-series filter such as a Kalman filter, or by a method other than time-series filtering, for example by dynamic programming (DP matching or the like).

  As is clear from the above, the measuring apparatus 10 of this embodiment arranges 16 LRFs 14 in the shopping center with their scan planes Scn inclined and measures a plurality of objects (T1, T2,...). An appropriate number of LRFs 14 may be selected according to the measurement region, and in some cases a single LRF may suffice. The arrangement of the LRFs 14 is determined so that their measurement regions overlap appropriately, but in some cases no overlap is necessary.

  Specifically, the computer 12 of the measuring apparatus 10 registers the three-dimensional shape models (M1, M2) for a plurality of objects in the DB 50 in step S1, controls the LRFs 14 in step S3 so as to change the inclination angle α of the scan plane Scn with respect to the horizontal plane, and, in steps S5 to S19 and S21 to S39, compares the measurement data 34 (conversion data 36) from the LRFs 14 with the three-dimensional shape models 50 (M1, M2) using the particle filters 38a, 38b,... of the plurality of objects to estimate the state (position, moving direction, three-dimensional shape, and posture) of each of the plurality of objects. Note that the variation range of the inclination angle α is not limited to 24 degrees to 90 degrees; an appropriate range (for example, 20 degrees to 80 degrees, or 30 degrees to 150 degrees) may be chosen in consideration of the balance between ease of separation and the width of the measurement region.

  Thus, by measuring while changing the inclination angle α, separation of the plurality of targets T1, T2,... becomes easy, and by using the particle filters 38a, 38b,..., the three-dimensional shape and posture of each target can be accurately estimated. In addition, the three-dimensional shape can be estimated for both moving and stationary objects.

  In particular, by using the above-shoulder body shape model M1 as the three-dimensional shape model, the three-dimensional shape and orientation of the region above the shoulders (head) can be estimated (see FIG. 17), and by additionally using the whole-body skeleton model M2, the orientation of the arms and legs can also be estimated.

  In addition, from the three-dimensional shape of the head, profiles such as height, sex, and adult/child distinction can be estimated, and from the head orientation, actions (individual actions) such as looking at a specific store or guide board can also be estimated.

  Then, by comparing the position, moving direction, and posture (head orientation) among the plurality of objects T1, T2,..., groups can be estimated, and the groups can further be classified into a plurality of categories (friends, families, couples, and so on) based on the attributes of the targets belonging to each group.

  Further, group behavior information 46 can be created by analyzing the individual behavior information 44 for each group, or group behavior pattern information 48 can be created by analyzing the group behavior information 46 for each category.

In addition, since the position of each person can be measured even in a crowded situation, the exact number of people who visit a specific place can be counted. In addition, it is possible to obtain the movement trajectory of each person even in a crowded situation, and it is also possible to obtain a movement trajectory (average trajectory) for each group from the movement trajectory of each person. As a result, it becomes possible to investigate the cause that hinders the movement of a person, or to investigate the flow of a person when a new object is placed in the environment or when an event is performed.

  In addition, when conducting surveys as described above, it becomes possible to aggregate adults and children separately or to aggregate by sex, and to analyze differences in behavior patterns depending on attributes such as adult/child and sex.

  Furthermore, it is possible to know the direction of the upper body of the person from the position of the shoulder, and in particular, it is possible to estimate the object of interest of the person from the direction in which the face is facing. By obtaining the positions of objects that many people are interested in, it is possible to obtain grounds for considering the installation of guidance and advertisements.

  In addition, by performing group estimation and analyzing the behavior of the crowd in units of groups, it is possible to obtain behavior that is closer to reality than when each person is treated as a separate individual.

  In this way, the measuring apparatus 10 can measure a plurality of objects T1, T2,... with the LRFs 14 and obtain a variety of information.

These pieces of information are useful, for example, when moving a robot in the environment. Specifically, by statistically analyzing the movement of people in the environment and understanding how it differs by place and time, the robot can move so as not to obstruct people's movement. It also becomes possible to provide services while closely following a person's movement along that person's usual route. When moving to a place where a service is to be performed, the robot can be controlled to avoid routes along which many people move, and to avoid directions in which many people are interested (that is, to avoid entering people's fields of view as much as possible). It is also possible to provide a service matched to a person's interest based on the person's position and head orientation (line-of-sight direction: the direction in which the person showed interest) measured in real time. Furthermore, by recognizing groups of people, it is possible to propose services suited to people visiting as a group, such as moving so as not to cut through the group or introducing places where everyone can eat together.
(Second embodiment)
The hardware configuration of the measuring apparatus 10 of this embodiment (second embodiment) is the same as that of the measuring apparatus 10 of the previous embodiment (first embodiment), so FIGS. 1 to 3 apply and a detailed description is omitted. The basic operation and installation environment of the measuring apparatus 10 are also the same as in the previous embodiment, so FIGS. 4 to 6 apply and a detailed description is omitted.

  The main difference in operation of the measuring apparatus 10 between this embodiment and the previous one lies in how a person's posture is estimated: the latter detects only the overall direction of the region above the shoulders (head), that is, it does not particularly distinguish between the body direction and the head direction, whereas the former, as shown in FIG. 18, clearly distinguishes the body direction (θb) from the head direction (θh).

  In terms of processing by the computer 12, the difference is that, among the variables used for the state estimation processing by the particle filter (variables indicating position, moving direction, three-dimensional shape, and posture; see FIG. 9), the number of variables is one more than in the previous embodiment.

  Accordingly, the contents of the memory 12d and the processing of the CPU 12c are the same as in the previous embodiment except for a part, so explanations of the common points are omitted or simplified. The differences are described in detail below with reference to FIGS. 18 to 30.

FIG. 19 shows a method of measuring the position and posture (body direction and head direction) of a person based on the output (3D scan data) of the LRF 14 (when the head and both shoulders are detected).
This position & orientation measurement method is mainly composed of three stages: (A) detection of the upper head and shoulders, (B) feature extraction, and (C) calculation of position and orientation.
(A) Detection of the upper head and shoulders
  The 3D scan data (hereinafter simply referred to as the "3D scan") corresponding to one target (at least the part of the human body above the shoulders) is layered, and the data of the top layer (the "overhead layer") and the data of the shoulder layer (the "shoulder layer") are extracted.

  Specifically, as shown in FIG. 20, the measuring apparatus 10 slices the 3D scan (SC) of the human body obtained from the output of the LRF 14 horizontally with a predetermined width (for example, 10 cm) starting from the topmost point, that is, the head vertex (HTP), and extracts the first layer from the top (the set of points falling in the range of 0 to 10 cm, with the head vertex as 0) and the fourth layer (the set of points falling in the range of 30 to 40 cm) as the overhead layer (HLY) and the shoulder layer (SLY), respectively.

  The slice width may be a fixed value, but is preferably a parameter that varies with height (adult/child); for example, it is set to 10 cm for a height of 170 cm (adult) and to 6.5 cm for a height of 100 cm (child).

  Alternatively, instead of slicing evenly at regular intervals as described above, the slices may be unevenly spaced at intervals matched to the shape of the human body (for example, 10 cm for the first layer, 13 cm for the fourth layer, and so on). It is also possible to slice evenly first, count the number of points belonging to each layer, and then adjust the width of each layer so that the ratio matches the shape of the human body.
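  A minimal NumPy sketch of the layering described above is given below. The linear interpolation of the slice width between the two quoted examples (10 cm at a height of 170 cm, 6.5 cm at 100 cm) is an assumption of this sketch.

    import numpy as np

    def slice_layers(points, slice_width=0.10):
        # points: (N, 3) array of 3D scan points (x, y, z in meters) for one person
        z_top = points[:, 2].max()                       # head vertex (HTP)
        depth = z_top - points[:, 2]                     # distance below the head vertex
        idx = np.floor(depth / slice_width).astype(int)  # layer index of each point
        return [points[idx == k] for k in range(int(idx.max()) + 1)]

    def overhead_and_shoulder(points, height=None):
        # overhead layer = 1st layer, shoulder layer = 4th layer, as in the text
        if height is not None:
            # assumed linear scaling between the examples of 10 cm (1.70 m) and 6.5 cm (1.00 m)
            slice_width = 0.065 + (height - 1.0) * (0.10 - 0.065) / 0.70
        else:
            slice_width = 0.10
        layers = slice_layers(points, slice_width)
        return layers[0], (layers[3] if len(layers) > 3 else None)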

  Typical 3D scan shapes corresponding to the human body of FIG. 20 are shown in FIGS. 21 to 24, with the overhead layer and the shoulder layer hatched in black and gray, respectively. FIG. 21 shows the shape when the measurement angle of the LRF 14 is 90 degrees and the person is sideways with respect to the LRF 14; FIG. 22 shows the shape when the measurement angle is 90 degrees and the person faces the LRF 14 from the front; FIG. 23 shows the shape when the measurement angle is 0 degrees and the person faces the LRF 14 from the front; and FIG. 24 shows the shapes when the measurement angle is 30 to 40 degrees and the person is sideways or obliquely oriented with respect to the LRF 14.

  In this embodiment, the "measurement angle" refers to the angle formed by the optical axis of the LRF 14 with respect to the vertical line: it is 0 degrees when the LRF 14 is directed straight down and 90 degrees when the LRF 14 is directed sideways. It is related to the "inclination angle" (α: see FIG. 3A) used in the previous embodiment by "measurement angle = 90° − α".

When the measurement region of the LRF 14 is set as shown in FIG. 5, that is, when the LRF 14 is installed at a height of 8 m and a range of 0 to 66 degrees is measured with a swing angle of ±45 degrees with respect to the vertical line, 3D scans such as those in FIG. 23 and FIG. 24 are obtained. In the example of FIG. 23, all or most of the upper part of the head and both the left and right shoulders are visible; in the example of FIG. 24, only a part of the upper head and one shoulder are visible. Here, as shown in FIG. 23, it is assumed that all or most of the upper head and both shoulders have been detected.
(B) Feature extraction
  First, (1) the "center point of the head" is obtained from the overhead layer, and (2) the "body direction vector" is obtained from the shoulder layer. Next, (3) the "back-of-head point" (also called the "occipital point"; see FIG. 18), which is the rearmost point with respect to the body direction vector, is obtained from the overhead layer, and (4) the "frontal point of the head" (also simply called the "frontal point"), which is farthest from the back-of-head point, is obtained from the overhead layer.

More specifically, referring also to FIG. 18, the measuring apparatus 10 (1) obtains the center point of the overhead layer (P: the average of the points included in the overhead layer, or the center of gravity of the overhead layer) and sets it as the "head center point", and (2) performs a principal component analysis on the shoulder layer and takes the second principal component as the "body direction". In this case, the first principal component is the direction of the line connecting both shoulders. The front-rear direction is determined based on the human-body characteristic that "the head is generally ahead of the shoulders". A vector indicating the direction of the body (body direction vector Vb) is thereby obtained.

  Next, the measuring apparatus 10 (3) identifies a certain number of points in the overhead layer (for example, 1/5 of the entire overhead layer), taken from the rearmost one with respect to the body direction vector Vb, obtains the center point of the occipital region (HRA) composed of the identified points, and sets this as the "back-of-head point" (HRP). The ratio of 1/5 is merely an example and may be changed as appropriate (for example, to 25%).

  When the occipital region is determined as described above, the boundary line BN1 that divides the overhead layer into the occipital region HRA and the remaining region generally does not coincide with the line SLN connecting both shoulders, as shown in FIG. 18. In another embodiment, the region of the overhead layer behind the line SLN connecting the shoulders may be defined as the occipital region (in this case the boundary line BN1 always coincides with the line SLN connecting the shoulders; not shown). The line SLN connecting the shoulders can be obtained, for example, as the straight line that passes through the center point of the shoulder layer and is parallel to the first principal component of the shoulder layer. The boundary line BN1 is not limited to a straight line and may be a curve (it may be partially or entirely curved or bent).

  Next, the measuring apparatus 10 (4) identifies a certain number of points in the overhead layer (for example, 1/5 of the entire overhead layer) in descending order of distance from the back-of-head point, obtains the center point of the prefrontal region (HFA) composed of the identified points, and sets this as the "frontal point" (HFP). The ratio of 1/5 may likewise be changed as appropriate (for example, to 30%).

In this case, the boundary line BN2 that divides the overhead layer into the prefrontal region and the remaining region is a curve (arc) as shown in FIG. 18, but it may be a straight line in other embodiments. For example, a straight line symmetric to the boundary line BN1 on the occipital-region side with respect to the center point P of the head may be obtained and used as the boundary line BN2; in this case, the region of the overhead layer in front of the boundary line BN2 is the prefrontal region.
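The feature-extraction steps (1) to (4) can be summarized in the following NumPy sketch, which works on the x-y coordinates of the two layers. The 1/5 ratios follow the text; the function name and array layout are assumptions made for illustration.

    import numpy as np

    def extract_features(overhead, shoulder, rear_ratio=0.2, front_ratio=0.2):
        # overhead, shoulder: (N, 3) arrays of layer points; only x and y are used
        head_xy = overhead[:, :2]
        shoulder_xy = shoulder[:, :2]

        # (1) head center point P = average of the overhead layer
        p = head_xy.mean(axis=0)

        # (2) PCA of the shoulder layer: 1st component = shoulder line, 2nd = body direction
        centered = shoulder_xy - shoulder_xy.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        vb = vt[1]
        if np.dot(p - shoulder_xy.mean(axis=0), vb) < 0:
            vb = -vb                                 # "the head is ahead of the shoulders"

        # (3) back-of-head point HRP = center of the rearmost points along Vb
        proj = head_xy @ vb
        k = max(1, int(len(head_xy) * rear_ratio))
        hrp = head_xy[np.argsort(proj)[:k]].mean(axis=0)

        # (4) frontal point HFP = center of the points farthest from HRP
        dists = np.linalg.norm(head_xy - hrp, axis=1)
        k2 = max(1, int(len(head_xy) * front_ratio))
        hfp = head_xy[np.argsort(dists)[-k2:]].mean(axis=0)

        return p, vb, hrp, hfp

The body direction θb then follows as atan2(vb[1], vb[0]), and the head direction θh as the direction of the vector from HRP to HFP, which is exactly what stage (C) below assigns to the state variables.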
(C) Calculation of position and posture
  The center point of the head obtained in (1) of the previous stage is taken as the person's "position" P (x, y), the direction of the body direction vector Vb obtained in (2) as the "body direction" θb, and the direction from the back-of-head point obtained in (3) toward the frontal point obtained in (4) as the "head direction" θh. Specifically, the measuring apparatus 10 sets these calculation results in the variables (x, y, θb, and θh) that indicate position and posture among the variables indicating the person's state.

On the other hand, the measurement method used when only a part of the upper head and one shoulder are detected is, for example, as shown in FIG. 25. This measurement method is obtained by modifying a part of the measurement method of FIG. 19 described above, and the description of the common parts is omitted or simplified.
(A) Detection of the upper head and one shoulder
  Suppose that, as a result of layering the 3D scan and extracting the first and fourth layers by the same procedure as described above, only a part of the upper head and one shoulder have been detected (as in the example of FIG. 24 described above).
(B) Feature extraction
  First, (1) the "center point of the head" is obtained from the overhead layer by the same procedure as described above. Then, (2a) the position of the other shoulder (which cannot be seen because it is hidden behind the head) is calculated from the overhead layer and the shoulder layer (that is, from the head and the one visible shoulder). In view of the symmetry of the human body, that is, the characteristics that "both shoulders are symmetric with respect to the head" and "the head is generally ahead of the shoulders (the head is ahead of the line connecting the shoulders)", there are two possible positions of the other shoulder for the detected head and shoulder, as shown in FIGS. 26(A) and 26(B). Which arrangement is appropriate is determined by the front-rear direction. The front-rear direction can be determined from the movement of a feature point of the human body, for example the center point of the head. As an example, the front-rear direction may be determined by comparing the center point of the head obtained in the current frame with that obtained in the previous frame, and the arrangement that matches this direction is adopted.

Next, (2b) a vector perpendicular to the line connecting the detected shoulder and the calculated shoulder (SLN1 or SLN2: see FIGS. 26(A) and 26(B)) is obtained and taken as the body direction vector (Vb). Then, by the same procedure as described above, (3) the back-of-head point, which is the rearmost point with respect to the body direction vector, is obtained from the overhead layer, and (4) the frontal point, which is farthest from the back-of-head point, is obtained from the overhead layer.
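One hedged way to realize these constraints in code is sketched below: the visible shoulder is reflected across the forward axis through the head center, with the forward axis taken from the frame-to-frame movement of the head center. The patent instead selects one of the two candidate arrangements of FIG. 26 using the front-rear direction; this reflection is a simplified stand-in that keeps both shoulders symmetric about the head and, provided the visible shoulder lies behind the head center, keeps the head ahead of the shoulder line.

    import numpy as np

    def estimate_hidden_shoulder(head_center, visible_shoulder, prev_head_center):
        h = np.asarray(head_center, dtype=float)
        s = np.asarray(visible_shoulder, dtype=float)
        fwd = h - np.asarray(prev_head_center, dtype=float)   # forward axis from head movement
        fwd = fwd / (np.linalg.norm(fwd) + 1e-9)

        rel = s - h
        along = np.dot(rel, fwd) * fwd        # component of the shoulder along the forward axis
        across = rel - along                  # component across it
        hidden = h + along - across           # mirror the across component

        body_dir = fwd                        # perpendicular to the reconstructed shoulder line
        return hidden, body_dir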
(C) Calculation of position and posture
  The center point of the head obtained in (1) of the previous stage is taken as the person's "position" P (x, y), the direction of the body direction vector obtained in (2b) as the "body direction" θb, and the direction from the back-of-head point obtained in (3) toward the frontal point obtained in (4) as the "head direction" θh.

  In this case, since only a part of the head is included in the overhead layer, a position error occurs. However, according to verification results using a motion tracker, the error is only about 10 cm at most. The body direction error is 10 degrees on average (5 degrees if both shoulders are visible), and the head direction error is 20 degrees on average (10 degrees if the measurement angle is small or the person is facing the LRF 14) (details are described later).

  The position and orientation measurement method described above is realized by the computer 12 (CPU 12c) operating according to the flow of FIG. 27. The flow of FIG. 27 is a process for obtaining the measurement values of the current position and orientation that are input to the particle filter when the likelihood is calculated in step S29 of the flow of FIG. 9 (hereinafter referred to as the "position & orientation measurement process"). Like the flow of FIG. 9, the flow of FIG. 27 is executed for each target detected in step S11 of the flow of FIG. 8. Therefore, when, for example, three targets are detected at the same time, three pairs of the flow of FIG. 9 and the flow of FIG. 27 are executed in parallel.

  Referring to FIG. 27, the CPU 12c first executes initial processing in step S111. Specifically, the variables indicating "position", "body direction" and "head direction" (x, y, θb and θh) are initialized, and the holding area for temporary data such as the 3D scan, the overhead layer and the shoulder layer (not shown; formed, for example, in the data area 22 of the memory 12d shown in FIG. 7) is cleared.

  When the initial processing is completed, the process proceeds to step S113, and the 3D scan data of the target is acquired from the data area 22. The 3D scan data of each target is obtained by clustering the output of the LRF 14 and is included in the target information 40 stored in the data area 22. Next, in step S115, layering processing (see FIG. 20) is performed on the 3D scan data acquired in step S113, and in step S117 an overhead layer (HLY) and a shoulder layer (SLY) are extracted from the layering result. The specific procedure for layering the 3D scan and extracting the overhead layer and the shoulder layer has been described above and is not repeated.

  Next, based on the extraction result of step S117, it is determined in step S119 whether a head and a shoulder have been detected. Specifically, for example, the number of points included in the extracted overhead layer is counted; if the number exceeds a first predetermined number, the head is considered detected, and otherwise it is considered not detected. Similarly, the number of points included in the shoulder layer is counted; if the number exceeds a second predetermined number, the shoulder is considered detected, and otherwise it is considered not detected. The determination in step S119 is YES when both the head and the shoulder are considered detected, and NO when at least one of them is considered not detected.

  If "NO" in step S119, the process returns to step S113 and the same processing as above is repeated. The loop from step S113 through S119 and back to S113, or the loop from step S113 through S135 and back to S113 (described later), is executed once per frame (for example, every 1/30 seconds). This period is an example and may be changed as appropriate (for example, to once every 6 frames). The period may also vary with time.

  If "YES" in step S119, the process proceeds to step S121, where the center point (P) of the head is calculated from the overhead layer extracted in step S117 and the result is set as the "position" (set in the variables x and y). The specific calculation method has been described above and is omitted here. In the next step S123, it is determined whether the detected shoulders are both shoulders. Specifically, if two regions that are substantially equal in size and separated by a certain distance or more are formed in the shoulder layer on both sides (left and right) of the "head center point", the determination is YES (both shoulders). On the other hand, if only a single region is formed on one side of the "head center point", or if two regions are formed on both sides but are not substantially equal (one is too small) or are too close together, the determination is NO (one shoulder).
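  A rough version of the both-shoulders / one-shoulder test of step S123 could look as follows. Splitting along the main spread direction of the shoulder layer, the minimum point count, the 0.25 m gap and the balance ratio are all illustrative assumptions of this sketch.

    import numpy as np

    def classify_shoulders(shoulder_xy, head_center,
                           min_points=5, min_gap=0.25, balance=0.3):
        pts = np.asarray(shoulder_xy, dtype=float)
        rel = pts - np.asarray(head_center, dtype=float)
        # main spread direction of the shoulder layer (approximate shoulder line)
        _, _, vt = np.linalg.svd(rel - rel.mean(axis=0), full_matrices=False)
        coord = rel @ vt[0]       # signed position along that line, 0 at the head center

        left, right = rel[coord < 0], rel[coord >= 0]
        if len(left) < min_points or len(right) < min_points:
            return "one"
        ratio = min(len(left), len(right)) / max(len(left), len(right))
        gap = np.linalg.norm(left.mean(axis=0) - right.mean(axis=0))
        return "both" if (ratio >= balance and gap >= min_gap) else "one"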

  If "NO" in step S123, the process proceeds to step S127 via step S125; if "YES" in step S123, the process proceeds directly to step S127. In step S125, the other shoulder (which cannot be seen because it is hidden behind the head) is calculated based on the one shoulder extracted in step S117 and the center point of the head calculated in step S121 (see FIGS. 26(A) and 26(B)). The specific calculation method has been described above and is omitted here.

  In the above description, which of the positions of the other shoulder shown in FIG. 26(A) and FIG. 26(B) is appropriate is determined based on the front-rear direction, and the front-rear direction is determined as the moving direction obtained by comparing the center point of the head in the current frame with that in the previous frame. However, the front-rear direction can also be determined from the three-dimensional shape described in the target information 40: for example, when the surface of the face or the protrusion of the nose can be detected from the three-dimensional shape, the side of the face or nose with respect to the center point of the head may be regarded as the front.

  In step S127, a vector perpendicular to the line connecting the detected (or estimated) shoulders, that is, the body direction vector Vb, is calculated by principal component analysis. The specific calculation method has been described above and is omitted here. Note that, instead of principal component analysis, the body direction vector Vb can also be obtained by finding the center point of one shoulder and the center point of the other shoulder and calculating the normal vector of the line connecting these two center points. The direction of the body direction vector Vb thus calculated is set as the "body direction" (set in the variable θb) in step S129, and the process proceeds to step S131.

  In step S131, the back-of-head point (HRP: see FIG. 18) is calculated based on the overhead layer extracted in step S117 and the body direction calculated in step S129. In the next step S133, the frontal point (HFP: see FIG. 18) is calculated based on the overhead layer extracted in step S117 and the back-of-head point calculated in step S131. The specific calculation methods for the back-of-head point and the frontal point have already been described and are omitted here.

  Then, the direction from the back-of-head point calculated in step S131 toward the frontal point calculated in step S133 is obtained and set as the "head direction" (set in the variable θh). The process then returns to step S113 and repeats the same processing as above.

As a result, the current "position", "body direction" and "head direction" of the target are calculated repeatedly (periodically), and each calculation result is associated with the current time information and added to the target information 40. In this way, measurement values indicating the "position", "body direction" and "head direction" of the target (information indicating the temporal changes of the variables x, y, θb and θh) come to be described in the target information 40.
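As a sketch of how one such time-stamped record could be appended, the fragment below converts the features from the preceding steps into the variables x, y, θb and θh; the dictionary layout and field names are assumptions of this sketch.

    import math, time

    def update_target_info(target_info, head_center, vb, hrp, hfp):
        # head_center, vb, hrp, hfp: head center point, body direction vector,
        # back-of-head point and frontal point obtained as described above
        theta_b = math.atan2(vb[1], vb[0])
        theta_h = math.atan2(hfp[1] - hrp[1], hfp[0] - hrp[0])
        target_info.append({
            "t": time.time(),                # current time information
            "x": float(head_center[0]),
            "y": float(head_center[1]),
            "theta_b": theta_b,
            "theta_h": theta_h,
        })
        return target_info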

  In the state estimation process by the particle filter shown in FIG. 9, the following points are changed. In step S23, when generating a plurality of hypotheses about the state of the target (position, moving direction, three-dimensional shape and posture), the above "position" is used as the position, and the "body direction" and the "head direction" are adopted as the posture. In step S29, when calculating the likelihood of each particle for the current measurement values based on the three-dimensional shape model, the measurement values of the target's "position", "body direction" and "head direction" (the values of the variables x, y, θb and θh) are referred to.

The likelihood calculation in step S29 of this embodiment is performed, for example, as follows. An SIR (Sampling Importance Resampling) particle filter is used as the particle filter. This allows continuous tracking during transitions between the measurement areas of different LRFs 14 and yields smoother estimation results. It also makes it easy to use other types of range sensors, such as a 2D range scanner, or a combination of different types of sensors. However, depending on the case, a particle filter other than SIR, for example an SIS (Sequential Importance Sampling) particle filter, may be used.

The state of a particle is given by the position (x, y), the velocity v, the moving direction θm, the body direction θb and the head direction θh. These are variables defined in the world coordinate system (X, Y, Z) shown in FIGS. 4 and 18, and the moving direction θm, the body direction θb and the head direction θh are angles measured with respect to a predetermined direction (for example, the X-axis direction). The motion model used in the prediction step of the filter updates these variables by adding zero-mean Gaussian noise. As the noise parameters of the velocity, moving direction, body direction and head direction for a particle m, σ^m_v = 0.2 [m/s^2] and σ^m_m = σ^m_b = σ^m_h = 0.2 [rad] are used. The predicted position (x′, y′) is calculated from the predicted velocity and moving direction.
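A minimal sketch of this prediction step is shown below; the particle set is represented as a dict of NumPy arrays, and scaling the velocity noise by dt is one interpretation of the [m/s^2] unit quoted above.

    import numpy as np

    def predict(p, dt, rng, sigma_v=0.2, sigma_m=0.2, sigma_b=0.2, sigma_h=0.2):
        # p: dict of equal-length arrays "x", "y", "v", "theta_m", "theta_b", "theta_h"
        n = len(p["x"])
        p["v"] += rng.normal(0.0, sigma_v, n) * dt       # velocity noise
        p["theta_m"] += rng.normal(0.0, sigma_m, n)      # moving-direction noise
        p["theta_b"] += rng.normal(0.0, sigma_b, n)      # body-direction noise
        p["theta_h"] += rng.normal(0.0, sigma_h, n)      # head-direction noise
        p["x"] += p["v"] * np.cos(p["theta_m"]) * dt     # predicted position from the
        p["y"] += p["v"] * np.sin(p["theta_m"]) * dt     # predicted speed and moving direction
        return p

A generator such as rng = np.random.default_rng() would supply the Gaussian noise.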

  To determine the weight of a particle m, the input likelihood p(z|m) is defined, using a simplified notation, as the combination of five independent likelihood models in equation (6):

p = p_xy p_b p_h p_bm p_hb   (6)

The likelihood for the position is defined by a Gaussian function p_xy(z|m) ~ N(d_xy, σ^l_xy), where d_xy is the Euclidean distance between the measured and predicted head positions. The dispersion parameter σ^l_xy is set to 0.2 [m] in the experiments.

The likelihood for the body direction is defined by a Gaussian centered on the estimated body direction, p_b(z|m) ~ N(d_b, σ^l_b1), where d_b = |θ_b − θ′_b| is the absolute difference between the extracted and predicted body directions (normalized to [−π, π]) and the variance is set to σ^l_b1 = 0.5 [rad]. Similarly, the likelihood for the head direction is given by p_h(z|m) ~ N(d_h, σ^l_h), with d_h = |θ_h − θ′_h| and σ^l_h = 0.5 [rad].

The last two terms of equation (6) introduce practical constraints and preferences on the estimated values. The p_bm(m) term gives the body direction a slight tendency to align with the direction of movement, reflecting the fact that people walk almost exclusively forward, and is defined as

p_bm ~ w_bm1 N(d_bm, σ^l_bm) + w_bm2   (7)

where d_bm = |θ′_b − θ′_m| is the absolute difference between the predicted body direction and the direction of movement, w_bm1 = 0.2, w_bm2 = 0.8, and σ^l_bm = 0.5 [rad].

The p_hb(m) term in equation (6) first checks the plausibility of the estimate by focusing on the difference between the body direction and the head direction: most people cannot turn their heads more than 90 degrees from the front, so this assumption seems reasonable. In addition, it adopts the observation that people tend to turn their heads toward the front (experimental evidence is given in J. Schrammel, E. Mattheiss, S. Dobelt, L. Paletta, A. Almer, and M. Tscheligi, "Attentional behavior of users on the move towards pervasive advertising media," in Pervasive Advertising, J. Muller, F. Alt, and D. Michelis, Eds. Springer, 2011, ch. 14). This is modeled as equation (8):

If d_hb < π, then p_hb = 1 − d_hb / (2π); otherwise p_hb = 0.   (8)

where d_hb is the absolute difference (in radians) between the body direction and the head direction. This model limits the effect of large errors and outliers in the head direction estimation and aligns the head direction with the body direction when no estimate can be obtained.
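The five-term likelihood of equations (6) to (8) can be written down directly as below. The Gaussian factors are used unnormalized (only relative particle weights matter), which is an implementation choice of this sketch rather than something stated in the text.

    import numpy as np

    def angdiff(a, b):
        # absolute angle difference, normalized to [0, pi]
        return np.abs((a - b + np.pi) % (2 * np.pi) - np.pi)

    def gauss(d, sigma):
        return np.exp(-0.5 * (d / sigma) ** 2)

    def likelihood(p, z, s_xy=0.2, s_b1=0.5, s_h=0.5, s_bm=0.5, w_bm1=0.2, w_bm2=0.8):
        # p: dict of predicted particle arrays; z: measured (x, y, theta_b, theta_h)
        x, y, theta_b, theta_h = z
        p_xy = gauss(np.hypot(p["x"] - x, p["y"] - y), s_xy)          # position term
        p_b = gauss(angdiff(theta_b, p["theta_b"]), s_b1)             # body-direction term
        p_h = gauss(angdiff(theta_h, p["theta_h"]), s_h)              # head-direction term
        d_bm = angdiff(p["theta_b"], p["theta_m"])
        p_bm = w_bm1 * gauss(d_bm, s_bm) + w_bm2                      # equation (7)
        d_hb = angdiff(p["theta_h"], p["theta_b"])
        p_hb = np.where(d_hb < np.pi, 1.0 - d_hb / (2 * np.pi), 0.0)  # equation (8), as written
        return p_xy * p_b * p_h * p_bm * p_hb                         # equation (6)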

  Thus, when estimating the posture of a target, the "body direction" and the "head direction" are clearly distinguished. By using such estimation results, finer estimation or higher estimation accuracy becomes possible in the individual action estimation process (see FIG. 11) and the group estimation process (see FIG. 12).

  Specifically, in the individual action estimation process of FIG. 11, each of the "body direction" and the "head direction" is compared with the moving direction in step S63, and in step S65 it is determined whether each of them is directed in a direction different from the moving direction. This makes it possible to distinguish whether both the head and the body are facing a store or only the head is facing it. For example, if the degree of interest is regarded as high when both the head and the body face the store and as low when only the head faces it, then in step S69 the degree of interest can be described together with the fact that the target saw the store.
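  The following sketch grades the degree of interest from those comparisons; the 30-degree tolerance and the "high"/"low" labels are assumptions made for illustration, not values from the text.

    import math

    def ang(a, b):
        return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

    def interest_level(theta_move, theta_b, theta_h, bearing_to_store, tol=math.radians(30)):
        head_on_store = ang(theta_h, bearing_to_store) < tol
        body_on_store = ang(theta_b, bearing_to_store) < tol
        turned_from_path = ang(theta_h, theta_move) > tol
        if head_on_store and turned_from_path:
            return "high" if body_on_store else "low"
        return None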

  In the group estimation process of FIG. 12, not only the "head direction" but also the "body direction" is compared in step S87, and in step S89 it is determined whether the targets face each other with both head and body at a certain frequency or more. In step S91, targets that frequently face each other with their bodies are registered in the group information 42, while registration of targets that merely turn their faces toward each other is avoided, so that more accurate group information 42 is obtained.

Finally, the results of the verification experiments are described. First, the estimation results (without tracking) are verified. In the experiment, a Kinect™ sensor was attached to a pillar at a height of about 2.5 m (the height was limited by the ceiling). A Vicon motion capture system was installed at the same location, and markers were attached to the head and shoulders to obtain measured values of the posture and head direction as well. A total of 12 subjects (9 men and 3 women) participated in the experiment. Each subject was asked to stand in front of the sensor and repeatedly turn the head left and right at eight different body angles (in 45-degree steps). The same procedure was repeated at four different distances from the sensor: 0.7 m (sensor angle 33 degrees), and 2 m, 2.5 m and 3 m (sensor angle 63 degrees).

The error in position estimation depends mainly on how much of the overhead layer is visible, but was small in all experiments (the RMSE (Root Mean Square Error) was 7.5 cm on average). For the body angle and head angle, the average RMSE values for all tracks were 26.8 degrees and 36.8 degrees, respectively. The estimation results for the body and head angles were analyzed in more detail.

  A comparison between the estimated body and head directions and the measured data from the motion tracker is shown in FIGS. 28(A) and 28(B). In FIG. 28(A), the estimated body direction agrees well with the output of the motion tracker in most parts. However, larger deviations occur where the sensor captures only the side of the person, that is, around 10 seconds and 30 seconds. As described above, this is because one shoulder is hidden behind the head and cannot be seen. In the head direction estimation shown in FIG. 28(B), large errors are also seen where the person is viewed from the side, partly due to the error caused by the body direction estimation.

Next, the tracking results are verified. In the experiment, a configuration with the Vicon tracker and two Kinect sensors was adopted. The sensors were installed at a measurement angle of 55 degrees at a height of 2.5 m and together covered an area of about 2 × 3 m, which roughly coincides with the tracking area of the motion tracker. Subjects were allowed to move freely and look around within the area.

  FIGS. 29(A), 29(B), 30(A) and 30(B) show, for one run, the variables x, y, θb and θh, comparing tracking with the particle filter against measurement with the motion tracker. These comparisons show that the position (x, y) and the body direction (θb) follow the tracker output well, and the transition between the sensors (around y = 0 m) is also smooth. The tracking error in the head direction (θh) is larger than expected because the estimation error is large. In general, the RMSE values in the tracking runs were comparable to, or somewhat better than, those at estimation time.

As described above, this embodiment has presented a method of estimating a person's position and the directions of the body and head using the LRF 14, which is a kind of 3D range sensor. The method essentially consists of the following steps: 1) extraction of the overhead and shoulder layers (the position is the average of the overhead layer, and the body direction is perpendicular to the line connecting the shoulders); and 2) calculation of the back-of-head and frontal points (the head direction is the direction of the line connecting these two points).

  According to this method, stable position estimation (error of 10 cm or less) can be performed in areas where the sensor's measurements are reliable, and the same applies to the body and head directions. In our verification tests, the mean error of the body direction was 10 degrees (5 degrees when both shoulders were visible), and the mean error of the head direction was 10 degrees when the measurement angle was small or the subject was facing the sensor, and 20 degrees in other cases.

  In addition, by using a particle filter (preferably an SIR particle filter), continuous and smooth tracking with a plurality of sensors can be performed. The method does not depend on a specific type of range sensor and is easy to apply to tracking over large areas, and it therefore provides accurate, full 360-degree information on body and head orientation in various environments.

10 ... measuring device
12 ... computer
14 ... laser range finder (LRF)
52 ... three-dimensional shape model database (DB)
M1 ... body shape model above the shoulders
M2 ... skeleton model of the whole body
T1, T2 ... targets
Scn ... scan plane
P ... position of the person (center point of the overhead layer)
HTP ... head vertex
HLY, SLY ... overhead layer, shoulder layer
SLN, SLN1, SLN2 ... lines connecting both shoulders
HRA, HFA ... occipital region, prefrontal region
HRP, HFP ... back-of-head point, frontal point
θb, θh ... body direction, head direction

Claims (6)

  1. A measuring device that measures a plurality of objects including a person while changing an inclination angle of a scanning surface of a three-dimensional distance measuring sensor with respect to a horizontal plane,
    Detection means for detecting the head and shoulders of the person based on measurement data from the three-dimensional distance measurement sensor;
    Calculation means for calculating the direction of the person's body and head based on the positional relationship between the head and shoulder detected by the detection means, and estimation regarding the posture of the person based on the calculation result of the calculation means An estimation means for performing
    The three-dimensional distance measuring sensor is installed at a position higher than the head apex of the person,
    The detection means includes
    stratification means for stratifying, in the height direction, the measurement data corresponding to the person among the measurement data from the three-dimensional distance measurement sensor, and extraction means for extracting, from the stratification result of the stratification means, an overhead layer including the head apex and a shoulder layer located a predetermined number of layers below the overhead layer;
    The calculating means includes
    First calculation means for calculating a center point of the head based on the overhead layer extracted by the extraction means;
    Second calculation means for calculating, based on the shoulder layer extracted by the extraction means, a vector that is perpendicular to a line connecting both shoulders and whose forward side is the side of the head center point calculated by the first calculation means;
    Third calculation means for calculating a back head point at the rearmost part in the direction of the vector calculated by the second calculation means among the overhead layers extracted by the extraction means;
    Fourth calculation means for calculating, from the overhead layer extracted by the extraction means, a frontal point that is foremost in the direction of the vector calculated by the second calculation means with respect to the back-of-head point calculated by the third calculation means, and fifth calculation means for calculating a direction from the back-of-head point calculated by the third calculation means toward the frontal point calculated by the fourth calculation means,
    wherein the estimation means estimates the direction of the vector calculated by the second calculation means as the body direction, and estimates the direction calculated by the fifth calculation means as the head direction.
  2. The measuring apparatus according to claim 1, wherein the calculation means further includes:
    determination means for determining whether the shoulder layer includes both shoulders or only one shoulder; and
    sixth calculation means for, when the determination means determines that the shoulder layer includes only one shoulder, calculating the other shoulder from the one shoulder and the center point of the head calculated by the first calculation means so as to satisfy the conditions that both shoulders are symmetric with respect to the head and that the head is ahead of the line connecting both shoulders, and
    the second calculation means calculates a vector perpendicular to the line connecting both shoulders when the determination means determines that the shoulder layer includes both shoulders, and calculates a vector perpendicular to the line connecting the one shoulder and the other shoulder calculated by the sixth calculation means when the determination means determines that only one shoulder is included.
  3.   The measuring apparatus according to claim 2, wherein the determination means performs the determination based on the distribution, with respect to the center point of the head, of the points constituting the shoulder layer.
  4. The measuring apparatus according to claim 2, wherein the estimation means estimates the position of the center point of the head calculated by the first calculation means as the position of the person.
  5. A measurement method for measuring a plurality of objects including a person while changing an inclination angle of a scan plane of a three-dimensional distance measurement sensor with respect to a horizontal plane,
    A detection step of detecting the person's head and shoulders based on measurement data from the three-dimensional distance measurement sensor;
    A calculation step for calculating a body direction and a head direction of the person based on the positional relationship between the head and shoulder detected by the detection step; and an estimation relating to the posture of the person based on the calculation result of the calculation step Including an estimation step for
    The three-dimensional distance measuring sensor is installed at a position higher than the head apex of the person,
    The detecting step includes
    a stratification step of stratifying, in the height direction, the measurement data corresponding to the person among the measurement data from the three-dimensional distance measurement sensor, and an extraction step of extracting, from the stratification result of the stratification step, an overhead layer including the head apex and a shoulder layer located a predetermined number of layers below the overhead layer;
    The calculation step includes:
    A first calculation step of calculating a center point of the head based on the overhead layer extracted by the extraction step;
    A second calculation step of calculating, based on the shoulder layer extracted in the extraction step, a vector that is perpendicular to a line connecting both shoulders and whose forward side is the side of the head center point calculated in the first calculation step;
    A third calculation step of calculating a back-of-head point at the rearmost part in the direction of the vector calculated by the second calculation step among the overhead layers extracted by the extraction step;
    A fourth calculation step of calculating, from the overhead layer extracted in the extraction step, a frontal point that is foremost in the direction of the vector calculated in the second calculation step with respect to the back-of-head point calculated in the third calculation step, and a fifth calculation step of calculating a direction from the back-of-head point calculated in the third calculation step toward the frontal point calculated in the fourth calculation step,
    wherein, in the estimation step, the direction of the vector calculated in the second calculation step is estimated as the body direction, and the direction calculated in the fifth calculation step is estimated as the head direction.
  6. A measurement program that is executed by a computer of a measurement device that measures a plurality of objects including a person while changing an inclination angle of a scanning surface of a three-dimensional distance measurement sensor with respect to a horizontal plane,
    The measurement program causes the computer to
    Detection means for detecting the head and shoulders of the person based on measurement data from the three-dimensional distance measurement sensor;
    Calculation means for calculating the direction of the person's body and head based on the positional relationship between the head and shoulder detected by the detection means, and estimation regarding the posture of the person based on the calculation result of the calculation means Function as an estimation means to
    The three-dimensional distance measuring sensor is installed at a position higher than the head apex of the person,
    The detection means includes
    stratification means for stratifying, in the height direction, the measurement data corresponding to the person among the measurement data from the three-dimensional distance measurement sensor, and extraction means for extracting, from the stratification result of the stratification means, an overhead layer including the head apex and a shoulder layer located a predetermined number of layers below the overhead layer;
    The calculating means includes
    First calculation means for calculating a center point of the head based on the overhead layer extracted by the extraction means;
    Second calculation means for calculating, based on the shoulder layer extracted by the extraction means, a vector that is perpendicular to a line connecting both shoulders and whose forward side is the side of the head center point calculated by the first calculation means;
    Third calculation means for calculating a back head point at the rearmost part in the direction of the vector calculated by the second calculation means among the overhead layers extracted by the extraction means;
    Fourth calculation means for calculating, from the overhead layer extracted by the extraction means, a frontal point that is foremost in the direction of the vector calculated by the second calculation means with respect to the back-of-head point calculated by the third calculation means, and fifth calculation means for calculating a direction from the back-of-head point calculated by the third calculation means toward the frontal point calculated by the fourth calculation means,
    The measurement program, wherein the estimation means estimates the vector direction calculated by the second calculation means as the body direction, and estimates the direction calculated by the fifth calculation means as the head direction.

