WO2023145216A1 - Wearable monitoring device - Google Patents

Wearable monitoring device

Info

Publication number
WO2023145216A1
Authority
WO
WIPO (PCT)
Prior art keywords
case
monitoring device
sensor
wearable monitoring
distance sensor
Prior art date
Application number
PCT/JP2022/042987
Other languages
English (en)
Japanese (ja)
Inventor
弘純 山口
聡仁 廣森
ハマダ モハメド モハメド エルサイド リズク
Original Assignee
国立大学法人大阪大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 国立大学法人大阪大学 filed Critical 国立大学法人大阪大学
Publication of WO2023145216A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/87Combinations of systems using electromagnetic waves other than radio waves
    • G01S17/875Combinations of systems using electromagnetic waves other than radio waves for determining attitude
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481Constructional features, e.g. arrangements of optical elements

Definitions

  • the present invention relates to wearable monitoring devices that monitor surrounding conditions.
  • Non-Patent Document 1 describes a people-flow detection technology using LiDAR (Light Detection and Ranging), which scans a measurement area with a laser beam and captures the signal reflected from the ankle region of the human body as a three-dimensional point cloud signal. In addition, a human tracking system has been commercialized that measures the position and behavior of people in a measurement target area by distributing multiple LiDARs in the space and estimating the positions of people by statistical methods based on such measurement data. Furthermore, Non-Patent Document 2 describes that many companies are working on the development of AI processing on the mobile device side (edge AI) as an open issue.
  • However, the technology of Non-Patent Document 1 and the conventional people-tracking systems are premised on being implemented on a computer with sufficient processing power, and no attention is paid to miniaturization or weight reduction.
  • Non-Patent Document 2 is premised on mobile devices that are mostly smartphones, which have powerful processing capabilities and image processing by dedicated hardware equivalent to GPUs (Graphics Processing Units); it is not aimed at development on small, power-saving devices.
  • The present invention has been made in view of the above, and provides a small, power-saving wearable monitoring device that monitors the surrounding situation in real time with low privacy invasion by using a three-dimensional distance sensor.
  • A wearable monitoring device according to the present invention includes: a three-dimensional distance sensor having a light emitting unit that periodically emits light for distance measurement to a front area and a light receiving unit that receives, of the emitted light, the light reflected from a target; a control module for performing information processing on a person in the target based on three-dimensional point cloud data, acquired by the three-dimensional distance sensor, indicating the distance to each reflection point of the target; and a small case containing the three-dimensional distance sensor and the control module, the control module including determination means for determining the behavior of the person based on the three-dimensional point cloud data.
  • According to the present invention, since at least the three-dimensional distance sensor and the control module for information processing are installed in a small case, the device is excellent in portability and can easily acquire information on surrounding people in a desired area.
  • Since information on surrounding people is acquired with a three-dimensional distance sensor that uses laser light, rather than as an image obtained by image capturing means, privacy invasion is kept low.
  • Since the three-dimensional distance sensor is operated periodically, power consumption can be reduced compared with simply operating it continuously.
  • By performing the distance measurement operation periodically, it also becomes possible to cope with the shake and orientation of the case while it is carried, so that distance measurement can be performed more stably.
  • The distance-measuring light may be pulsed light, sinusoidal light, or a scanning beam, and may be laser light, infrared light, or other light having a predetermined dot pattern. Distance measurement can include modes performed from the time difference, the phase difference, or the dot-pattern difference between transmission and reception.
  • According to the present invention, it is thus possible to stably monitor the surrounding situation in real time with low privacy invasion, while remaining power-saving and portable.
  • FIG. 1 is a diagram showing an embodiment of the wearable monitoring device according to the present invention, in which (A) shows an example of how each member is housed in the case and (B) shows the device hung from the neck.
  • FIG. 2 illustrates target detection by the three-dimensional distance sensor, in which (A) is a plan view explaining laser scanning, (B) is a side view explaining raster scanning, and (C) is a plan view explaining detection of a target surface by laser scanning.
  • FIG. 3 is a configuration diagram showing functional blocks of the device and some functional blocks of the mobile communication terminal.
  • FIG. 4 is a functional configuration diagram mainly of the control unit.
  • FIG. 5 is a diagram explaining the correspondence between the y-axis acceleration (acceleration in the direction of gravity) during walking and front determination, in which (A) is a one-frame image showing the shake state of the mobile communication terminal 100, (B) is the detection data during walking of the y-axis acceleration sensor corresponding to the direction of gravity, (C) is a one-frame image (showing an alley) and detection data for three steps during a period Tf including the timing when the mobile communication terminal 100 faces the front of the owner, and (D) is a one-frame image (the alley is not shown) and detection data for three steps during a period Ts when it does not.
  • FIG. 6 is an explanatory diagram showing the relationship between a vanishing point and front determination.
  • FIG. 7 is an explanatory diagram for creating a learned model based on feature amount vectors extracted from inertial data and determination values.
  • FIG. 8 is a flowchart showing pre-learning/data collection processing.
  • FIG. 9 is a flowchart showing pre-learning/feature amount vector creation (learned model creation) processing.
  • FIG. 10 is a flowchart showing an example of front determination processing.
  • FIG. 11 is a flowchart showing an example of action classification processing.
  • FIG. 12 is a flowchart showing an example of face-to-face action processing.
  • FIG. 1 is a diagram showing an embodiment of a wearable monitoring device 10 (hereinafter referred to as the device 10) according to the present invention: (A) shows an example of how each member is housed in the case, and (B) shows the device hung from the neck.
  • The device 10 includes a case 10a made of, for example, resin and having a predetermined shape, in which the necessary members are installed.
  • Various shapes can be adopted for the case 10a, and in this embodiment, it has a plate shape, for example, a rectangular parallelepiped shape.
  • The rectangular parallelepiped is of a size that can be carried, for example corresponding to the size of a breast pocket; as an example, a width (left-right direction) of 6 cm, a height of 9 cm, and a depth (front-rear direction) of about 2 to 4 cm is assumed.
  • An inertial sensor 11, a three-dimensional distance sensor 12, an air volume sensor 13, and a power supply unit 14 are arranged in the case 10a, together with a control unit 15 (control module) whose circuit is formed on, for example, a flexible printed circuit board.
  • The inertial sensor 11 is a small MEMS (Micro Electro Mechanical Systems) sensor that detects acceleration and angular velocity with respect to the x-axis (left-right direction of the case 10a), the y-axis (vertical direction of the case 10a), and the z-axis (front-rear direction of the case 10a).
  • the three-dimensional distance sensor 12 is a small three-dimensional sensor that measures the distance to a target (reflection point) based on a signal obtained by emitting infrared laser light to a forward area and receiving reflected light from the target.
  • The light projecting/receiving window 121 corresponds to the laser beam emitting and receiving portion, and is arranged on the front surface of the case 10a so as to face the front-rear direction (z-axis). Behind it, a light emitting part and a light receiving part are arranged in a known manner.
  • In this embodiment, the three-dimensional distance sensor 12 is configured as a LiDAR. The distance to a target (reflection point) can be measured using the time difference (dToF) between the emission of pulsed laser light and its reception, or using the phase difference (iToF) of the received laser light with respect to the emitted light (a sine wave).
  • these measurement methods using LiDAR are well known, so descriptions thereof will be omitted.
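  • As an illustrative note only (not part of the original disclosure), both ranging principles reduce to well-known relations; with c the speed of light, Δt the pulse round-trip time, Δφ the measured phase shift, and f_mod the modulation frequency of the sine wave:

        d_{\mathrm{dToF}} = \frac{c\,\Delta t}{2}
        d_{\mathrm{iToF}} = \frac{c\,\Delta\varphi}{4\pi f_{\mathrm{mod}}}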
  • the three-dimensional distance sensor 12 is not limited to MEMS as long as it can be manufactured in a small size.
  • For example, a sensor of the Structured Light type, which is one form of LiDAR, can be adopted.
  • This Structured Light type sensor projects a predetermined dot pattern of infrared laser light, such as a grid or stripe pattern, from a dot projector (corresponding to the light emitting unit) onto the target, and captures the pattern deformed by the unevenness of the target with an image sensor (corresponding to the light receiving unit) placed at a predetermined relative position. In this way, the three-dimensional shape of the target surface is obtained as three-dimensional point cloud data.
  • FIGS. 2(A) to 2(C) are diagrams for explaining target detection by the three-dimensional distance sensor 12: FIG. 2(A) is a plan view explaining laser scanning, FIG. 2(B) is a side view explaining raster scanning, and FIG. 2(C) is a plan view explaining detection of the target surface by laser scanning.
  • The three-dimensional distance sensor 12 periodically transmits pulsed laser beams b1, ..., bj, b(j+1), ....
  • the three-dimensional distance sensor 12 sweeps the laser beam in two-dimensional (x-axis, y-axis) directions by gradually tilting a MEMS mirror (not shown) to raster scan the laser beam.
  • B shown in the drawing indicates the measurement area.
  • a plurality of light-receiving data are obtained from the target as three-dimensional point cloud data with high resolution so that, for example, the uneven shape of the surface of a person's face can be measured.
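  • The following is a minimal sketch, not taken from the disclosure, of how a measured distance d and the two scan angles could be converted into Cartesian point cloud coordinates; the function name and the axis convention are assumptions made for illustration.

        import numpy as np

        def scan_to_point_cloud(ranges, azimuths, elevations):
            """Convert LiDAR range/angle samples into an (N, 3) xyz point cloud.

            ranges     -- distance d to each reflection point [m]
            azimuths   -- horizontal scan angle of each pulse [rad]
            elevations -- vertical (raster) scan angle of each pulse [rad]
            Assumed axis convention: x = left-right, y = up-down, z = forward.
            """
            d = np.asarray(ranges, dtype=float)
            az = np.asarray(azimuths, dtype=float)
            el = np.asarray(elevations, dtype=float)
            x = d * np.cos(el) * np.sin(az)   # left-right component
            y = d * np.sin(el)                # up-down component
            z = d * np.cos(el) * np.cos(az)   # forward component
            return np.stack([x, y, z], axis=1)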
  • As the air volume sensor 13, a known thermal flow sensor configured by MEMS (Micro Electro Mechanical Systems), for example, can preferably be employed.
  • the airflow sensor 13 is arranged at a proper place in the case 10a in a direction aligned with the front-rear direction.
  • the air volume sensor 13 has, for example, a rectangular parallelepiped shape, and a tubular flow path 131 is formed across the front and rear.
  • The flow path 131 has a front end exposed on the front surface of the case 10a and a rear end exposed on the rear surface of the case 10a (not visible in FIG. 1(A)), so that the surrounding air flows through it.
  • In the middle of the flow path 131, a heater (not shown) is arranged, with a pair of temperature sensors placed on its front and rear sides; the flow rate of the air flow, that is, the surrounding air volume, is obtained by converting the temperature difference between the pair of temperature sensors. Based on the obtained air volume, the state of air flow or stagnation around the case 10a can be known. The orientation of the air volume sensor 13 with respect to the case 10a can be set according to the application, and the flow path may be oriented obliquely or bent at its midpoint.
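  • As a rough sketch only (the linear model and the calibration factor below are assumptions for illustration, not taken from the disclosure), the readout of such a calorimetric flow sensor can be expressed as:

        def air_flow_from_temps(t_upstream_c, t_downstream_c, k_cal=0.8):
            """Estimate the airflow through the flow path from the two temperature sensors.

            The heater warms the air between the sensors; flow carries heat downstream,
            so the signed temperature difference grows with flow speed.
            k_cal is a hypothetical calibration factor [(m/s) per K] determined
            experimentally for the actual sensor; the linear model only holds at low flow.
            """
            delta_t = t_downstream_c - t_upstream_c   # positive when air moves front -> rear
            velocity = k_cal * delta_t                # assumed linear low-flow regime
            return velocity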
  • For the power supply unit 14, batteries of a predetermined size can be adopted, such as AA size (JIS) dry batteries or cylindrical lithium batteries (for example, 18.5 mm in diameter and 65.3 mm in length).
  • The power supply unit 14 contains a required number of batteries, for example four batteries in the example of FIG. 1.
  • the control unit 15 is for low power consumption, and is configured, for example, by including a processor (CPU), peripheral components, input/output interfaces, and necessary connectors on a single printed circuit board.
  • The case 10a has a suspension member, for example through-holes 10b formed on the left and right sides of its upper portion, through which a neck strap 20 is passed; as shown in FIG. 1(B), the case can be hung around the neck of the holder. In a normal posture in which the holder is seated or standing still, the back surface of the flat plate-shaped case 10a is in contact with the holder's chest, and the forward direction (z-axis) of the case 10a coincides with the front direction of the holder. On the other hand, the case 10a may swing somewhat during walking. Depending on the application, the device may instead be attached to the front of the waist using a belt or the like so that the movements of the feet of other people in the vicinity can be measured intensively.
  • FIG. 3 is a configuration diagram showing functional blocks of the device 10 and some functional blocks of the mobile communication terminal 30.
  • The inertial sensor 11, the three-dimensional distance sensor 12, and the airflow sensor 13 are connected to the control unit 15.
  • the control unit 15 efficiently supplies power from the power supply unit 14 to itself and necessary sensors and the like at appropriate timings.
  • The storage unit 16 includes a control program storage unit 160 that stores various control programs for monitoring, learned model storage units 161 and 162 that store learned models described later, a measurement data storage unit 163 that stores the measurement data of each sensor, and a storage area used as a work area for temporarily holding data being processed.
  • A smartphone is applicable as the mobile communication terminal 30.
  • Measurement result data or monitoring results of the device 10 are transmitted to the mobile communication terminal 30 via a short-range communication unit 157 (see FIG. 4) using, for example, Wi-Fi and a short-range communication unit 321 on the terminal side, so that the monitoring result information reaches the owner's smartphone and can be used more effectively there.
  • the mobile communication terminal 30 has a display unit 311 arranged in the center of the main body 31 and a speaker 312 arranged in the upper part thereof.
  • FIG. 4 is a functional configuration diagram mainly of the control unit 15 of the device 10.
  • By executing the control programs, the control unit 15 functions as a front determination unit 151, a step determination unit 152, a point cloud segment creation unit 153, a classification determination unit 154, an action classification determination unit 155, a timer unit 156, and a short-range communication unit 157.
  • the control unit 15 uses the learned model stored in the learned model storage unit 161 to execute front determination processing.
  • the control unit 15 uses each learned model stored in the learned model storage unit 162 to execute the classification estimation process and the action classification estimation process. Details will be described later.
  • the front determination unit 151 detects the timing when the case 10a hanging from the holder's neck faces the front of the holder while walking.
  • The direction of the case 10a may continue to swing while walking, but at certain timings it faces the front of the holder. Therefore, by causing the three-dimensional distance sensor 12 to perform a measurement operation at the timing when the case 10a faces the front direction of the owner, the target located in the front direction of the owner can be measured accurately and stably while saving power.
  • In this embodiment, machine learning is used to detect this timing more accurately. A method of generating the learned model used for this machine learning is described below.
  • Various machine learning methods can be used as the machine learning method, and supervised learning may be used to reduce the processing load.
  • Supervised learning is a machine learning method in which problems with known answers are given to a program that simulates, for example, a neural network, and the parameters (e.g., weighting factors) in the program are gradually and automatically adjusted so that the answer calculated by the program approaches the prepared answer.
  • a trained model includes at least parameters after adjustment, and may also include a program.
  • FIGS. 5 to 7 are diagrams for explaining front determination.
  • FIG. 5 is a diagram for explaining the y-axis acceleration (acceleration in the direction of gravity) and front determination during walking.
  • FIG. 6 is an explanatory diagram showing the relationship between the vanishing point and front determination.
  • FIG. 7 is an explanatory diagram for creating a learned model based on feature amount vectors extracted from inertial data and determination values. The data of FIGS. 5 and 6 require a landscape image for judging the front; therefore, for data collection, a known mobile communication terminal 100 (model name: Pixel 3, manufactured by Google Inc.), which has an inertial sensor 101 and a camera 102 that can be used for this purpose, was hung from the neck with a string of the same length as the neck strap 20.
  • In FIG. 5, (A) is a one-frame image showing an alley in a street and illustrates the shake state of the mobile communication terminal 100, (B) is the data detected during walking by the y-axis acceleration sensor (corresponding to the direction of gravity) of the inertial sensor 101, (C) is a one-frame image (showing the alley) and the detection data for three steps during a period Tf that includes the timing when the mobile communication terminal 100 faces the front of the owner, and (D) is a one-frame image (not showing the alley) and the detection data for three steps during a period Ts when the mobile communication terminal 100 is not facing the front of the owner.
  • FIGS. 5 and 6 show data obtained with the inertial sensor 101 and the camera 102 mounted on the mobile communication terminal 100 during about 30 seconds of straight walking, from a stationary state through the start of walking.
  • As shown in FIG. 6, "perspective lines" indicating perspective are extracted from the frame image, and the "vanishing point", which is the intersection of those lines, is obtained. A binary value ("0" or "1") indicating whether or not the smartphone, that is, the mobile communication terminal 100, is facing the front of the owner is determined according to whether or not the vanishing point is located in the central area when the image frame is divided into three equal parts in the vertical direction. This binary value is called a determination value (see FIG. 7). Depending on the application, the front direction may instead be determined by whether the vanishing point is located in the central area of a trisection in the horizontal direction.
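  • The sketch below is an illustration only: the use of OpenCV's Canny/Hough functions and the median-intersection estimate are assumptions made for the example, not the extraction method of the disclosure.

        import itertools
        import numpy as np
        import cv2

        def determination_value(frame_bgr):
            """Return 1 if the estimated vanishing point lies in the central third
            of the frame (vertical trisection), else 0."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                    minLineLength=60, maxLineGap=10)
            if lines is None:
                return 0
            # Intersect pairs of detected "perspective lines"; take the median
            # intersection as a crude vanishing-point estimate.
            pts = []
            for (l1, l2) in itertools.combinations(lines[:30, 0], 2):
                x1, y1, x2, y2 = l1
                x3, y3, x4, y4 = l2
                den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
                if abs(den) < 1e-6:
                    continue  # parallel lines have no intersection
                px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / den
                py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / den
                pts.append((px, py))
            if not pts:
                return 0
            vy = float(np.median([p[1] for p in pts]))   # vertical coordinate of vanishing point
            h = frame_bgr.shape[0]
            return 1 if h / 3 <= vy <= 2 * h / 3 else 0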
  • Within the walking cycle of one step, the period Tf that includes the timing when the mobile communication terminal 100 faces the front of the owner corresponds to a particular phase of the gait of one leg, whereas the period Ts during which the mobile communication terminal 100 is not facing the front of the owner is the period from immediately after the landing of that leg to the middle stage of its stance.
  • For each axis, the measurement values within a predetermined time width (time window) extending from the latest sample into the past are collected and standardized (normalized) per axis, and preset feature amounts are extracted from them. The feature amounts are the mean value (mean), standard deviation (std), minimum value (min), maximum value (max), and the series of samples in the window ([0] to [n-1]); the features of the six axes are combined to form a feature amount vector.
  • The parameters in the program are adjusted by repeatedly executing the machine learning program with a plurality of extracted feature amount vectors as inputs and the corresponding determination values as answers. When a sufficient amount of learning has been completed, the result is stored in the learned model storage unit 161 as a learned model.
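  • The following sketch is an illustration only: the per-axis layout follows the (n+4)-dimensional description above, while scikit-learn's SVC is used merely as a stand-in for the (unspecified) supervised learning program of the disclosure.

        import numpy as np
        from sklearn.svm import SVC   # stand-in for the supervised learning program

        def axis_features(window):
            """Per-axis (n+4)-dimensional vector: normalized samples plus 4 statistics."""
            w = np.asarray(window, dtype=float)
            std = w.std() or 1.0
            norm = (w - w.mean()) / std                  # standardized window [0]..[n-1]
            stats = [w.mean(), w.std(), w.min(), w.max()]
            return np.concatenate([norm, stats])

        def feature_vector(imu_window):
            """imu_window: dict mapping each of the 6 axes to n samples in the time window."""
            axes = ("acc_x", "acc_y", "acc_z", "gyro_x", "gyro_y", "gyro_z")
            return np.concatenate([axis_features(imu_window[a]) for a in axes])

        def train_front_model(windows, determination_values):
            """Fit a supervised model pairing feature vectors with 0/1 determination values."""
            X = np.stack([feature_vector(w) for w in windows])
            y = np.asarray(determination_values)
            model = SVC(kernel="rbf").fit(X, y)
            return model   # the fitted parameters play the role of the learned model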
  • FIG. 8 is a flowchart showing pre-learning/data collection processing.
  • FIG. 9 is a flowchart showing learned model creation processing.
  • FIG. 10 is a flowchart showing front determination processing.
  • the pre-learning/data collection processing shown in FIG. 8 is performed using the mobile communication terminal 100 .
  • First, the vanishing point for each time slot t is calculated from the frame image obtained in that time slot in step S5 of FIG. 8. Specifically, a perspective line extraction process is executed, the intersection point (address) of the lines is obtained as the vanishing point, and whether or not the vanishing point is located in the central area of the image frame is determined by comparing the two addresses; the determination value is set to "1" or "0" accordingly (step S11).
  • In step S13, the average value, standard deviation, maximum value, and minimum value calculated from the detection data are appended to the n window samples to create an (n+4)-dimensional vector for each axis, and in step S15 the vectors created for the six axes are combined to create a feature amount vector.
  • A learned model is then created by pairing, for each time slot t, the combined feature amount vector with the determination value indicating whether or not the terminal is facing the front (step S17).
  • In step S19, it is determined whether or not a predetermined amount of data, for example one step each for the left and right legs, has been acquired. If data acquisition has not been completed, the next time slot t is set (step S21) and similar processing is repeated; if data acquisition has been completed (Yes in step S19), this flow ends.
  • the determination of the position of the vanishing point in the frame image in step S11 may be performed automatically using a determination program, or may be set by a supervised learning method.
  • The learning process in FIGS. 8 and 9 is not limited to a single run; it is usually repeated multiple times to improve the accuracy of determination in actual use (see, for example, FIG. 10) and to make the model highly practical.
  • In the pre-learning, the camera 102 of the mobile communication terminal 100 is used to capture images, and learning of whether or not each captured image corresponds to the front direction is performed.
  • Alternatively, pre-learning regarding front determination may be performed using a housing of the same shape as the case 10a.
  • In that case, the determination value may be obtained by referring to measurement data of the three-dimensional distance sensor 12 acquired while walking in a predetermined place for pre-learning.
  • FIG. 10 shows front determination processing in an actual usage scene of the device 10.
  • time is measured, and each time a time slot t is reached (steps S31, S33), detection data from the inertial sensor 11 is acquired (step S35).
  • a feature amount vector is calculated from the acquired detection data (step S37) and input to the front determination module (front determination unit 151) (step S39).
  • the front determination module determines whether or not the device 10 is facing the front of the owner with respect to the inertia data acquired at the current time slot t, based on the input feature amount vector and the learned model (step S41).
  • If it is determined that the device 10 is facing the front, the measurement operation of the three-dimensional distance sensor 12 is instructed (step S43).
  • In this way, the operation timing of the three-dimensional distance sensor 12 during walking is determined.
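  • A minimal sketch of the runtime loop of FIG. 10 follows; read_imu_window, feature_fn, and lidar are hypothetical placeholders for the device's own IMU buffer, feature extraction, and three-dimensional distance sensor, and the slot length is an assumed value.

        import time

        def front_determination_loop(model, read_imu_window, feature_fn, lidar, slot_sec=0.05):
            """Each time slot t: build a feature vector from the latest IMU window,
            ask the learned model whether the case faces the owner's front, and
            trigger a LiDAR measurement only at those timings (steps S31-S43)."""
            while True:
                time.sleep(slot_sec)                      # wait for the next time slot t (S31, S33)
                window = read_imu_window()                # inertial detection data (S35)
                x = feature_fn(window).reshape(1, -1)     # feature amount vector (S37, S39)
                if bool(model.predict(x)[0]):             # front determination (S41)
                    yield lidar.capture()                 # measurement operation (S43)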
  • In this embodiment, the accelerations and angular velocities of all three axes of the inertial sensor 11 are used for judging the front during walking, but only some of the accelerations or angular velocities may be used instead.
  • the step determination unit 152 determines whether the person wearing the device 10 is walking.
  • Various determination methods are known for determining whether or not a person is walking.
  • a determination method using the inertial sensor 11 will be described. All or part of the sensor data of the inertial sensor 11 may be used.
  • Whether or not the user is walking can be determined from the fact that the detection value of the y-axis acceleration sensor and the detection value of the z-axis angular velocity sensor increase and decrease periodically in accordance with walking.
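  • As a hedged illustration of such a periodicity check (the thresholds are placeholders; the actual criterion of the disclosure is not specified in this detail):

        import numpy as np

        def is_walking(acc_y, min_peaks=3, min_amp=1.0):
            """Decide walking/not-walking from the periodic rise and fall of the
            y-axis (gravity-direction) acceleration over the recent window.

            acc_y     -- recent y-axis acceleration samples [m/s^2]
            min_amp   -- assumed minimum swing of a footstep above the mean
            min_peaks -- assumed minimum number of steps in the window
            """
            a = np.asarray(acc_y, dtype=float) - np.mean(acc_y)
            peaks = 0
            i = 1
            while i < len(a):
                if a[i - 1] <= 0 < a[i]:                   # upward zero crossing
                    j = i
                    while j < len(a) and a[j] > 0:
                        j += 1
                    if a[i:j].max() >= min_amp:            # strong enough swing = one step
                        peaks += 1
                    i = j
                else:
                    i += 1
            return peaks >= min_peaks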
  • The point cloud segment creation unit 153 cuts out (extracts), as point cloud segments, the point clouds corresponding to a person, an object, or the background (such as a wall), in units of targets, from the three-dimensional point cloud data acquired by the three-dimensional distance sensor 12.
  • When a LiDAR using dToF is adopted as the three-dimensional distance sensor 12, as shown in FIG. 2, the distance to the reflection point p on the surface of the nearest target to be measured (e.g., person 41) is accurately measured according to dToF.
  • a set of measured distance information of each reflection point p is defined as three-dimensional point cloud data.
  • the three-dimensional point cloud data includes at least the distance d and direction (scan angle ⁇ and raster scan angle) to each reflection point p on the surface of the target (person 41, background 51, other objects, etc.).
  • The point cloud segment creation unit 153 cuts out, from the three-dimensional point cloud data, the data within a range that can be regarded as one individual, based on the state of distance change over the target surface, and forms it into a point cloud segment.
  • The extraction conditions may include the typical size (diameter) of people, objects, and background targets, the number of reflection points, the surface shape, and the degree of discontinuity (separation distance) between adjacent three-dimensional point cloud data.
  • The three-dimensional point cloud data may represent almost the whole body, or only the upper body or facial parts, and the point cloud segment extraction conditions can be set appropriately according to these cases.
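  • A minimal sketch of such discontinuity-based cutting follows; the separation threshold, minimum segment size, and scan-order assumption are illustrative only.

        import numpy as np

        def cut_point_cloud_segments(points, max_gap=0.25, min_points=10):
            """Split a point cloud (N, 3) into segments at distance discontinuities.

            points     -- xyz points in scan order (one raster sweep), in meters
            max_gap    -- assumed separation distance regarded as a new target
            min_points -- discard tiny fragments (noise)
            Returns a list of (M_i, 3) arrays, one per candidate target.
            """
            pts = np.asarray(points, dtype=float)
            if len(pts) == 0:
                return []
            gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # neighbor spacing
            cut_idx = np.where(gaps > max_gap)[0] + 1             # start index of each new segment
            segments = np.split(pts, cut_idx)
            return [s for s in segments if len(s) >= min_points]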
  • the classification determination unit 154 determines whether the extracted point cloud segment corresponds to a person, an object, or the background.
  • The classification determination unit 154 may determine the category by comparison with preset determination conditions, but in this embodiment, machine learning is applied and the determination is made using a trained model obtained in advance.
  • When acquiring the parameters of the trained model by machine learning, the processing load of the machine learning program is reduced by appropriately compressing the data using feature values related to the point cloud segment.
  • Specifically, the point cloud segments of persons, objects, and the background are modeled as a Gaussian mixture model (GMM), or more preferably converted into further compressed Fisher vector feature values, before being input to the machine learning program. These Fisher vector feature values, linked to the answers (correct categories), are input to a support vector machine (SVM), and the parameters of the SVM classification determination program are adjusted.
  • At least the adjusted parameters are then stored in the learned model storage unit 162 as a learned model for classification determination.
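  • An illustrative, simplified sketch of this pipeline follows: scikit-learn is used as a stand-in, and only the mean-gradient part of the Fisher vector is computed, so it is not the exact encoding or learning program of the disclosure.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.svm import SVC

        def fit_gmm(all_points, n_components=8):
            """Background GMM fitted on raw 3D points pooled from many segments."""
            return GaussianMixture(n_components=n_components,
                                   covariance_type="diag", random_state=0).fit(all_points)

        def fisher_vector(segment_points, gmm):
            """Simplified Fisher vector: gradients with respect to the GMM means only."""
            x = np.asarray(segment_points, dtype=float)
            gamma = gmm.predict_proba(x)                      # (N, K) responsibilities
            mu, sigma, w = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
            n = len(x)
            fv = []
            for k in range(gmm.n_components):
                diff = (x - mu[k]) / sigma[k]                 # (N, 3) whitened offsets
                g_mu = (gamma[:, k:k + 1] * diff).sum(axis=0) / (n * np.sqrt(w[k]))
                fv.append(g_mu)
            return np.concatenate(fv)                         # K * 3 dimensions

        def train_classifier(segments, labels, gmm):
            """labels: e.g. 0 = person, 1 = object, 2 = background (the answers)."""
            X = np.stack([fisher_vector(s, gmm) for s in segments])
            clf = SVC(kernel="linear").fit(X, labels)
            return clf                                        # adjusted SVM parameters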
  • the classification determination unit 154 identifies point cloud segments corresponding to people. Then, the behavior of the person identified by the classification determination unit 154 is determined by the behavior classification determination unit 155 .
  • the behavior classification determination unit 155 determines the behavior of the person identified by the classification determination unit 154 based on the point cloud segment. In addition, the action classification determination unit 155 estimates, for example, the position and movement speed as action elements for classifying human actions.
  • the moving speed is obtained by performing so-called multi-object tracking (MOT) using a Kalman filter on the difference between the current and the previously acquired positions of the person.
  • The Kalman filter is used to estimate time-varying values (position and movement speed) from observations containing discrete errors, and MOT is a method of individually labeling multiple point cloud segments so that each of them can be tracked based on its motion.
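  • A minimal constant-velocity Kalman filter sketch for one tracked person follows; the state layout, noise values, and time step are illustrative assumptions, not parameters of the disclosure.

        import numpy as np

        class PersonTrack:
            """Constant-velocity Kalman filter over the state [x, z, vx, vz] (ground plane)."""

            def __init__(self, xz0, dt=0.5, q=0.5, r=0.05):
                self.x = np.array([xz0[0], xz0[1], 0.0, 0.0])    # state estimate
                self.P = np.eye(4)                               # state covariance
                self.F = np.array([[1, 0, dt, 0],                # constant-velocity motion model
                                   [0, 1, 0, dt],
                                   [0, 0, 1, 0],
                                   [0, 0, 0, 1]], dtype=float)
                self.H = np.array([[1, 0, 0, 0],                 # only position is observed
                                   [0, 1, 0, 0]], dtype=float)
                self.Q = q * np.eye(4)                           # process noise (assumed)
                self.R = r * np.eye(2)                           # measurement noise (assumed)

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[:2]                                # predicted position

            def update(self, measured_xz):
                """measured_xz: centroid of the person's point cloud segment."""
                y = np.asarray(measured_xz) - self.H @ self.x    # innovation
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P
                return self.x[:2], self.x[2:]                    # position, velocity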
  • The action classification of another person includes, for example, the other person's "relative position and movement speed" and "relative stop position"; when the point cloud segment focuses on facial parts, various behavioral patterns such as "talking (or being silent)" and "eating" can be identified from the movement of the lips and the like, and feature values are set for each of them.
  • Knowing the movements of others nearby while walking is especially useful for visually impaired people as movement support; for example, a voice guidance such as "There is a person stopped 1 m ahead" can be output through the speaker 312.
  • In addition, the air flow (or stagnation) between the two persons measured by the air volume sensor 13 and the close-proximity time measured by the timer unit 156 can also be used as factors.
  • the behavior classification determination unit 155 determines the behavior of the person from each piece of information on the position and movement speed of the person based on the point cloud segment.
  • The behavior classification determination unit 155 may determine the category by comparison with preset determination conditions, but in this embodiment, machine learning is applied and the determination is made using a learned model obtained in advance.
  • As the feature amounts, feature amounts corresponding to the various types of actions described above are created and applied.
  • As the machine learning method, a well-known support vector machine (SVM) classification method, which is light in load and based on supervised learning, can be adopted. As in the classification determination, the feature amounts may be modeled as a Gaussian mixture model (GMM) or compressed into Fisher vector feature values before being input to the machine learning program, whereby the parameters of the SVM program for action classification determination are adjusted.
  • Of the behavior classification determination program and the adjusted parameters, at least the adjusted parameters are separately stored in the learned model storage unit 162 as a learned model for behavior classification determination.
  • the behavior classification determination unit 155 obtains behavior classification determination results corresponding to human behavior based on the point group segments measured by the three-dimensional distance sensor 12.
  • FIG. 11 is a flowchart showing an example of action classification processing.
  • First, an operation instruction is input to the three-dimensional distance sensor 12 (step S51). When it is determined that the user is walking, the operation instruction is input every time the device 10 faces the front of the owner (i.e., every step); when the user is not walking (e.g., sitting or standing still), it is input periodically (for example, several times per second).
  • If there is no operation instruction, the process skips to step S65; if there is an operation instruction, the three-dimensional distance sensor 12 is operated to acquire three-dimensional point cloud data (step S53). Next, point cloud segments are obtained by cutting out the three-dimensional point cloud data of each target (step S55). A feature amount vector generated from each obtained point cloud segment is then input to the classification determination program, and it is determined whether the point cloud segment is a person, an object, or the background (step S57).
  • The position and speed of each point cloud segment determined to be a person are estimated (step S59), and a feature amount vector including the position and speed is created and input to the action classification determination program, so that the action content of that point cloud segment is determined (step S61).
  • Processing is then executed according to the determination result (step S63).
  • the processing according to the determination result corresponds to the output form of the determination result, for example, recording in the recording unit, outputting as audio or image, and notifying on the mobile communication terminal 30 side via short-distance communication.
  • Finally, it is determined whether or not the process is to be ended (step S65); if not, the process returns to step S51.
  • FIG. 12 is a flowchart showing an example of face-to-face behavior processing.
  • Face-to-face behavior processing is one aspect of behavior classification, and refers to monitoring of the degree of closeness in a face-to-face state with another person.
  • the point cloud segment is acquired from the classification determination unit 154, and the action classification is determined based on it (step S73).
  • In this way, the degree of closeness relating to the risk of infection with infectious diseases can be monitored.
  • When a dense state is detected, a warning to that effect, for example a voice message, is issued (step S75).
  • The greater the stagnation of wind and air currents, the greater the risk, so a shorter allowable close-contact time may be set. Likewise, when another person is at the facing position, the shorter the facing distance and the longer the conversation time, the higher the risk, so a shorter allowable close-contact time may be set.
  • Monitoring results and warnings are notified from the device 10 side when the device 10 is equipped with an output unit for sound and images, while the output unit of the mobile communication terminal 30 may also be used when notification is performed via the terminal. When the monitoring operation (step S77) is completed, this flow ends.
  • In the above embodiment, the device 10 is hung from the neck of the owner, but the present invention can also be applied with the device 10 installed or attached at a predetermined location. That is, the device 10 can be attached to a suitable place on a wall surface or shelf in a retail store, home, or facility, or placed using a jig, in order to grasp shopping behavior in a retail store, measure the flow of people, or grasp behavior inside the home or facility.
  • For grasping shopping behavior, the device 10 is arranged toward a target shopping corner in the store, and the number of people gathering or the number of people purchasing is counted; for people-flow measurement, the number of people crossing the measurement area is judged and counted.
  • For grasping behavior, the device 10 is placed facing the target area, and the actions of people passing through or appearing in the area are grasped.
  • That is, the wearable monitoring device preferably includes: a three-dimensional distance sensor having a light emitting unit that periodically emits light for distance measurement to a front area and a light receiving unit that receives, of the emitted light, the light reflected from a target; a control module for performing information processing on a person in the target based on three-dimensional point cloud data, acquired by the three-dimensional distance sensor, indicating the distance to each reflection point of the target; and a small case containing the three-dimensional distance sensor and the control module, the control module including determination means for determining the behavior of the person based on the three-dimensional point cloud data.
  • Preferably, the case further incorporates a power supply unit having a battery for supplying operating power to the three-dimensional distance sensor and the control module.
  • According to this configuration, the present device can be configured by the case alone, without an externally attached battery.
  • the case is preferably breast pocket size. According to this configuration, such a small size reduces the weight burden and enables natural walking even when carried.
  • Preferably, the wearable monitoring device further includes an inertial sensor installed in the case, the case includes a suspension member, the inertial sensor detects the shaking of the case while it is suspended, and the control module includes instruction means for instructing the three-dimensional distance sensor to perform a measurement operation based on the detected swing direction of the case.
  • According to this configuration, even though the case may shake while walking, for example, the measurement operation can be performed at a timing when the case is oriented in a desired direction.
  • The inertial sensor preferably includes at least one of an acceleration sensor and an angular velocity sensor. According to this configuration, the movement and orientation of the case can be known accurately moment by moment, so that the measurement operation of the three-dimensional distance sensor can be performed at more appropriate timing.
  • Preferably, the instruction means instructs the three-dimensional distance sensor to perform the measurement operation at the timing when the case is detected to face forward. According to this configuration, the timing at which the case faces forward can be known more appropriately even under periodic shaking such as during walking.
  • Preferably, the instruction means instructs the measurement operation of the three-dimensional distance sensor at a predetermined cycle when no shake of the case is detected. According to this configuration, when the case is not shaking, it can be assumed that the case is oriented in a direction suitable for the measurement operation of the three-dimensional distance sensor, so the measurement operation can be performed at a preset cycle. This cycle can be set shorter or longer than the measurement timing during walking, depending on the application.
  • Preferably, the control module includes timekeeping means, and the determination means measures the detection time with the timekeeping means when a person is detected in the front direction of the case.
  • Preferably, the wearable monitoring device further comprises an airflow sensor, arranged with a part thereof exposed on the outer surface of the case, for measuring the flow rate of the airflow around the case, and the determination means measures the flow rate of the airflow with the airflow sensor when a person is detected in the front direction of the case. According to this configuration, when it is determined that another person is at the facing position and the airflow between them is small, the risk of infection with infectious diseases increases, so monitoring and warning can be provided by timing the detection time.
  • Preferably, the determination means evaluates the dense state using information on at least one of the detection time and the flow rate of the airflow. According to this configuration, the dense state is evaluated over time in accordance with the stagnation state of the airflow.
  • Preferably, the wearable monitoring device includes a short-range communication unit installed in the case and capable of short-range communication with a specific mobile communication terminal equipped with a notification unit, and the three-dimensional point cloud data is transmitted to the specific mobile communication terminal via the short-range communication unit.
  • According to this configuration, an advanced processing program, for example a dedicated application program registered in a mobile communication terminal such as a smartphone, can be used to process the three-dimensional point cloud data more accurately, and the result can be notified, for example as voice guidance, from the notification unit of the mobile communication terminal.
  • REFERENCE SIGNS LIST: 10 wearable monitoring device; 10a case; 10b through hole (suspension member); 11 inertial sensor; 12 three-dimensional distance sensor; 13 air volume sensor; 14 power supply unit; 15 control unit (control module); 151 front determination unit (instruction means); 152 step determination unit; 153 point cloud segment creation unit; 154 classification determination unit (determination means); 155 action classification determination unit (determination means); 156 timekeeping unit (timekeeping means); 157 short-range communication unit; 161, 162 learned model storage unit

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The wearable monitoring device (10) of the invention comprises: a three-dimensional distance sensor (12) for periodically emitting laser light toward a front region and receiving light reflected from a target; a control module (15) for processing information relating to a person in the target on the basis of three-dimensional point cloud data, acquired by the three-dimensional distance sensor (12), indicating the distance to each reflection point on the target; and a box-shaped case (10a) in which the three-dimensional distance sensor (12) and the control module (15) are arranged. The control module (15) performs extraction determination of the person and action determination of the person on the basis of the three-dimensional point cloud data. The surrounding situation is thereby monitored in real time in a stable manner, with minimal invasion of privacy, while being power-saving and small enough to be worn on the body.
PCT/JP2022/042987 2022-01-26 2022-11-21 Wearable monitoring device WO2023145216A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022010090 2022-01-26
JP2022-010090 2022-01-26

Publications (1)

Publication Number Publication Date
WO2023145216A1 true WO2023145216A1 (fr) 2023-08-03

Family

ID=87471409

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/042987 WO2023145216A1 (fr) 2022-01-26 2022-11-21 Wearable monitoring device

Country Status (1)

Country Link
WO (1) WO2023145216A1 (fr)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001160137A (ja) * 1999-09-22 2001-06-12 Fuji Heavy Ind Ltd Distance correction device for monitoring system and vanishing point correction device for monitoring system
EP3157233A1 (fr) * 2015-10-13 2017-04-19 Thomson Licensing Portable device, method for operating such a device, and computer program
US20180114063A1 (en) * 2016-10-26 2018-04-26 Orcam Technologies Ltd. Providing a social media recommendation based on data captured by a wearable device
JP2020507177A (ja) * 2017-02-09 2020-03-05 Laing O'Rourke Australia Pty Ltd System for identifying a defined object
JP2019016351A (ja) * 2017-07-07 2019-01-31 トヨタ自動車株式会社 Human density estimation based on pedestrian safety messages
US20210312684A1 (en) * 2020-04-03 2021-10-07 Magic Leap, Inc. Avatar customization for optimal gaze discrimination

Similar Documents

Publication Publication Date Title
US11044402B1 (en) Power management for optical position tracking devices
JP2022002144A (ja) Systems and methods for augmented reality
US9747697B2 (en) System and method for tracking
US8854594B2 (en) System and method for tracking
EP2915025B1 (fr) Wireless watch-type computing and control device and method for 3D imaging, mapping, social networking and interfacing
CN102549619B (zh) Human tracking system
CN102665838B (zh) Method and system for determining and tracking extremities of a target
CA3073920C (fr) Collision detection, estimation and avoidance
US20140306874A1 (en) Near-plane segmentation using pulsed light source
CN113454518A (zh) Multi-camera cross reality device
CN105190703A (zh) 3D environment modeling using photometric stereo
US9682482B2 (en) Autonomous moving device and control method of autonomous moving device
US20130069939A1 (en) Character image processing apparatus and method for footskate cleanup in real time animation
JP2022537817A (ja) Fast hand meshing for dynamic occlusion
WO2023145216A1 (fr) Wearable monitoring device
EP3646147B1 (fr) Computer-assisted reality display apparatus
US11233937B1 (en) Autonomously motile device with image capture
WO2021177471A1 (fr) Detection device, tracking device, detection program, and tracking program
CN112883913A (zh) Child tooth-brushing training and teaching system and method, and electric toothbrush
JP2003346150A (ja) Floor surface recognition device, floor surface recognition method, and robot device
TWI413018B (zh) Volume recognition method and system
Yang 3D Sensing and Tracking of Human Gait
US11188811B2 (en) Communication apparatus
JP7244011B2 (ja) Attention state monitoring system, attention state monitoring device, attention state monitoring method, and computer program
CN105164617A (zh) Self-discovery of autonomous NUI devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22924068

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023576645

Country of ref document: JP