WO2024097300A1 - Mapping a living area using LIDAR - Google Patents
Mapping a living area using LIDAR
- Publication number
- WO2024097300A1 (PCT/US2023/036608)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- living area
- lidar
- features
- neural network
- data
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/66—Radar-tracking systems; Analogous systems
- G01S13/72—Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
- G01S13/04—Systems determining presence of a target
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
- G01S13/50—Systems of measurement based on relative movement of target
- G01S13/52—Discriminating between fixed and moving objects or between objects moving at different speeds
- G01S13/56—Discriminating between fixed and moving objects or between objects moving at different speeds for presence detection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S13/865—Combination of radar systems with lidar systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/415—Identification of targets based on measurements of movement associated with the target
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/417—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
Definitions
- This application is directed to the field of remote monitoring of activities using interconnected hardware, software, and cloud components, and more particularly to using LIDAR to map a living area that is monitored using an ultra-wideband radar and a geometric and AI model trained on appropriate samples.
- The systems monitor movement of the occupants throughout a living area and, in many cases, are configured to alert caregivers if detected movement patterns indicate certain conditions or situations.
- Radar is used to detect movements of the occupants by dynamic modeling of user states with a point cloud that corresponds to body movements of one or more users, generated by the radar(s) of the tracking device(s).
- Point clouds may be determined based on range-azimuth and range-elevation heat maps from the radar tracking device(s). The density of the point cloud and the accuracy of the representation may depend on various characteristics of a radar device. For a variety of commercially available inexpensive radars, a detection device may generate on average 50-120 points for a person walking across a room, and a geometric approximation of the body may be limited to a dynamically changing bounding box of the point cloud.
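- As a rough illustration of this step (a sketch with assumed thresholds and array shapes, not the patent's actual pipeline), a heat map can be thresholded against its noise floor and the surviving cells converted to Cartesian points:

```python
import numpy as np

def heatmap_to_points(heatmap, ranges, azimuths, snr_threshold=6.0):
    """Convert a range-azimuth heat map (dB) to 2-D Cartesian points.

    heatmap  -- 2-D array indexed [range_bin, azimuth_bin]
    ranges   -- 1-D array of range-bin centers in meters
    azimuths -- 1-D array of azimuth-bin centers in radians
    Cells exceeding the noise floor by snr_threshold dB become points.
    """
    noise_floor = np.median(heatmap)
    r_idx, a_idx = np.nonzero(heatmap > noise_floor + snr_threshold)
    r = ranges[r_idx]
    a = azimuths[a_idx]
    return np.column_stack((r * np.cos(a), r * np.sin(a)))  # (x, y) pairs

# Example: a synthetic 64x32 heat map with one bright region
rng = np.random.default_rng(0)
hm = rng.normal(0.0, 1.0, (64, 32))
hm[20:24, 10:13] += 15.0  # simulated target return
pts = heatmap_to_points(hm, np.linspace(0.1, 8.0, 64), np.linspace(-1.2, 1.2, 32))
print(f"{len(pts)} points extracted")
```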
- A user point cloud may reflect substantially two categories of user state: moving (such as walking or moving in a cart across the room, turning in a bed or in a chair, performing exercises such as squats, etc.) and static (standing, sitting, lying down in the bed or on the floor after falling). The density of the user point cloud in a static state may be significantly lower than in a moving state because body movement in a static state is essentially restricted to chest movement due to breathing and heartbeats.
- Point cloud density, along with the dimensions and position of a bounding box and the center of gravity of the point cloud, may be used to determine user state by direct geometric processing, by AI methods such as LSTM (long short-term memory) based recurrent neural network classifiers, by combined direct/AI methods, or otherwise.
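- A minimal sketch of the direct geometric approach is given below; the point-count and bounding-box-height thresholds are illustrative assumptions, not values from the patent:

```python
def classify_state(points_per_frame, bbox_height_m):
    """Crude direct-geometric state estimate from one radar frame.

    points_per_frame -- number of points in the occupant's point cloud
    bbox_height_m    -- height of the occupant's bounding box in meters
    The thresholds below are illustrative guesses, not patent values.
    """
    if points_per_frame > 50:          # dense cloud: whole body is moving
        return "moving"
    if bbox_height_m > 1.2:            # sparse but tall: breathing while upright
        return "static-standing"
    return "static-lying-or-sitting"   # sparse and low to the ground

print(classify_state(80, 1.7))  # -> moving
print(classify_state(12, 1.6))  # -> static-standing
print(classify_state(9, 0.5))   # -> static-lying-or-sitting
```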
- Prior to using radar to monitor occupants in a living space, it is useful to have information about the living space, including the layout of the space and the possible location(s) of sources of movement that are independent of movement of the occupant(s) being monitored, called "false targets". False targets can include, for example, a rotating fan or moving curtains in front of open windows. These types of movements may be detected by radar sensor(s) and characterized as point clouds, but are not related to any movement of any occupant being monitored. Another type of false target is the multipath target, caused by signals reflecting off humans and onto walls, thereby creating apparent targets on the walls that could be mistaken for humans. Thus, being able to eliminate point clouds generated by false targets tends to increase the accuracy of using point clouds to measure movement and position of one or more occupants in a living area.
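- As an example of such elimination, radar points falling inside known false-target bounding boxes can simply be discarded; the sketch below uses hypothetical 2-D boxes for simplicity:

```python
import numpy as np

def remove_false_target_points(points, false_target_boxes):
    """Drop radar points that fall inside any known false-target bounding box.

    points             -- (N, 2) array of (x, y) positions in meters
    false_target_boxes -- list of (xmin, ymin, xmax, ymax) boxes from the map
    """
    keep = np.ones(len(points), dtype=bool)
    for xmin, ymin, xmax, ymax in false_target_boxes:
        inside = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
                  (points[:, 1] >= ymin) & (points[:, 1] <= ymax))
        keep &= ~inside
    return points[keep]

pts = np.array([[1.0, 1.0], [3.1, 2.2], [0.4, 4.0]])
fan_box = (3.0, 2.0, 3.4, 2.5)  # e.g., a rotating fan on a table
print(remove_false_target_points(pts, [fan_box]))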
- Mapping dimensions and features of a living area improves performance of an automatic monitoring system by allowing the system to more easily detect specific activities by the occupants. For example, knowing a specific location of a bed in a living area allows the system to more accurately determine when an occupant has gone to bed.
- Mapping a living area prior to using radar to monitor occupants in the living area is conventionally performed manually, where someone physically measures the space and identifies features like doors, windows, specific furniture, etc.
- Mapping a living area automatically provides greater efficiency and reduces the likelihood of errors that could be introduced by a manual process.
- Automatically mapping features of a living area for subsequent monitoring of occupants using radar includes training a neural network to identify features of a living area based on LIDAR data corresponding to the living area using training data containing previously captured LIDAR data sets and corresponding living area features for the previously captured LIDAR data sets, manually obtaining LIDAR data that corresponds to the living area by physically scanning the living area, and applying the LIDAR data to the neural network to obtain features of the living area based on the LIDAR data.
- Automatically mapping features of a living area for subsequent monitoring of occupants using radar may also include presenting a user with a visual depiction of LIDAR data and prompting the user to confirm a detected one of the features based on the visual depiction of the LIDAR data presented to the user.
- Prompting the user may be performed in connection with training the neural network.
- Prompting the user may be performed in connection with refining the neural network after the neural network has been trained.
- Physically scanning the living area may be performed by a surveyor with a LIDAR device.
- The LIDAR device may be a standalone device or a hand-held device. The surveyor may move around the living area while pointing the hand-held LIDAR device in different directions to acquire data.
- The LIDAR device may be a smartphone with LIDAR capability or may be a 3-D camera.
- The mapped features may include dimensions of the living area, locations of doors and windows within the living area, and locations of false targets.
- The false targets may include window curtains and/or a fan.
- The LIDAR data may be two-dimensional data that has been subject to additional processing to approximate depth from two-dimensional images.
- a non-transitory computer readable medium contains software that, when executed, automatically maps features of a living area for subsequent monitoring of occupants using radar.
- the software includes executable code that trains a neural network to identify features of a living area based on LIDAR data corresponding to the living area using training data containing previously captured LIDAR data sets and corresponding living area features for the previously captured LIDAR data sets, the LIDAR data that corresponds to the living area being manually obtained by physically scanning the living area and executable code that applies the LIDAR data to the neural network to obtain features of the living area based on the LIDAR data.
- the software may also include executable code that presents a user with a visual depiction of LIDAR data and executable code that prompts the user to confirm a detected one of the features based on the visual depiction of the LIDAR data presented to the user.
- Prompting the user may be performed in connection with training the neural network.
- Prompting the user may be performed in connection with refining the neural network after the neural network has been trained.
- Physically scanning the living area may be performed by a surveyor with a LIDAR device.
- The LIDAR device may be a standalone device or a hand-held device. The surveyor may move around the living area while pointing the hand-held LIDAR device in different directions to acquire data.
- The LIDAR device may be a smartphone with LIDAR capability or may be a 3-D camera.
- The mapped features may include dimensions of the living area, locations of doors and windows within the living area, and locations of false targets.
- The false targets may include window curtains and/or a fan.
- The LIDAR data may be two-dimensional data that has been subject to additional processing to approximate depth from two-dimensional images.
- A system facilitates using radar to monitor one or more occupants of a living area by mapping the living area to provide data about the living area that facilitates subsequent analysis of data provided by one or more radar detectors.
- Mapping the living area includes determining a number of characteristics of the living area, such as dimensions of the living area, the location of any objects that could present difficulty to radar-based detection, any sources of constant or intermittent motion, such as a rotating fan or curtains moving due to an open window (i.e., false targets), and features of the living area that can facilitate analysis of movement and location of occupants of the living area.
- An algorithm, such as a machine learning algorithm, may use a learning period (e.g., one to several days or less) to automatically identify the location of false targets and provide bounding boxes around identified areas.
- The algorithm may look for characteristic patterns of motion and analyze living area usage, such as where a human enters and exits a room of the living area, thus identifying the doors of the living area.
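- One hedged way such a learning period could work (a sketch under assumed grid and persistence parameters, not necessarily the algorithm intended here) is to grid the room and flag cells that register motion in most observation windows:

```python
import numpy as np

def persistent_motion_cells(windows, room_size=(5.0, 5.0), cell=0.25,
                            min_fraction=0.5):
    """Flag grid cells that register motion in many observation windows.

    windows -- list of (N_i, 2) arrays of motion centroids, one per window.
    Cells active in at least min_fraction of windows are returned as
    (xmin, ymin, xmax, ymax) boxes; these are likely fans, curtains, etc.
    """
    nx, ny = int(room_size[0] / cell), int(room_size[1] / cell)
    hits = np.zeros((nx, ny))
    for pts in windows:
        seen = np.zeros((nx, ny), dtype=bool)
        for x, y in pts:
            seen[min(int(x / cell), nx - 1), min(int(y / cell), ny - 1)] = True
        hits += seen
    return [(i * cell, j * cell, (i + 1) * cell, (j + 1) * cell)
            for i, j in zip(*np.nonzero(hits / len(windows) >= min_fraction))]

# A fan at (3.1, 2.2) moves in every window; a walking person does not.
rng = np.random.default_rng(0)
windows = [np.array([[3.1, 2.2], rng.uniform(0, 5, 2)]) for _ in range(40)]
print(persistent_motion_cells(windows))
```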
- The system may also determine dimensions of the living area and may identify features of the living area based on shape and location, such as identifying a bed based on a detected shape.
- The system may use LIDAR or similar technology to collect data at the time of installation using a smart phone with a built-in LIDAR scanner, such as the iPhone Pro smart phone provided by Apple, Inc.
- The system may use a device other than a smart phone, including a handheld 3-D camera/LIDAR scanner and/or a standalone LIDAR scanner.
- Computer vision algorithms may use the LIDAR data to identify the location of false targets.
- The system that maps a living area may provide data that is used in connection with a system that uses radar to automatically track occupants within one or multiple rooms of a living area.
- The system that uses radar to automatically track occupants may detect a physical state of an occupant, such as walking, standing, sitting, lying down, falling, leaving a room, etc., and may capture and monitor vital signs of the occupant, such as breathing and heart rates.
- The system that uses radar to automatically track occupants may also identify customary routes and routines of an occupant and detect events such as falls or significant deviations from a customary routine, may use audio recording and automatic voice communications with occupants to confirm dangerous states, and may issue alerts and warnings to caretakers upon detection and confirmation of dangerous and risk-bearing states.
- FIGs. 1A and 1B are schematic illustrations of a user surveying a living area according to an embodiment of the system described herein.
- FIG. 2 is a schematic illustration of LIDAR data corresponding to a living area according to an embodiment of the system described herein.
- FIG. 3A is a schematic illustration of training a neural network to detect features of a living area based on LIDAR data corresponding to a living area according to an embodiment of the system described herein.
- FIG. 3B is a schematic illustration of using a neural network to detect features of a living area based on LIDAR data corresponding to a living area according to an embodiment of the system described herein.
- FIG. 3C is a schematic illustration of prompting a user for confirmation of a feature detected in a living area based on LIDAR data according to an embodiment of the system described herein.
- FIG. 4 is a schematic diagram of a user moving in a living area that contains false targets and other features according to an embodiment of the system described herein.
- FIG. 5 is a schematic illustration of using a neural network to detect a user moving in a living area that contains false targets and other features according to an embodiment of the system described herein.
- FIG. 6 is a flow diagram illustrating processing performed in connection with training a neural network to detect features of a living area based on LIDAR data corresponding to a living area according to an embodiment of the system described herein.
- FIG. 7 is a flow diagram illustrating processing performed in connection with using a neural network to detect features of a living area based on LIDAR data corresponding to a living area according to an embodiment of the system described herein.
- The system described herein provides a mechanism for automatically mapping a living area prior to continuous, noninvasive, and comprehensive monitoring of occupants of the living area using an ultra-wideband radar-based, internet-enabled tracking device and new AI-intensive geometric methods of processing point clouds for detecting occupant state, analyzing occupant behavior, identifying harmful states and conditions, and alerting caretakers when necessary.
- FIG. 1A is a schematic illustration of a living area 110 (a room in this case) having a radar-based tracking device 120 plugged into an AC outlet on a wall.
- A different radar-based tracking device 120' that is placed in a higher position on the wall (e.g., 1.0-1.4 meters from the floor) may be used instead of or in addition to the tracking device 120.
- The living area 110 has a door 130a and a window 130b and is furnished with a bed 140a, a table 140b, a couple of chairs 140c, 140d, and a bookshelf 140e.
- The tracking device 120 and/or the tracking device 120' may use radar to detect movement of one or more occupants in the living area 110.
- The living area 110 also includes potential false targets, such as curtain panels 150a, 150b near the window 130b, where the curtain panels 150a, 150b may move when the window 130b is open.
- The living area 110 also includes another potential false target, a rotating fan 160 on the table 140b. Movement of the curtain panels 150a, 150b and the fan 160 may be detected by the tracking device 120 even though the curtain panels 150a, 150b and the fan 160 are generally unrelated to movement of an occupant in the living area 110.
- A surveyor/user 170 in the living area 110 uses a hand-held LIDAR device 180 to map the living area 110 prior to the living area 110 being occupied and possibly even prior to the detector 120 being installed/activated.
- The hand-held LIDAR device 180 may be a smartphone made by Apple, Inc. with LIDAR capabilities or any other appropriate LIDAR device, such as a handheld LIDAR scanner.
- The device 180 may alternatively be a 3D camera, such as the Intel® RealSense™ Depth Camera D457 provided by Intel Corporation, or may be a 2D camera with video capability and additional processing that approximates depth based on 2D images from a camera or a smartphone without LIDAR capability. See, for example, the article at https://keras.io/examples/vision/depth_estimation/, which explains monocular depth estimation.
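- As one hedged illustration of approximating depth from 2D images (using the publicly available MiDaS model via torch.hub rather than the network from the cited article; this sketch assumes torch and timm are installed and that network access is available for the first model download):

```python
import numpy as np
import torch

# MiDaS is one publicly available monocular depth model; this is a sketch,
# not the method from the cited Keras article or from the patent itself.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

# Stand-in for an RGB frame from a 2D camera or smartphone without LIDAR.
frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
batch = transform(frame)  # resize/normalize and add a batch dimension

with torch.no_grad():
    depth = midas(batch)  # relative inverse-depth map, one value per pixel

print(depth.shape)
```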
- The surveyor 170 moves around the living area 110 while pointing the hand-held LIDAR device 180 in different directions to acquire data.
- The acquired data includes locations of possible false radar targets, such as the curtain panels 150a, 150b and the fan 160, locations and shapes of the door 130a and the window 130b, and locations and shapes of the bed 140a, the table 140b, the chairs 140c, 140d, and the bookshelf 140e.
- Any movement detected by the device 180 is likely to be a false radar target.
- A fan may be detected and identified as a potential false target based on the shape of the fan even though the fan may not be operational at the time of the survey.
- The curtain panels 150a, 150b may be detected and identified as a potential false target even though the curtain panels 150a, 150b may not be moving at the time of the survey.
- The system may also identify multipath reflections by comparing radar tracking signals with the survey signals and eliminating the multipath reflections that are present in the radar signals but not present in the survey signals.
- The survey signals indicate the presence of areas where multipath radar reflections are likely to show up and thus may be ignored.
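- A sketch of that comparison is shown below; it assumes an axis-aligned room extent taken from the survey (a real floor plan would be a polygon) and discards radar targets at or beyond the surveyed walls, where multipath ghosts tend to appear:

```python
def reject_multipath(radar_targets, room_bounds, wall_margin=0.3):
    """Discard radar targets that fall on or beyond surveyed walls.

    Multipath ghosts tend to appear at or behind walls; the LIDAR survey
    tells us where the walls actually are. room_bounds is a simple
    axis-aligned extent (xmin, ymin, xmax, ymax) for this sketch.
    """
    xmin, ymin, xmax, ymax = room_bounds
    return [(x, y) for x, y in radar_targets
            if xmin + wall_margin <= x <= xmax - wall_margin
            and ymin + wall_margin <= y <= ymax - wall_margin]

targets = [(2.0, 2.0), (4.95, 3.0), (6.2, 1.0)]  # last two hug/exceed a wall
print(reject_multipath(targets, (0.0, 0.0, 5.0, 4.0)))  # -> [(2.0, 2.0)]
```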
- The hand-held LIDAR device 180 may also detect entrances/exits for the living area 110 (e.g., the door 130a) based on criteria such as expected dimensions, location, and characteristics (e.g., presence of a doorknob, material, being unobstructed when the door is open, etc.).
- The location of entrances/exits for the living area 110 may be useful in connection with algorithms that detect movement of occupants, since the number of occupants may change in response to someone passing through an entrance/exit of the living area 110.
- The hand-held LIDAR device 180 may also be used to determine overall dimensions of the living area 110, which are useful in connection with subsequently detecting movement in the living area 110.
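- For instance, overall dimensions can be read off the extent of the surveyed point cloud; the percentile trim below is an assumption intended to reject stray returns:

```python
import numpy as np

def room_dimensions(lidar_points):
    """Estimate overall room extent from a surveyed LIDAR point cloud.

    lidar_points -- (N, 3) array of (x, y, z) samples in meters.
    A robust percentile span is used instead of min/max so a few stray
    returns (e.g., through a window) do not inflate the dimensions.
    """
    lo = np.percentile(lidar_points, 1, axis=0)
    hi = np.percentile(lidar_points, 99, axis=0)
    width, depth, height = hi - lo
    return width, depth, height

pts = np.random.default_rng(1).uniform([0, 0, 0], [4.2, 3.6, 2.4], size=(5000, 3))
print("room is roughly %.1f x %.1f m, ceiling %.1f m" % room_dimensions(pts))
```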
- Conventional AI techniques are used to identify and evaluate possible false radar targets and other features of the living area 110 based on the LIDAR data.
- The possible false radar targets and other features are then provided to algorithms that detect occupant point cloud density and the dimensions and positions of bounding boxes and centers of gravity of the point clouds for the occupant(s).
- The algorithms eliminate false radar target point cloud data from detection of point clouds for occupants.
- FIG. 1B is a schematic illustration showing the living area 110 and other components discussed above in connection with FIG. 1A.
- The surveyor/user 170 in the living area 110 uses a standalone LIDAR device 180a to map the living area 110.
- The standalone device 180a may be a mounted 3D camera or a 2D camera with video capability where the 2D data is processed to approximate depth, as described above.
- The standalone device 180a may be a dedicated LIDAR scanner/sensor (spinning or non-spinning), such as the Alpha Prime device provided by Velodyne Lidar, Inc. or the Pro3 device provided by Matterport, Inc.
- An image 200 illustrates a visual depiction of collected LIDAR data for a living area, such as the living area 110 shown in FIGs. 1A and 1B.
- The LIDAR data corresponds to both static and moving features of the living area.
- The LIDAR data includes a portion 202 that corresponds to a bed in the living area and a portion 204 that corresponds to a curtain in the living area. Note that although the curtain may have been moving when the LIDAR data was collected, the bed was almost certainly not moving at the time of data collection.
- The LIDAR data may be used to provide a map of features of the living area that is subsequently used to facilitate detecting movement and position of one or more occupants of the living area.
- A diagram 300 illustrates training a neural network 302 to analyze LIDAR data (such as the data illustrated in FIG. 2) to be able to construct a map of a living area based on LIDAR data.
- The neural network 302 may be any type of neural network technology with appropriate AI/neural network mechanisms, such as LSTM (long short-term memory) based recurrent neural network classifiers.
- Training data 304 is provided to the neural network 302 to train the neural network 302.
- The training data 304 may contain previously captured LIDAR data sets and resulting living area mappings corresponding to the previously captured LIDAR data sets.
- The training data set 304 is applied to the neural network 302 to cause the neural network to be trained and adapt to an operational configuration in a conventional fashion. In some cases, the neural network may be configured to look for planes.
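- As a non-neural point of comparison for the plane-seeking idea, classic RANSAC plane segmentation pulls a dominant plane such as a floor or wall out of a point cloud; the sketch below assumes the Open3D library and synthetic data:

```python
import numpy as np
import open3d as o3d

# Build a synthetic cloud: a flat "floor" plus scattered clutter above it.
rng = np.random.default_rng(2)
floor = np.column_stack([rng.uniform(0, 4, 2000), rng.uniform(0, 3, 2000),
                         np.zeros(2000)])
clutter = rng.uniform([0, 0, 0.2], [4, 3, 2.2], size=(500, 3))
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.vstack([floor, clutter]))

# RANSAC plane fit: returns plane coefficients (a, b, c, d) and inlier indices.
plane, inliers = pcd.segment_plane(distance_threshold=0.02,
                                   ransac_n=3, num_iterations=500)
print("plane ax+by+cz+d=0 with", np.round(plane, 2), "and", len(inliers), "inliers")
```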
- The neural network 302 is shown after being trained.
- The neural network 302 receives and processes LIDAR data 306 (like the LIDAR data illustrated in FIG. 2) to provide a data set 308 that maps features of a living area, including dimensions of the living area, locations of possible false targets, locations of different pieces of furniture, such as the bed, locations of doors and windows, etc.
- The training illustrated in connection with FIG. 3A causes the neural network 302 to adapt to be able to reliably process the LIDAR data 306 to provide the data set 308 that maps features of the living area.
- A diagram shows a screen that may be presented to a user (e.g., the surveyor 170) either during a training phase of the neural network 302 (illustrated in FIG. 3A) or used for refinement of the neural network 302 after the neural network 302 has been deployed (illustrated in FIG. 3B).
- The user is presented with a visual depiction corresponding to acquired LIDAR data and is prompted with a dialog box 312 to help identify a feature that the neural network 302 has identified as a bed.
- The image is similar to the image shown in connection with FIG. 2 and discussed above.
- The user can make the determination presented by the dialog box either by viewing the LIDAR data image on the screen or from actual knowledge of the layout of the living area.
- The surveyor 170 can choose a yes button 314 or a no button 316 based on viewing the physical item in the living area.
- The neural network 302 uses the answer provided by the user to further adapt the neural network 302 to be able to provide better results.
- The neural network may be configured to provide information regarding relationships between features and to provide information about how particular features were identified.
- The living area 110 is shown with an occupant walking, with denser point clouds and relatively larger bounding boxes 410, 420.
- The occupant is also shown standing near the bookshelf 140e with a bounding box 430; the larger portion of the body of the occupant does not generate points for a point cloud due to being in a static position, while the chest portion of the occupant generates points for a point cloud due to breathing by the occupant.
- The occupant is also shown sitting on the chair 140c, where only the chest portion 440 and the actual point cloud and bounding box of the chest portion 440 are shown.
- The occupant is also shown lying down on the bed 140a, which is similar to standing, but only the chest portion 450 generates points within the body 460.
- FIG. 4 also shows bounding boxes 470, 480, that are not related to the occupant or to movement of the occupant. Instead, the bounding box 470 relates to movement of the curtain panels 150a, 150b and the bounding box 480 relates to the rotating fan 160.
- The system described herein uses LIDAR data (or similar) obtained by the surveyor 170, discussed above, that is analyzed by the neural network 302 to construct the bounding boxes 470, 480. Subsequently, in connection with using radar to detect the occupant, the bounding boxes 470, 480 are not included as possible movement by the occupant and, instead, are eliminated as possible occupant movement. Thus, the system does not incorrectly classify movement of the curtain panels 150a, 150b or of the fan 160 as movement by the occupant.
- Errors may also occur if the tracking device 120 detects movement that is outside the living area by, for example, detecting movement in an adjacent room. The movement may be due to a person other than the occupant being monitored or to any other moving item. In many cases, movement outside the living area is not useful for monitoring the occupant of the living area.
- The system described herein is useful for filtering out (ignoring) movement outside the living area by restricting the data that is processed to motion that is sensed to occur only within the boundaries of the living area, which may be determined from the LIDAR data.
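- A sketch of that boundary filter for an arbitrary, possibly non-rectangular floor plan uses a standard ray-casting point-in-polygon test; the L-shaped outline below is illustrative:

```python
def inside_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test for an arbitrary room outline.

    polygon -- list of (x, y) vertices from the LIDAR-derived floor plan.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# L-shaped room; a detection in an adjacent room at (5.5, 1.0) is ignored.
room = [(0, 0), (5, 0), (5, 2), (3, 2), (3, 4), (0, 4)]
for det in [(1.0, 3.0), (4.0, 1.0), (5.5, 1.0)]:
    print(det, "->", "keep" if inside_polygon(*det, room) else "ignore")
```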
- A diagram 500 illustrates a neural network 502 (different from the neural network 302, discussed above) that determines a state of an occupant, such as walking, standing, sitting, lying down, turning in bed, or falling.
- Prior to deployment, the neural network 502 is trained in a conventional manner using previously captured data and known states. Training material for the neural network 502 may include parametric representations of characteristic bounding boxes, point densities, and positions of centers of gravity. Machine learning of the neural network 502 may include learning characteristics and parameters of bounding boxes of point clouds captured for transition procedures and recognizing states of the occupant by checking transitional procedures leading to the new states. Alternatively, transitional procedures may be verified via direct geometric computations using, for example, backtracking of recorded user trajectories leading to a certain condition.
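- A minimal sketch of such an LSTM classifier in PyTorch follows; the per-frame feature set and the state list are assumptions for illustration, not the patent's exact parameterization:

```python
import torch
import torch.nn as nn

class StateLSTM(nn.Module):
    """LSTM classifier over per-frame geometric features.

    Input per frame (an illustrative feature set, not the patent's own):
    [point_count, bbox_w, bbox_h, bbox_d, cog_x, cog_y, cog_z]
    Output: logits over occupant states.
    """
    STATES = ["walking", "standing", "sitting", "lying", "falling"]

    def __init__(self, n_features=7, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(self.STATES))

    def forward(self, frames):           # frames: (batch, time, n_features)
        _, (h_n, _) = self.lstm(frames)  # h_n: (1, batch, hidden)
        return self.head(h_n[-1])        # classify from the final hidden state

model = StateLSTM()
window = torch.randn(1, 30, 7)        # 30 frames of synthetic features
print(model(window).softmax(dim=-1))  # per-state probabilities (untrained)
```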
- FIG. 5 shows the neural network 502 after training and being deployed to receive radar data input from, for example, the detector 120, described above, and to receive data from the data set 308, discussed above, that maps features of the living area.
- The relationship between a state of the occupant and the data set 308 provides information that facilitates determination by the neural network 502.
- The neural network 502 can more easily determine when the occupant is lying down by using information from the data set 308 indicating a location of the bed in the living area.
- The neural network 502 can more easily determine that movement corresponding to the bounding box 480 of FIG. 4 is not related to movement of the occupant because the data set 308 includes information indicating that the bounding box 480 corresponds to movement of the fan 160 and is thus a false target.
- Processing for the neural network 302 and/or the neural network 502 is performed locally by a processor in proximity to the living area.
- Data from the hand-held LIDAR device 180 and/or data from the tracking device 120 may be transmitted to a processor located in one or more remote locations (possibly using cloud computing) that handles some or all of the processing described herein.
- Data from different, possibly unrelated, living areas may be processed together.
- Neural network operational parameters from one living area may be used to adapt parameters for a neural network for a different living area.
- A flow diagram 600 illustrates processing performed in connection with training a neural network to detect features of a living area based on LIDAR data corresponding to a living area.
- Processing begins at a first step 602 where the neural network receives training data.
- The training data may be conventional training data that contains previously captured LIDAR data sets and contains living area features corresponding to the previously captured LIDAR data sets. Training the neural network causes the neural network to adapt to be more predictive (accurate) of outcomes based on input.
- Following the step 602 is a step 604 where the training data is used to train the neural network using conventional neural network training mechanisms.
- Following the step 604 is a step 606 where the neural network is evaluated.
- The evaluation at the step 606 may take any appropriate form. For example, it is possible to apply LIDAR data from a known living area to the neural network and then compare the output from the neural network (features of the living area) with empirically measured features of the known living area. Evaluation at the step 606 may then determine the difference between the empirically measured features and the output of the neural network.
- Following the step 606 is a test step 608 where it is determined if training the neural network is complete based on the evaluation performed at the step 606. The test at the step 608 may, for example, determine if the difference between the output of the neural network and the empirical data is within a certain percentage. Of course, any appropriate criteria may be used at the step 608.
- If it is determined at the test step 608 that the neural network training is complete, then processing terminates and the neural network is ready to be deployed. Otherwise, processing proceeds from the test step 608 back to the step 604, discussed above, for another iteration to continue training and improving the neural network.
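- The train/evaluate/test loop of the flow diagram 600 might be sketched as follows; the relative-error completion criterion and the tiny synthetic LIDAR-to-features mapping are assumptions, since the exact test is left open:

```python
import torch
import torch.nn as nn

def train_until_close(model, train_x, train_y, eval_x, eval_y,
                      tolerance=0.05, max_rounds=100):
    """Iterate train -> evaluate -> test-for-completion, as in FIG. 6.

    Completion criterion (an assumption): mean relative error between
    predicted and empirically measured features drops below tolerance.
    """
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for round_ in range(max_rounds):
        model.train()                    # step 604: train on the data
        opt.zero_grad()
        loss = loss_fn(model(train_x), train_y)
        loss.backward()
        opt.step()
        model.eval()                     # step 606: evaluate on a known area
        with torch.no_grad():
            rel_err = ((model(eval_x) - eval_y).abs() /
                       (eval_y.abs() + 1e-6)).mean().item()
        if rel_err < tolerance:          # step 608: training complete?
            return round_, rel_err
    return max_rounds, rel_err

net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x = torch.randn(64, 16)
y = x @ torch.randn(16, 4)               # synthetic "LIDAR -> features" mapping
print(train_until_close(net, x[:48], y[:48], x[48:], y[48:]))
```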
- A flow diagram 700 illustrates processing performed in connection with using a neural network to detect features of a living area based on LIDAR data corresponding to a living area.
- Processing begins at a first step 702 where the LIDAR data is acquired, as discussed in detail elsewhere herein. See, for example, FIGs. 1A and 1B and the corresponding discussion.
- Following the step 702 is a step 704 where the data acquired at the step 702 is processed to determine features of a living area (dimensions, false targets, etc.), as discussed in detail elsewhere herein. See, for example, FIG. 5 and the corresponding discussion.
- Following the step 704, processing is complete.
- Mobile computers and tablets may use an operating system selected from the group consisting of Mac OS, Windows OS, Linux OS, and Chrome OS.
- Software implementations of the system described herein may include executable code that is stored in a computer readable medium and executed by one or more processors.
- The computer readable medium may be non-transitory and include a computer hard drive, ROM, RAM, flash memory, portable computer storage media such as a CD-ROM, a DVD-ROM, a flash drive, an SD card and/or other drive with, for example, a universal serial bus (USB) interface, and/or any other appropriate tangible or non-transitory computer readable medium or computer memory on which executable code may be stored and executed by a processor.
- The software may be bundled (pre-loaded), installed from an app store, or downloaded from a location of a network operator.
- The system described herein may be used in connection with any appropriate operating system.
Abstract
The invention concerns automatically mapping features of a living area for subsequent monitoring of occupants using radar, which includes training a neural network to identify features of a living area based on LIDAR data corresponding to the living area using training data containing previously captured LIDAR data sets and corresponding living area features for the previously captured LIDAR data sets, manually obtaining LIDAR data that corresponds to the living area by physically scanning the living area, and applying the LIDAR data to the neural network to obtain features of the living area. Physically scanning the living area may be performed by a surveyor with a LIDAR device. The LIDAR device may be a standalone device or a hand-held device. The surveyor may move around the living area while pointing the hand-held LIDAR device in different directions to acquire data. The LIDAR device may be a smartphone with LIDAR capability or a 3D camera.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202263422169P | 2022-11-03 | 2022-11-03 | |
US63/422,169 | 2022-11-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024097300A1 | 2024-05-10 |
Family
ID=90931406
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2023/036608 | Mapping a living area using LIDAR | 2022-11-03 | 2023-11-01 |
Country Status (1)
Country | Link |
---|---|
WO | WO2024097300A1 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190050732A1 (en) * | 2018-08-28 | 2019-02-14 | Intel Corporation | Dynamic responsiveness prediction |
US20200302681A1 (en) * | 2019-03-18 | 2020-09-24 | Geomagical Labs, Inc. | Virtual interaction with three-dimensional indoor room imagery |
US20210225090A1 (en) * | 2020-01-17 | 2021-07-22 | Apple Inc | Floorplan generation based on room scanning |
US20220268916A1 (en) * | 2021-02-25 | 2022-08-25 | Sumit Kumar Nagpal | Technologies for tracking objects within defined areas |
US20220331028A1 (en) * | 2019-08-30 | 2022-10-20 | Metralabs Gmbh Neue Technologien Und Systeme | System for Capturing Movement Patterns and/or Vital Signs of a Person |
US11481918B1 (en) * | 2017-07-27 | 2022-10-25 | AI Incorporated | Method and apparatus for combining data to construct a floor plan |
US20230035601A1 (en) * | 2021-07-28 | 2023-02-02 | OPAL AI Inc. | Floorplan Generation System And Methods Of Use |
- 2023-11-01: PCT/US2023/036608 filed, published as WO2024097300A1 (status unknown)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23886685; Country of ref document: EP; Kind code of ref document: A1 |