WO2016139203A1 - Multi camera load estimation - Google Patents

Multi camera load estimation

Info

Publication number
WO2016139203A1
Authority
WO
WIPO (PCT)
Prior art keywords
thermal
human
weight
spatial
confidence score
Application number
PCT/EP2016/054319
Other languages
French (fr)
Inventor
Michael Palazzola
Jie Xu
Stephen Allen
Peter Feldhusen
Frank Dudde
Alan Parker
Original Assignee
Thyssenkrupp Elevator Ag
Priority date
March 4, 2015
Application filed by Thyssenkrupp Elevator Ag
Priority to CN201680013625.9A (published as CN107428495B)
Publication of WO2016139203A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66B: ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 1/00: Control systems of elevators in general
    • B66B 1/34: Details, e.g. call counting devices, data transmission from car to control system, devices giving information to the control system
    • B66B 1/3476: Load weighing or car passenger counting devices

Definitions

  • In the process shown in FIG. 5, where no depth confidence score, thermal confidence score, or other factor is available for an object, a final classification of unknown may be assigned.
  • Alternatively, an object of unknown classification could be assigned a configurable default classification. For example, if a non-human object weighs more than a human object of the same size, a default classification of non-human could be configured so that total loads will be overestimated rather than underestimated.
  • In other embodiments, an unknown classification could cause an object to have its weight calculated as both a human object and a non-human object, with a final weight determined by an average of the two.
  • Once determined, the final classifications can be stored (516) as part of the object data structure to a database, cache, memory, or other storage medium and made available for further use by the image processor and elevator controller.
  • In some embodiments, no separate storage (516) of data would take place other than what would inherently occur as part of the normal operation of the system.
  • Other factors that could influence confidence scores beyond thermal imaging and depth imaging could include, for example, a sound detecting device that can detect human breathing or heart rate, an RFID reader that determines the total number of humans in the elevator by scanning elevator access cards, a motion sensing door counter that counts the number of humans entering an elevator as they enter, an elevator controller which reports the number of floors selected for disembarkation, or other devices or data which could provide an indication of the number of humans on an elevator car.
  • Determining a final classification (514) from one or more confidence scores may be performed in a variety of ways.
  • In some embodiments, each confidence score could be equally weighted and combined or compared in order to classify an object. For example, if an object's depth confidence score indicated 50% confidence that it was non-human, a thermal confidence score indicated 55% confidence that it was human, and no other information indicating whether that object was human or non-human was available, the object's final classification could be determined (514) as human based on the higher thermal confidence.
  • Alternatively, a weighted combination of thermal confidence and depth confidence could be configured if the confidence from one device is valued above another. For example, the thermal confidence might be considered twice as valuable as the depth confidence due to the accuracy and simplicity of the results of the thermal image, meaning that in a final comparison the thermal confidence would be weighted to 60% confidence that the object was human, resulting in a final classification (514) that the object is human. A minimal sketch of one such combination appears below.
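The weighted comparison described above can be captured in a few lines. The following is a minimal sketch assuming each score is expressed as the probability that an object is human; the function name and the normalization scheme are illustrative assumptions, not part of the disclosure.

```python
def combine_confidences(scores, weights=None):
    """Combine per-sensor confidence scores that an object is human.

    scores: dict mapping factor name -> confidence in [0.0, 1.0] that the
    object is human (a 50% non-human depth score is expressed as 0.5).
    weights: optional dict mapping factor name -> relative weight.
    Returns a (classification, combined confidence) tuple.
    """
    weights = weights or {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    combined = sum(scores[name] * weights[name] for name in scores) / total
    return ("human" if combined >= 0.5 else "non-human", combined)

# Equal weighting, as in the first example above: depth sees 50% non-human
# (0.5 human), thermal sees 55% human, so the final classification is human.
print(combine_confidences({"depth": 0.5, "thermal": 0.55}))

# Thermal valued twice as highly as depth, as in the second example above.
print(combine_confidences({"depth": 0.5, "thermal": 0.55},
                          weights={"depth": 1.0, "thermal": 2.0}))
```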
  • In some embodiments, a third factor, such as data derived from a door counter that suggests a maximum number of occupants, could be used in combination with thermal and/or depth confidence to determine a final classification (514).
  • In such embodiments, objects could be considered as a group rather than in isolation; if the total number of objects classified as human exceeds the number of occupants indicated by the door counter, the human confidence scores could be weighted lower to reflect a loss of confidence in their accuracy based upon the door counter data.
  • Alternatively, a third factor could be provided by one or more of the thermal camera (202) and the depth camera (204). For example, a thermal image and depth image might initially classify an object as being human, while the depth image also indicates that the object is of a height and shape suggesting a service animal; this factor could influence the confidence score such that the object is more accurately classified as a non-human object.
  • Other methods for determining a final classification based upon one or more confidence factors will be apparent in light of this disclosure.
  • FIG. 6 shows a flowchart of a set of steps that a system could perform to determine the weight of human objects.
  • If there are human objects to weigh (600), the depth image for an object could be examined in order to determine the total volume (602) of the object.
  • Next, the thermal image could be compared to the depth image (604) to determine whether any reduction of volume may be necessary.
  • If the comparison indicates that the human is wearing a bulky garment, the calculated volume (602) of the human could be reduced (608) by a factor to account for the difference in volume added by the garment.
  • If the comparison indicates that the human is carrying an object, the calculated volume (602) of the human could be reduced (612) by a factor to account for the difference in volume added by the carried object.
  • Carried objects could have a weight determination made as part of the non-human object weight calculation steps shown in FIG. 7 and described below, or could be assigned a static weight, representative of the average weight of carried bags, to be added to the carrier's calculated weight.
  • After any reductions, the weight of the human can be calculated based upon volume (614). Once all human objects have been weighed (600), the weight of the human objects could be totaled and stored in a database, memory, cache, or other storage medium (616).
  • Determining weight from volume (614) could be performed in a variety of ways.
  • In some embodiments, a static unit of mass per unit of volume could be provided based upon testing or available data in order to estimate weight.
  • For example, the system could be configured to treat each cubic centimeter of human volume as weighing 1 gram, such that a human body with a total volume of 60,000 cubic centimeters, at 1 gram per cubic centimeter, would be calculated to weigh 60 kilograms, or about 132 pounds.
  • In other embodiments, different values for unit of mass per unit of volume could be provided for different areas of the human body, which could result in more accurate final measurements.
  • One cubic centimeter of leg volume could, for example, be calculated as 1.1 grams, since the legs are likely to contain more high-density muscle and bone as compared to the arms or torso.
  • In such embodiments, the weight for each body segment could be separately calculated and added to determine the total weight from volume (614), as in the sketch below.
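The per-segment variant described above amounts to a weighted sum over segment volumes. The following is a minimal sketch; the leg density (1.1 g/cc) and whole-body default (1.0 g/cc) come from the examples above, while the remaining densities and the function name are illustrative assumptions.

```python
# Density estimates in grams per cubic centimeter. Only the leg value (1.1)
# and the whole-body default (1.0) come from the examples above; the rest
# are illustrative placeholders.
SEGMENT_DENSITY_G_PER_CC = {"head": 1.0, "torso": 1.0, "arms": 1.0, "legs": 1.1}

def weight_from_volume_kg(segment_volumes_cc):
    """Estimate body weight in kilograms from per-segment volumes in cc."""
    grams = sum(SEGMENT_DENSITY_G_PER_CC[name] * volume
                for name, volume in segment_volumes_cc.items())
    return grams / 1000.0

# Whole-body shortcut from the text: 60,000 cc at 1 g/cc is 60 kg (~132 lb).
print(weight_from_volume_kg({"torso": 60_000}))  # -> 60.0
# Per-segment variant with denser legs.
print(weight_from_volume_kg({"head": 4_000, "torso": 30_000,
                             "arms": 8_000, "legs": 18_000}))  # -> 61.8
```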
  • In still other embodiments, a multiple linear regression model for soft biometric estimation could be used.
  • In such embodiments, a head, torso, leg, and arm volume model could be built based upon depth imaging. Outliers could be filtered out by using a moving median or random sample consensus ("RANSAC") on the point clouds. The length and circumference of the head, torso, legs, and arms could be determined from the volume model and used in an equation to determine weight. See Table 2 below for an example of such an equation. Other methods of calculating body weight based upon the depth image will be apparent in light of this disclosure.
  • Table 2: Example equation for determining body weight
  • [0032] Turning now to FIG. 7, that figure shows a flowchart of a set of steps that a system could perform to determine the weight of non-human objects. If there are non-human objects to weigh (700), the volume of an object is calculated (702) based upon the depth image and using the object recognition software. After volume has been calculated (702), a weight multiplier can be determined for the object that can be used in order to calculate the weight of the object. A weight multiplier may be determined based upon information known about the object, such as whether it is being carried or suspended, whether it is resting on the floor or on a cart, or other factors. For example, depth information for an object could be examined to determine if the object is being carried or suspended above the ground (704).
  • If the depth information indicates that the object is being carried or suspended above the ground (704), a mass multiplier could be selected (706) representing the likely mass-per-volume characteristics of carried objects. In this manner, a bag being carried by an occupant could be assigned a low mass per volume, since it is unlikely that an extremely heavy object would be suspended above the ground by an occupant. Alternatively, if an object is placed on the ground (708), this may indicate that the object is on a cart, dolly, or other wheeled device, or is too heavy to easily suspend above the ground. In such a case, a ground weight multiplier could be selected (710) for the object, giving it a fairly high mass-per-volume characteristic.
  • Otherwise, a standard weight multiplier could be selected (712), giving the object a moderate mass-per-volume characteristic representative of the average mass per volume of objects likely to be taken on elevators, such as books, papers, computers, liquids, or foods, which may vary depending upon the particular location and intended use of an elevator car.
  • Once a multiplier is selected, the weight of the object can be determined (714) by using the volume of the object, based upon the depth image and provided by the object recognition software, and the selected mass-per-volume multiplier for the object. A minimal sketch of this selection appears below.
  • Once all non-human objects have been weighed, the non-human weights can be stored to a database, memory, cache, or another storage medium (716).
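The three-tier multiplier selection of FIG. 7 reduces to a small conditional. The following is a minimal sketch; the numeric multipliers and the function name are illustrative assumptions, since the disclosure does not specify values.

```python
# Mass-per-volume multipliers in grams per cubic centimeter; the three tiers
# follow the text, but the numbers are illustrative assumptions.
CARRIED_G_PER_CC = 0.3   # low: carried or suspended objects (706)
GROUND_G_PER_CC = 1.5    # high: resting on the floor or on a cart (710)
STANDARD_G_PER_CC = 0.8  # moderate: everything else (712)

def nonhuman_weight_kg(volume_cc, is_carried, is_on_ground):
    """Estimate a non-human object's weight from its volume and placement."""
    if is_carried:
        multiplier = CARRIED_G_PER_CC
    elif is_on_ground:
        multiplier = GROUND_G_PER_CC
    else:
        multiplier = STANDARD_G_PER_CC
    return volume_cc * multiplier / 1000.0  # weight determination (714)

# A 20,000 cc bag carried by an occupant is assumed to be light.
print(nonhuman_weight_kg(20_000, is_carried=True, is_on_ground=False))  # 6.0
```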
  • In some embodiments, data from the depth (102), thermal (104), and final (106) classifications may be preserved and integrated into future classifications so that the classification process may be adaptively improved over time.
  • Such adaptive improvements could be implemented, for example, by way of an artificial intelligence structure, such as a neural network, by way of a data structure, such as an object comparison and lookup tree, or through similar means.
  • A neural network adaptive classification could track a plurality of inputs and outputs from the classification process and organize them so that future output data can be more efficiently generated by examining future input data and analyzing it based upon similarities to historical input data.
  • A data structure adaptive classification could store a plurality of input data in a manner that allows for rapid lookup of its resultant classification, which could be used to quickly classify objects in the case of an exact match, or which could be used as an additional confidence factor during classification. A minimal sketch of such a lookup structure appears below.
  • The exact implementation of adaptive classification may depend upon the desired result, as some implementations may result in increased speed of classification, while others may result in increased accuracy of classification. Such variations in implementation will be apparent in light of this disclosure.
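The lookup-structure variant described above could be as simple as a dictionary keyed by quantized observations. The following is a minimal sketch; the feature choice, the quantization step, and the class names are illustrative assumptions.

```python
def quantize(features, decimals=1):
    """Truncate features so that near-identical observations collide."""
    scale = 10 ** decimals
    return tuple(int(f * scale) for f in features)

class AdaptiveClassifier:
    """Caches past classification results for rapid exact-match lookup."""

    def __init__(self):
        self.history = {}  # quantized features -> (classification, confidence)

    def record(self, features, classification, confidence):
        self.history[quantize(features)] = (classification, confidence)

    def lookup(self, features):
        """Return a prior result for a matching observation, or None."""
        return self.history.get(quantize(features))

clf = AdaptiveClassifier()
clf.record((1.75, 0.36, 36.9), "human", 0.95)  # height m, volume m^3, temp C
print(clf.lookup((1.74, 0.36, 36.9)))          # collides -> ("human", 0.95)
```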
  • Finally, the combined weight of the human and non-human objects (108, 110) can be determined and communicated to the elevator dispatch controller (112) for appropriate action. Actions taken based upon reported weight may vary by embodiment.
  • For example, an elevator car that is operating near its maximum load weight could be placed into a floor bypass mode, which ignores further floor calls until the load weight is reduced.
  • Conversely, an elevator car that is operating at a low load weight could be prioritized to answer floor calls.
  • Where depth and thermal cameras (202, 204) are placed in a lobby outside an elevator hoistway, and it is determined that the load weight of individuals waiting at a floor stop is near the maximum load weight of an elevator car, an empty elevator car could be prioritized to address that floor call.
  • Where the waiting load weight could instead be accommodated by a partially loaded elevator car, the partially loaded elevator car could be prioritized for dispatch to that floor stop.
  • As another example, an elevator car that is determined to be empty based upon thermal and depth imaging could have all current floor stops canceled, to prevent unnecessary floor stops. Other variations on actions taken by an elevator controller or dispatch controller will be apparent in light of this disclosure. A minimal dispatch sketch appears below.
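The dispatch behaviors described above can be sketched as a simple car-selection rule. The following is a minimal sketch; the car fields, the fit test, and the preference for lightly loaded cars are illustrative assumptions standing in for a full dispatch algorithm.

```python
def choose_car(cars, waiting_weight_kg):
    """Pick a car for a floor call given the estimated waiting load weight.

    cars: list of dicts with keys "id", "load_kg", "max_kg", and "bypass"
    (True when the car is near maximum load and ignoring further calls).
    """
    candidates = []
    for car in cars:
        if car["bypass"]:
            continue  # floor bypass mode: ignore further floor calls
        if car["max_kg"] - car["load_kg"] >= waiting_weight_kg:
            candidates.append((car["load_kg"], car["id"]))
    # Prefer lightly loaded cars; an empty car naturally wins when the
    # waiting load is near a full car's capacity.
    return min(candidates)[1] if candidates else None

cars = [
    {"id": "A", "load_kg": 950, "max_kg": 1000, "bypass": True},
    {"id": "B", "load_kg": 300, "max_kg": 1000, "bypass": False},
    {"id": "C", "load_kg": 0, "max_kg": 1000, "bypass": False},
]
print(choose_car(cars, waiting_weight_kg=900))  # -> "C": only the empty car fits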
  • Embodiment 1: A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; determine a classification for each object of the set of discrete objects based upon the set of image data; estimate a weight for each object of the set of discrete objects based upon the classification; and provide a total weight estimate, based upon the weight for each object of the set of discrete objects, to the elevator controller; and wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.
  • Embodiment 2: The system of Embodiment 1, wherein the elevator controller and the image processor share a processor and a memory.
  • Embodiment 3: The system of any of Embodiments 1-2, further comprising a passenger waiting area, wherein the elevator car is configured to travel to the passenger waiting area, and wherein the first camera location and the second camera location are located at the passenger waiting area.
  • Embodiment 4: The system of Embodiment 3, wherein the elevator controller is configured to: determine an additional occupancy weight based upon a maximum occupancy weight that is configured for the elevator car and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, prioritize sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, and when the current occupancy weight indicates that the elevator car is empty, prioritize sending the elevator car to the passenger waiting area.
  • Embodiment 5: The system of Embodiment 4, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.
  • Embodiment 6: The system of Embodiment 1, wherein the first camera location and the second camera location are within the elevator car, and wherein the elevator controller is configured to: when the total weight estimate indicates that there are no passengers in the elevator car, cancel any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, place the elevator car into a floor bypass mode.
  • Embodiment 7: The system of any of Embodiments 1-6, wherein the image processor is further configured to: map the set of thermal data to the set of spatial data to create a thermal spatial overlay; and, for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.
  • Embodiment 8: The system of Embodiment 7, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determine a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determine a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.
  • Embodiment 9: The system of Embodiment 8, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a tertiary classification confidence score based upon one or more of: information from a sound detecting device indicating the presence of passengers; information from an RFID reader indicating the presence of passenger key cards; information from a motion sensing door counter indicating the number of passengers that entered the elevator car; and information from the elevator controller indicating the number of floors selected for disembarkation; and determine the final classification confidence score based upon the thermal classification confidence score, the spatial classification confidence score, and the tertiary classification confidence score.
  • Embodiment 10: The system of any of Embodiments 7-9, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects that is classified as a human: determine a volume of the object based upon the subset of the set of spatial data associated with the object; determine a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reduce the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determine the weight based upon the volume of the object.
  • Embodiment 11: The system of any of Embodiments 7-10, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects that is classified as a non-human: determine a volume of the non-human object based upon the subset of the set of spatial data associated with the non-human object; determine if the non-human object is being carried by a human object based upon the subset of the set of spatial data associated with the non-human object and the subset of the set of thermal data associated with the non-human object; if the non-human object is being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a carried object weight calculation; and if the non-human object is not being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a heavy object weight calculation.
  • Embodiment 12: A method comprising the steps of: at an image processor, receiving a set of image data from a depth camera at a first camera location and a thermal camera at a second camera location, wherein the set of image data comprises a set of thermal data and a set of spatial data; identifying a set of discrete objects based upon the set of image data; determining a classification for each object of the set of discrete objects based upon the set of image data; estimating a weight for each object of the set of discrete objects based upon the classification; and providing a total weight estimate, based upon the weight for each object of the set of discrete objects, to an elevator controller; wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.
  • Embodiment 13: The method of Embodiment 12, wherein the one or more elevator cars are configured to travel to a passenger waiting area, and wherein the first camera location and the second camera location are located at the passenger waiting area.
  • Embodiment 14: The method of Embodiment 13, further comprising the steps of: determining an additional occupancy weight based upon a maximum occupancy weight that is configured for an elevator car of the one or more elevator cars and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, prioritizing sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, prioritizing sending the elevator car to the passenger waiting area.
  • Embodiment 15: The method of Embodiment 14, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.
  • Embodiment 16: The method of Embodiment 12, wherein the first camera location and the second camera location are within the elevator car, further comprising the steps of: when the total weight estimate indicates that there are no passengers in the elevator car, canceling any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, placing the elevator car into a floor bypass mode.
  • Embodiment 17: The method of any of Embodiments 12-16, further comprising the steps of: at the image processor, mapping the set of thermal data to the set of spatial data to create a thermal spatial overlay; and, for each object of the set of discrete objects, identifying a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.
  • Embodiment 18: The method of any of Embodiments 12-17, wherein the step of determining the classification for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects, of: at the image processor, determining a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determining a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determining a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.
  • Embodiment 19: The method of any of Embodiments 17-18, wherein the step of estimating the weight for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects, of: at the image processor, determining a volume of the object based upon the subset of the set of spatial data associated with the object; determining a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reducing the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determining the weight based upon the volume of the object.
  • Embodiment 20: A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein a depth camera field of view and a thermal camera field of view overlap; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; map the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data; determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Indicating And Signalling Devices For Elevators (AREA)
  • Elevator Control (AREA)
  • Cage And Drive Apparatuses For Elevators (AREA)

Abstract

Imaging data captured by 3D depth cameras and thermal cameras can be combined to identify objects and determine whether they are human or non-human. The total weight of human and non-human objects can be estimated based upon volume analysis and reported to an elevator dispatch controller to allow for more efficient dispatch of elevator cars.

Description

MULTI CAMERA LOAD ESTIMATION
PRIORITY
[0001] This application claims priority to U.S. Provisional Patent Application Serial No. 62/128,187, filed March 4, 2015, entitled "Multi Camera Load Estimation," the disclosure of which is incorporated by reference herein.
FIELD
[0002] The disclosed technology pertains to a system for estimating the weight of objects and passengers occupying or entering an elevator car based upon a combination of depth and thermal imaging.
BACKGROUND
[0003] Determining the weight of occupants in an elevator car is important for the efficient and safe operation of an elevator system. When load weight is known, elevator cars can be directed to offload passengers before accepting more passengers when operating at or near load weight limits. By ensuring that elevators are not overloaded, the safety and comfort of passengers can be protected and the longevity of the elevator system's mechanical components can be increased.
[0004] Load weight can be determined in a variety of ways. Strain gauges can be placed on the elevator car itself or on structures related to the elevator car in order to measure forces applied to the car by occupants. Strain gauges can be installed on ropes supporting the car in order to measure forces within the car. However, strain gauges need to be carefully calibrated and maintained in order to provide an accurate indication of load weight. Installation and maintenance of strain gauges can be difficult due to the lack of space in and around an elevator car within a hoistway and to the inconvenience of taking elevator cars offline in order to perform maintenance and installation. Even when correctly calibrated, strain gauges placed on structural portions of the elevator can provide inaccurate measurements when occupants within the elevator are in motion or unevenly distributed within the car. Similarly, strain gauges installed on a rope can provide inaccurate weight measurements due to rope vibration and sway.
[0005] What is needed, therefore, is an improved system for determining the weight of passengers and objects within an elevator car and their relative position in the elevator car.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The drawings and detailed description that follow are intended to be merely illustrative and are not intended to limit the scope of the invention as contemplated by the inventors.
[0007] FIG. 1 is a flowchart of a set of high-level steps that a system could perform to determine load weight using cameras.
[0008] FIG. 2 is a front perspective view of an exemplary camera placement within an elevator car.
[0009] FIG. 3 is a flowchart of a set of steps that a system could perform to classify objects based upon depth imaging.
[0010] FIG. 4 is a flowchart of a set of steps that a system could perform to classify objects based upon thermal imaging.
[0011] FIG. 5 is a flowchart of a set of steps that a system could perform to determine a final classification for objects.
[0012] FIG. 6 is a flowchart of a set of steps that a system could perform to determine the weight of human objects.
[0013] FIG. 7 is a flowchart of a set of steps that a system could perform to determine the weight of non-human objects.
[0014] FIG. 8 is a top-down perspective view illustrating a common field of view between two nearby cameras.
DETAILED DESCRIPTION
[0015] The inventors have conceived of novel technology that, for the purpose of illustration, is disclosed herein as applied in the context of elevator load weight determination and determination of placement or position in the elevator car. While the disclosed applications of the inventors' technology satisfy a long-felt but unmet need in the art of elevator load weight determination, it should be understood that the inventors' technology is not limited to being implemented in the precise manners set forth herein, but could be implemented in other manners without undue experimentation by those of ordinary skill in the art in light of this disclosure. Accordingly, the examples set forth herein should be understood as being illustrative only, and should not be treated as limiting.
[0016] Turning now to the figures, FIG. 1 shows a flowchart of a set of high-level steps that a system could perform to determine load weight using cameras. One or more cameras are installed and initialized (100) in or near an elevator car. Initialization could include, for example, the powering on and automatic or manual calibration of cameras to account for background noise such as light sources or reflective surfaces, or to account for the size and other characteristics of the space within which they are installed. FIG. 2 shows one example of a camera installation within an elevator car (200). In this embodiment, a thermal camera (202) is placed near the ceiling of an elevator car (200) so that it can capture a thermal image of the interior of the elevator car (200). The thermal camera (202) may be any device that can capture a representation of temperature variations of objects within its field of view, for example, a Grid-Eye Array Sensor, MLX90621, HTPA32x31, or similar device. A depth camera (204) is placed near the thermal camera (202) such that the cameras (202, 204) share a similar field of view of the interior of the elevator car (200). The depth camera (204) may be any device that can capture a depth field or 3D representation of objects within its field of view, for example, an Asus Xtion, Microsoft Kinect, PrimeSense Carmine, or similar device. The cameras (202, 204) may be communicatively coupled with processing devices, such as an elevator controller or image processor, via a local area network, wireless area network, or data cable so that acquired image data can be stored, analyzed, or manipulated by other components of the elevator system.
[0017] While the example shown in FIG. 2 shows the cameras (202, 204) placed within the elevator car, it is also possible that the disclosed technology could be implemented in a configuration in which cameras (202, 204) would be placed in a lobby area outside of an elevator hoistway and positioned so that their fields of view capture occupants waiting for an elevator car, or occupants who recently entered an elevator car. Alternatively, the disclosed technology could be implemented in configurations in which cameras (202, 204) would be placed in a space between the lobby and the elevator car, such as the hoistway door jamb. The placement of cameras (202, 204) is flexible and will vary by embodiment to fit the particular elevator system with which they are used. Additionally, while the example shown in FIG. 2 shows a single thermal camera (202) and a single depth camera (204), it is also possible that the disclosed technology could be implemented in a manner in which multiple cameras of each type, or multiple cameras of one type and a single camera of the other, could be used to gather data for use in load weight determinations.
[0018] Once the cameras (202, 204) are ready for use (100), the system may capture a depth image and perform a depth classification (102). The depth classification (102) may be performed by analyzing captured depth image data provided by the depth camera (204) in order to identify discrete objects within the elevator car and provide a provisional depth classification as to whether each identified object is a human or non-human object. Similarly, the thermal camera (202) may be used to capture thermal image data that can be used to provide a provisional thermal classification (104) as to whether each discrete object is a human or non-human object. The determination of the depth classification (102) and the thermal classification (104) can occur in parallel in some embodiments, such as those where the classification for each is self-contained and not dependent upon the other. In other embodiments, such as that shown in FIG. 1, the depth classification (102) may be used as a factor in determining the thermal classification (104). For example, the number of discrete objects identified within a depth image could be used to identify discrete objects within a thermal image having the same or similar field of view. Similarly, the thermal classification (104) may be used as a factor in determining the depth classification. For example, if a thermal image indicates that an object with a humanlike heat signature is standing behind or near an object without a heat signature, this information could be used with a depth image to separate what initially appears to be a single object into two discrete objects. The depth classification (102) and thermal classification (104) can be combined, along with any other indicators, to produce a final classification (106) that identifies and classifies each discrete object within the elevator car as being human or non-human. The depth data, thermal data, and final classifications (106) can be used to allow the elevator controller, or another processing device, to calculate the weight of all identified objects classified as human (108) and calculate the weight of all identified objects classified as non-human (110). Calculated weights (108, 110) may be used, for example, to prioritize dispatch (112) of elevator cars to optimize the safety, comfort, and efficiency of the elevator system.
[0019] Turning now to FIG. 3, that figure shows a flowchart of a set of steps that a system could perform to classify objects based upon depth imaging. In this embodiment, a depth image is acquired (300) from the depth camera (204) and made available to the image processor. The image processor uses object recognition software to identify the total number of separate and distinct objects (302) within the elevator car. Once separate objects have been identified (302), the collection of separate objects can be iterated through and classified until each has been classified (304). The collection of separate objects may be iterated through in the order that they were identified as separate objects, in reverse order, or in any other order that may best take advantage of a particular hardware configuration's capabilities. In order to classify an object, an image processor can apply object recognition software in order to classify (308) whether a distinct object is a human or not. Object recognition software would accept depth images as input and identify objects, such as human forms. Examples of object recognition software could include OpenCV, OpenNI, RealSense SDK, JavaFX, or similar software. In some embodiments, the object recognition software will use particle filtering, a Bayesian numerical approximation method that can assist in human tracking and building articulated human models based upon depth imaging. Particle filtering has four basic steps: resampling from the previous N particles, propagating to apply temporal dynamics, weighting by likelihood, and estimating the posterior. See Table 1 below for an example of a mathematical representation of the steps of particle filtering.
Table 1: Example mathematical representation of the particle filtering steps

1. Resample: draw N particles $x_{k-1}^{(i)}$, $i = 1, \ldots, N$, from the previous weighted particle set $\{ (x_{k-1}^{(j)}, w_{k-1}^{(j)}) \}_{j=1}^{N}$.
2. Propagate: apply the temporal dynamics, $x_k^{(i)} \sim p(x_k \mid x_{k-1}^{(i)})$.
3. Weight: weight each particle by the measurement likelihood, $w_k^{(i)} \propto p(Z_k \mid x_k^{(i)})$.
4. Estimate: approximate the posterior, $p(x_k \mid Z_{1:k}) \approx \sum_{i=1}^{N} w_k^{(i)} \, \delta(x_k - x_k^{(i)})$.

In these expressions, $p(x_k \mid Z_{1:k})$ is the posterior probability, x represents the hidden states, Z represents the observable states, and k represents the time step.
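The four steps in Table 1 can be illustrated for a one-dimensional hidden state. The following is a minimal sketch assuming simple Gaussian motion and measurement models; it is not the articulated human model described in the text, and the parameter values are illustrative.

```python
import math
import random

def likelihood(z, x, sigma):
    """Gaussian measurement likelihood p(Z_k | x_k), a stand-in model."""
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

def particle_filter_step(particles, weights, z_k,
                         motion_sigma=0.5, meas_sigma=1.0):
    """One iteration of the four steps in Table 1 for a 1-D state x."""
    n = len(particles)
    # 1. Resample from the previous N particles according to their weights.
    resampled = random.choices(particles, weights=weights, k=n)
    # 2. Propagate: apply temporal dynamics p(x_k | x_{k-1}).
    propagated = [x + random.gauss(0.0, motion_sigma) for x in resampled]
    # 3. Weight each particle by the measurement likelihood p(Z_k | x_k).
    w = [likelihood(z_k, x, meas_sigma) for x in propagated]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    # 4. Estimate: summarize the posterior by its weighted mean.
    estimate = sum(wi * xi for wi, xi in zip(w, propagated))
    return propagated, w, estimate

# One step with three particles observing Z_k = 1.0.
particles, weights, estimate = particle_filter_step(
    [0.0, 0.5, 1.0], [1 / 3, 1 / 3, 1 / 3], z_k=1.0)
print(round(estimate, 2))
```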
[0020] If the object recognition software suggests that a particular object is human, a confidence score can be generated representing the likelihood that the particular object is human. If the object recognition software suggests that a particular object is non-human, a confidence score can be generated representing the likelihood that the particular object is non-human. As each object is classified by the object recognition software as being human or non-human, this information, as well as its related confidence score, is preserved in a data structure. In some embodiments, where an object cannot be classified as human or non-human by the object recognition software, it may be classified as unknown. Once all distinct objects have been classified (304), the identification, classification, and confidence scores can be stored (306) in a database, cache, memory, or other storage medium for further use by the image processor and elevator controller. In some embodiments, no separate storage (306) of data would take place other than what would inherently occur as part of the normal operation of the system.
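The per-object record described above might be structured as follows. This is a minimal sketch; the field names are illustrative assumptions rather than a structure specified by the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectRecord:
    """Identification, classifications, and confidence scores for one object."""
    object_id: int
    depth_class: Optional[str] = None       # "human", "non-human", or "unknown"
    depth_confidence: Optional[float] = None
    thermal_class: Optional[str] = None
    thermal_confidence: Optional[float] = None
    final_class: Optional[str] = None

record = ObjectRecord(object_id=1, depth_class="human", depth_confidence=0.8)
```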
[0021] Turning now to FIG. 4, that figure shows a flowchart of a set of steps that a system could perform to classify objects based upon thermal imaging. In this embodiment, a thermal image is acquired (400) from the thermal camera (202) and made available to the image processor. The image processor will retrieve depth data from the depth camera (204) having a similar field of view and captured at a similar time frame as the recently acquired thermal image (400) and map the thermal image to the depth image (402). By mapping the thermal image (400) to the depth image (402), the image processor can determine the thermal data for a distinct object identified by the depth camera (204). In an alternative embodiment, the depth image can be mapped to the thermal image so that the image processor can determine the depth data for a distinct object identified by the thermal camera (202). For example, if a depth image indicates that a distinct object is located at a particular location (e.g., within a particular cell or cells used by a grid eye infrared array sensor to subdivide its field of view) within the elevator car, the same distinct object will be located in a similar position within the thermal image, since the thermal camera (202) is situated in the elevator car such that it shares a common field of view with the depth camera (204). In some embodiments, where it may be desirable to install the thermal camera (202) and depth camera (204) in a way that does not result in a common field of view, mapping the thermal image to the depth image (402) may be performed using some preparatory image transformation.
[0022] For example, FIG. 8 shows a top-down perspective view of the field of view of two cameras. In FIG. 8, the cameras (800, 802) are installed in such a way that they are pointed in the same direction but have a static distance between the mid-point of each lens. Due to the offset, the cameras (800, 802) each have a unique field of view (804, 806) as well as a common field of view (808). The resultant images of each camera (800, 802) could be mapped to each other by selecting the portion of each image that represents the common field of view (808), and discarding the remainder of each image (804, 806). This image transformation could be manually configured at the time of installation or could be automatically configured by comparison of background features of an empty elevator car between the two images. Other image transformations might be needed where cameras are installed on opposite sides of an elevator car, which would require an image to be mirrored in order to arrive at a comparable field of view, or where one camera is placed closer to the target field of view than the other, which would require the far image to be zoomed and cropped in order to arrive at a comparable field of view.
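The common-field-of-view mapping of FIG. 8 can be sketched as a crop followed by a resample, assuming NumPy is available. The crop offsets below are illustrative stand-ins for values that would be measured at installation or derived from background features of an empty car.

```python
import numpy as np

def crop_to_common_fov(image, left, right, top, bottom):
    """Discard the unique fields of view (804, 806), keeping the overlap (808)."""
    h, w = image.shape[:2]
    return image[top:h - bottom, left:w - right]

def map_thermal_to_depth(thermal, depth, thermal_crop, depth_crop):
    """Crop both images to the common FOV and resample thermal to depth size."""
    t = crop_to_common_fov(thermal, *thermal_crop)
    d = crop_to_common_fov(depth, *depth_crop)
    # Nearest-neighbor resampling of the low-resolution thermal grid so that
    # each depth pixel has a corresponding temperature reading.
    rows = np.arange(d.shape[0]) * t.shape[0] // d.shape[0]
    cols = np.arange(d.shape[1]) * t.shape[1] // d.shape[1]
    return t[np.ix_(rows, cols)], d

# Example: an 8x8 thermal grid (e.g., a Grid-Eye) and a 480x640 depth image,
# with hypothetical (left, right, top, bottom) crop offsets per camera.
thermal = np.random.rand(8, 8)
depth = np.random.rand(480, 640)
t_mapped, d_common = map_thermal_to_depth(thermal, depth,
                                          (1, 1, 0, 0), (80, 80, 0, 0))
print(t_mapped.shape, d_common.shape)  # (480, 480) (480, 480)
```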
[0023] Once the thermal image is mapped to the depth image (402) so that distinct objects can be identified within the thermal image, any objects that have not been classified based upon the thermal image (404) can be examined. If the thermal image data indicates that an object is human, it can receive a thermal classification (408) as human and a confidence score can be generated representing the likelihood that the object is human. Thermal data may indicate that an object is human in a variety of ways: for example, an object having a temperature between 90 and 100 degrees Fahrenheit, an object showing thermal patterns that suggest a torso, arms, legs, and head, or an object having higher temperatures close to a center, such as a torso, and lower temperatures at extremities, such as arms and legs, could all serve as indicators of a human object and can also be used as factors when calculating a thermal confidence score. If the thermal data classifies (408) an object as non-human, a confidence score can be generated representing the likelihood that the object is non-human.
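By way of non-limiting illustration, a temperature-based confidence factor of the kind described here (using, for concreteness, the 98, 30, and 140 degree anchors of the worked example in the following paragraph) might be sketched as a piecewise-linear mapping; this is an assumption for illustration, not the disclosed scoring method.

```python
def thermal_human_confidence(temp_f: float,
                             peak_f: float = 98.0,
                             peak_conf: float = 0.95,
                             zero_low_f: float = 30.0,
                             zero_high_f: float = 140.0) -> float:
    """Confidence (0.0-1.0) that an object is human, from temperature alone.
    Confidence peaks near normal body temperature and falls off linearly."""
    if temp_f <= zero_low_f or temp_f >= zero_high_f:
        return 0.0
    if temp_f < peak_f:
        return peak_conf * (temp_f - zero_low_f) / (peak_f - zero_low_f)
    return peak_conf * (zero_high_f - temp_f) / (zero_high_f - peak_f)

print(thermal_human_confidence(98.0))   # ~0.95: strongly suggests a human
print(thermal_human_confidence(72.0))   # lower: room-temperature object
```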
[0024] Thermal data may indicate that an object is non-human in a variety of ways: for example, an object having the same temperature as the elevator car floor or wall, an object having a temperature below 90 degrees Fahrenheit, and an object having a steady temperature across its entire mass could all serve as indicators of a non-human object and can also be used as factors when calculating a thermal confidence score. These factors could be used in calculating a thermal confidence score by combining a weighted value from each factor to arrive at a probability indicator. For example, an object having a temperature of 98 degrees Fahrenheit could be valued at a 95% confidence that the object is human, while variations above or below 98 degrees Fahrenheit could gradually decrease the confidence, such that a temperature of either 30 or 140 degrees Fahrenheit could correspond to a 0% confidence that the object is human. In some embodiments, where an object cannot be classified as human or non-human based upon the thermal data, it may be classified as unknown. Once all distinct objects have received a thermal classification (404), the thermal classifications and confidence scores can be stored (406) in a database, cache, memory, or other storage medium as part of the object data structure for further use by the image processor and elevator controller. In some embodiments, no separate storage (406) of data would take place other than what would inherently occur as part of the normal operation of the system.

[0025] Turning now to FIG. 5, that figure shows a flowchart of a set of steps that a system could perform to determine a final classification for objects. In this embodiment, if there are objects that have not received a final classification (500), the system will check to see if there is a depth confidence score related to the object (502). If a depth confidence score is available (502), the depth confidence score will be selected and retrieved for use (504) as a factor suggesting that the object is human or non-human, and the system will proceed to check for a thermal confidence score (506). If no depth confidence score is available, the system will proceed to check for a thermal confidence score (506). If a thermal confidence score is available (506), the thermal confidence score will be selected and retrieved for use (508) as a factor suggesting that the object is human or non-human, and the system will proceed to check for any other factors influencing confidence (510). If no thermal confidence score is available, the system will proceed to check for other factors influencing confidence (510). If other factors influencing confidence are available (510), the other factors will be selected and retrieved for use (512) as factors suggesting that the object is human or non-human, and a final classification of the object will be determined (514).
[0026] If no other factors influencing confidence are available (510), a final classification of the object will be determined (514). In some embodiments, where no final classification can be determined for an object due to the object being classified as unknown by one or both of the thermal classification and the depth classification, or where confidence scores from various classifications offset and result in an indeterminable final classification, a final classification of unknown may be assigned. In such an embodiment, an object of unknown classification could be assigned a configurable default classification. For example, if a non-human object weighs more than a human object of the same size, a default classification of non-human could be configured so that total loads will be overestimated rather than underestimated. Alternately, an unknown classification could cause an object to have its weight calculated as both a human object and a non-human object, with a final weight determined by an average of the two. Once all objects have been classified (500), the final classifications can be stored (516) as part of the object data structure to a database, cache, memory, or other storage medium and made available for further use by the image processor and elevator controller. In some embodiments, no separate storage (516) of data would take place other than what would inherently occur as part of the normal operation of the system.
[0027] Other factors that could influence confidence scores beyond thermal imaging and depth imaging could include, for example, a sound detecting device that can detect human breathing or heart rate, an RFID reader that determines the total number of humans in the elevator by scanning elevator access cards, a motion sensing door counter that counts humans as they enter the elevator, an elevator controller which reports the number of floors selected for disembarkation, or other devices or data which could provide an indication of the number of humans in an elevator car. For example, in a scenario where provisional classifications indicate the presence of more humans than an RFID scanner or door counter reports, the confidence scores of all provisional human classifications could be lowered, which could result in a low confidence human classification becoming a non-human (or unknown) classification.
[0028] Determining a final classification (514) from one or more confidence scores may be performed in a variety of ways. In some embodiments, each confidence score could be equally weighted and combined or compared in order to classify an object. For example, if an object's depth confidence score indicated 50% confidence that it was non-human, a thermal confidence score indicated 55% confidence that it was human, and no other information indicating whether that object was human or non-human was available, the object's final classification could be determined (514) as human based upon the higher thermal confidence. In another embodiment, a weighted combination of thermal confidence and depth confidence could be configured if the confidence from one device is valued above another. For example, if a depth confidence score indicated 50% confidence that an object was non-human, and a thermal confidence score indicated 30% confidence that the object was human, the thermal confidence might be considered twice as valuable as the depth confidence due to the accuracy and simplicity of the results of the thermal image; in a final comparison the thermal confidence would then be weighted to 60% confidence that the object was human, resulting in a final classification (514) that the object is human.
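By way of non-limiting illustration, both the equally weighted comparison and the device-weighted comparison described above might be sketched as follows; the labels, weights, and tie handling are illustrative assumptions.

```python
def fuse_confidences(scores, weights=None):
    """Fuse per-sensor (label, confidence) scores into a final classification.
    Each confidence is 0.0-1.0; weights let one device outvote another."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = 0.0
    for (label, conf), w in zip(scores, weights):
        total += w * conf if label == "human" else -w * conf
    if total > 0:
        return "human"
    if total < 0:
        return "non-human"
    return "unknown"  # offsetting scores leave the object indeterminate

# Equal weighting: 55% human (thermal) narrowly beats 50% non-human (depth)
print(fuse_confidences([("non-human", 0.50), ("human", 0.55)]))  # human

# Thermal valued twice as much: 2 x 30% = 60% human beats 50% non-human
print(fuse_confidences([("non-human", 0.50), ("human", 0.30)], [1.0, 2.0]))  # human
```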
[0029] In another embodiment, a third factor, such as data derived from a door counter that suggests a maximum number of occupants, could be used in combination with thermal and/or depth confidence to determine a final classification (514). For example, objects could be considered as a group rather than in isolation, and if the total number of objects classified as human exceeds the number of occupants indicated by the door counter, the human confidence scores could be weighted lower to reflect a loss of confidence in their accuracy based upon the door counter data. In some embodiments, a third factor could be provided by one or more of the thermal camera (202) and depth camera (204). For example, while a thermal image and depth image might initially classify an object as being human, a depth image might also indicate that the object is of a height and shape suggesting that the object is instead a service animal. In such a scenario, this factor could influence the confidence score such that the object can be more accurately classified as a non-human object. Other methods for determining a final classification based upon one or more confidence factors will be apparent in light of this disclosure.
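By way of non-limiting illustration, a group-level correction of the kind described above might be sketched as follows; the scaling penalty is an assumed value, not a disclosed parameter.

```python
def apply_door_counter(human_confidences, reported_occupants, penalty=0.8):
    """Scale down human confidence scores when the provisional human count
    exceeds the occupant count reported by a door counter or RFID reader."""
    if len(human_confidences) <= reported_occupants:
        return human_confidences  # no conflict; leave scores untouched
    return [c * penalty for c in human_confidences]

scores = [0.9, 0.8, 0.55]          # three provisional humans
adjusted = apply_door_counter(scores, reported_occupants=2)
print(adjusted)  # ~[0.72, 0.64, 0.44]: the weakest score may now flip class
```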
[0030] Turning now to FIG. 6, that figure shows a flowchart of a set of steps that a system could perform to determine the weight of human objects. In this embodiment, when there are human objects to weigh (600), the depth image for an object could be examined in order to determine the total volume (602) of the object. The thermal image could be compared to the depth image (604) to determine if any reduction of volume may be necessary. For example, if the comparison of the total volume of a human to the thermal volume of that human indicates a bulky garment, such as a heavy coat, raincoat, or other garment that might make the overall volume of the human appear larger (606) than it actually is, the calculated volume (602) of the human could be reduced (608) by a factor to account for the difference in volume added by the garment. Similarly, if the comparison of the total volume to the thermal volume indicates that a non-human load is being carried by a human, such as a backpack, courier bag, or some other object carried close to the body that might appear to a depth camera to be part of the same object (610), the calculated volume (602) of the human could be reduced (612) by a factor to account for the difference in volume added by the carried object. Carried objects could have a weight determination made as part of the non-human object weight calculation steps shown in FIG. 7 and described below, or could be assigned a static weight, representative of the average weight of carried bags, to be added to the carrier's calculated weight. Once an accurate volume is determined, the weight of the human can be calculated based upon volume (614). Once all human objects have been weighed (600), the weight of the human objects could be totaled and stored in a database, memory, cache, or other storage medium (616).
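By way of non-limiting illustration, the volume reductions of FIG. 6 might be sketched as follows; the thermal-to-depth volume ratio thresholds and the garment factor are illustrative assumptions.

```python
def corrected_human_volume(depth_volume_cc: float,
                           thermal_volume_cc: float,
                           garment_factor: float = 0.85) -> float:
    """Reduce a depth-derived human volume when the thermal image shows a
    smaller warm body, suggesting a bulky garment or a carried object."""
    ratio = thermal_volume_cc / depth_volume_cc
    if ratio >= 0.95:          # depth and thermal volumes agree closely
        return depth_volume_cc
    if ratio >= 0.70:          # moderate mismatch: likely a bulky garment
        return depth_volume_cc * garment_factor
    # Large mismatch: treat the cold excess as a carried object and keep
    # only the warm (human) portion; the carried part is weighed separately.
    return thermal_volume_cc

print(corrected_human_volume(70_000, 55_000))  # garment case -> 59500.0
```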
[0031] Determining weight from volume (614) could be performed in a variety of ways. In some embodiments, a static unit of mass per unit of volume could be provided, based upon testing or available data, in order to estimate weight. For example, the system could be configured to calculate each cubic centimeter of human volume as weighing 1 gram, such that a human body with a total volume of 60,000 cubic centimeters, at 1 gram per cubic centimeter, would be calculated to weigh 60 kilograms, or about 132 pounds. In some embodiments, different values for unit of mass per unit of volume could be provided for different areas of the human body, which could result in more accurate final measurements. In such an embodiment, one cubic centimeter of leg volume could, for example, be calculated as 1.1 grams, since the legs are likely to contain more high-density muscle and bone as compared to the arms or torso. The weight for each limb could be separately calculated and added to determine the total weight from volume (614). In other embodiments, a multiple linear regression model for soft biometric estimation could be used. In such an embodiment, a head, torso, leg, and arm volume model could be built based upon depth imaging. Outliers could be filtered out by using a moving median or random sample consensus ("RANSAC") on the point clouds. Length and circumference of head, torso, legs, and arms could be determined from the volume model and used in an equation to determine weight. See Table 2 below for an example of such an equation. Other methods of calculating body weight based upon the depth image will be apparent in light of this disclosure.
weight = -122.27
+ 0.48*(overall height)
- 0.17*(upper leg length)
+ 0.52*(calf circumference)
+ 0.16*(upper arm length)
+ 0.77*(upper arm circumference)
+ 0.49*(waist circumference)
+ 0.58*(upper leg circumference)
Table 2: Example equation for determining body weight
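By way of non-limiting illustration, the Table 2 equation can be evaluated directly once the length and circumference measurements have been extracted from the depth-derived body model; the sample measurements below are hypothetical, and units are assumed to match those used when the regression was fitted.

```python
def body_weight_from_measurements(m: dict) -> float:
    """Evaluate the Table 2 regression; m holds body measurements extracted
    from the depth image (units as used when the regression was fitted)."""
    return (-122.27
            + 0.48 * m["overall_height"]
            - 0.17 * m["upper_leg_length"]
            + 0.52 * m["calf_circumference"]
            + 0.16 * m["upper_arm_length"]
            + 0.77 * m["upper_arm_circumference"]
            + 0.49 * m["waist_circumference"]
            + 0.58 * m["upper_leg_circumference"])

# Hypothetical measurements purely to show the call shape
sample = {"overall_height": 175, "upper_leg_length": 45,
          "calf_circumference": 37, "upper_arm_length": 33,
          "upper_arm_circumference": 30, "waist_circumference": 85,
          "upper_leg_circumference": 55}
print(body_weight_from_measurements(sample))  # ~75.25 with these sample values
```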
[0032] Turning now to FIG. 7, that figure shows a flowchart of a set of steps that a system could perform to determine the weight of non-human objects. If there are non-human objects to weigh (700), the volume of an object is calculated (702) based upon the depth image and using the object recognition software. After volume has been calculated (702), a weight multiplier can be determined for the object that can be used in order to calculate the weight of the object. A weight multiplier may be determined based upon information known about the object, such as whether it is being carried or suspended, whether it is resting on the floor or on a cart, or other factors. For example, depth information for an object could be examined to determine if the object is being carried or suspended above the ground (704). If the object is suspended above the ground by some means, a mass multiplier could be selected (706) representing the likely mass per volume characteristics of carried objects. In this manner, a bag being carried by an occupant could be assigned a low mass per volume, since it is unlikely that an extremely heavy object would be suspended above the ground by an occupant. Alternately, if an object is placed on the ground (708), it may indicate that the object is on a cart, dolly, or other wheeled device, or is too heavy to easily suspend above the ground. In such a case, a ground weight multiplier could be selected (710) for the object, giving it a fairly high mass per volume characteristic.

[0033] If it cannot be determined that an object falls into a type that has a special weight multiplier, a standard weight multiplier could be selected (712), giving the object a moderate mass per volume characteristic representative of the average mass per volume of objects likely to be taken on elevators, such as books, papers, computers, liquids, foods, or other objects, which may vary depending upon the particular location and intended use of an elevator car. Once the mass per volume multiplier is determined, the weight of the object can be determined (714) by using the volume of the object, based upon the depth image and provided by the object recognition software, and the selected mass per volume multiplier for the object. When there are no remaining non-human objects to weigh (700), the non-human weights can be stored to a database, memory, cache, or other storage medium (716).
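By way of non-limiting illustration, the multiplier selection of FIG. 7 might be sketched as follows; the mass per volume values are illustrative assumptions, not disclosed parameters.

```python
def nonhuman_weight_kg(volume_cc: float, suspended: bool, on_ground: bool) -> float:
    """Estimate a non-human object's weight from its volume and how it is
    supported; multipliers are grams per cubic centimeter."""
    if suspended:
        multiplier = 0.2   # carried bags: unlikely to be extremely dense
    elif on_ground:
        multiplier = 1.5   # carts and heavy items resting on the floor
    else:
        multiplier = 0.7   # standard mix of books, liquids, electronics
    return volume_cc * multiplier / 1000.0  # grams -> kilograms

print(nonhuman_weight_kg(20_000, suspended=True, on_ground=False))   # 4.0 kg bag
print(nonhuman_weight_kg(200_000, suspended=False, on_ground=True))  # 300.0 kg cart
```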
[0034] In some embodiments, data from depth (102), thermal (104) and final (106) classification may be preserved and integrated into future classifications so that the classification process may be adaptively improved over time. Such adaptive improvements could be implemented, for example, by way of an artificial intelligence structure, such as a neural network, by way of a data structure, such as an object comparison and lookup tree, or through similar means. A neural network adaptive classification could track a plurality of inputs and outputs from the classification process and organize them so that future output data can be more efficiently generated by examining future input data and analyzing it based upon similarities to historical input data. A data structure adaptive classification could store a plurality of input data in a manner that allows for rapid lookup of its resultant classification, which could be used to quickly classify objects in the case of an exact match, or which could be used as an additional confidence factor during classification. The exact implementation of adaptive classification may depend upon the desired result, as some implementations may result in increased speed of classification, while others may result in increased accuracy of classification. Such variations in implementation will be apparent in light of this disclosure.
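By way of non-limiting illustration, the data structure variant of adaptive classification could be as simple as a lookup table keyed on quantized input features, usable as an exact-match shortcut or as an additional confidence factor; the quantization scheme below is an illustrative assumption.

```python
class ClassificationCache:
    """Remember past (features -> final classification) outcomes so similar
    future objects can be classified quickly or confidence-boosted."""
    def __init__(self, bucket_f: float = 5.0):
        self.bucket_f = bucket_f   # quantization step for temperature
        self.seen = {}

    def _key(self, temp_f: float, volume_cc: float):
        # Coarse quantization so near-identical objects share a key
        return (round(temp_f / self.bucket_f), round(volume_cc / 10_000))

    def record(self, temp_f, volume_cc, final_class):
        self.seen[self._key(temp_f, volume_cc)] = final_class

    def lookup(self, temp_f, volume_cc):
        return self.seen.get(self._key(temp_f, volume_cc))  # None if unseen

cache = ClassificationCache()
cache.record(97.5, 62_000, "human")
print(cache.lookup(98.2, 60_000))  # "human": matches a previously seen object
```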
[0035] The combined weight of the human and non-human objects (108, 110) can be determined and communicated to the elevator dispatch controller (112) for appropriate action. Actions taken based upon reported weight may vary by embodiment. In some embodiments, an elevator car that is operating near its maximum load weight could be placed into a floor bypass mode which ignores further floor calls until the load weight is reduced. In some embodiments, an elevator car that is operating at a low load weight could be prioritized to answer floor calls. In some embodiments, where depth and thermal cameras (202, 204) are placed in a lobby outside an elevator hoistway, and it is determined that the load weight of individuals waiting at a floor stop is near the maximum load weight of an elevator car, an empty elevator car could be prioritized to address that floor call. In some embodiments, where it is determined that the load weight of passengers waiting at a floor stop is less than the available load weight for a partially loaded elevator car, the partially loaded elevator car could be prioritized for dispatch to that floor stop. In some embodiments, an elevator car that is determined to be empty based upon thermal and depth imaging could have all current floor stops canceled, to prevent unnecessary floor stops. Other variations on actions taken by an elevator controller or dispatch controller will be apparent in light of this disclosure.
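By way of non-limiting illustration, the dispatch decisions described above might reduce to logic of the following shape; the thresholds and mode names are illustrative assumptions, not the disclosed control scheme.

```python
def dispatch_action(total_weight_kg: float, max_load_kg: float,
                    has_floor_stops: bool) -> str:
    """Pick an elevator-controller action from the camera-based load estimate."""
    if total_weight_kg == 0 and has_floor_stops:
        return "cancel-floor-stops"      # empty car: skip unnecessary stops
    if total_weight_kg >= 0.95 * max_load_kg:
        return "floor-bypass"            # near max load: ignore new hall calls
    if total_weight_kg <= 0.25 * max_load_kg:
        return "prioritize-hall-calls"   # lightly loaded: good pickup candidate
    return "normal"

print(dispatch_action(0, 1000, has_floor_stops=True))     # cancel-floor-stops
print(dispatch_action(960, 1000, has_floor_stops=False))  # floor-bypass
```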
[0036] The following embodiments relate to various non-exhaustive ways in which the teachings herein may be combined or applied. It should be understood that the following embodiments are not intended to restrict the coverage of any claims that may be presented at any time in this document or in subsequent filings based on this document. No disclaimer is intended. The following embodiments are being provided for nothing more than merely illustrative purposes. It is contemplated that the various teachings herein may be arranged and applied in numerous other ways. It is also contemplated that some embodiments may omit certain features referred to in the below embodiments. Therefore, none of the aspects or features referred to below should be deemed critical unless otherwise explicitly indicated as such at a later date by the inventors or by a successor in interest to the inventors. If any claims are presented in this document or in subsequent filings related to this document that include additional features beyond those referred to below, those additional features shall not be presumed to have been added for any reason relating to patentability.
Embodiment 1

A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; determine a classification for each object of the set of discrete objects based upon the set of image data; estimate a weight for each object of the set of discrete objects based upon the classification; and provide a total weight estimate, based upon the weight for each object of the set of discrete objects, to the elevator controller; and wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.
Embodiment 2
The system of Embodiment 1, wherein the elevator controller and the image processor share a processor and a memory.
Embodiment 3
The system of any of Embodiments 1-2, further comprising a passenger waiting area, wherein the elevator car is configured to travel to the passenger waiting area, wherein the first camera location and the second camera location are located at the passenger waiting area.
Embodiment 4
The system of Embodiment 3, wherein the elevator controller is configured to: determine an additional occupancy weight based upon a maximum occupancy weight that is configured for the elevator car and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, prioritize sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, and when the current occupancy weight indicates that the elevator car is empty, prioritize sending the elevator car to the passenger waiting area.
Embodiment 5
The system of Embodiment 4, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.
Embodiment 6
The system of Embodiment 1, wherein the first camera location and the second camera location are within the elevator car, and wherein the elevator controller is configured to: when the total weight estimate indicates that there are no passengers in the elevator car, cancel any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, place the elevator car into a floor bypass mode.
Embodiment 7
The system of any of Embodiments 1-6, wherein the image processor is further configured to: map the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.
Embodiment 8
The system of Embodiment 7, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determine a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determine a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.
Embodiment 9
The system of Embodiment 8, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a tertiary classification confidence score based upon one or more of: information from a sound detecting device indicating the presence of passengers; information from an RFID reader indicating the presence of passenger key cards; information from a motion sensing door counter indicating the number of passengers that entered the elevator car; and information from the elevator controller indicating the number of floors selected for disembarkation; and determine the final classification confidence score based upon the thermal classification confidence score, the spatial classification confidence score, and the tertiary classification confidence score.
Embodiment 10
The system of any of Embodiments 7-9, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, causes the image processor to, for each object from the set of discrete objects that is classified as a human: determine a volume of the object based upon the subset of the set of spatial data associated with the object; determine a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reduce the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determine the weight based upon the volume of the object.
Embodiment 11
The system of any of Embodiments 7-10, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, causes the image processor to, for each object from the set of discrete objects that is classified as a non-human: determine a volume of the non-human object based upon the subset of the set of spatial data associated with the non-human object; determine if the non-human object is being carried by a human object based upon the subset of the set of spatial data associated with the non-human object and the subset of the set of thermal data associated with the non-human object; if the non-human object is being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a carried object weight calculation; and if the non-human object is not being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a heavy object weight calculation.
Embodiment 12
A method comprising the steps: at an image processor, receiving a set of image data from a depth camera at a first camera location and a thermal camera at a second camera location, wherein the set of image data comprises a set of thermal data and a set of spatial data; identifying a set of discrete objects based upon the set of image data; determining a classification for each object of the set of discrete objects based upon the set of image data; estimating a weight for each object of the set of discrete objects based upon the classification; and providing a total weight estimate to an elevator controller, the total weight estimate based upon the weight for each object of the set of discrete objects; wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.
Embodiment 13
The method of Embodiment 12, wherein the one or more elevator cars are configured to travel to a passenger waiting area, wherein the first camera location and the second camera location are located at the passenger waiting area.
Embodiment 14
The method of Embodiment 13, further comprising the steps: determining an additional occupancy weight based upon a maximum occupancy weight that is configured for an elevator car of the one or more elevator cars and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, prioritizing sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, prioritizing sending the elevator car to the passenger waiting area.
Embodiment 15
The method of Embodiment 14, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.
Embodiment 16

The method of Embodiment 12, wherein the first camera location and the second camera location are within the elevator car, further comprising the steps: when the total weight estimate indicates that there are no passengers in the elevator car, canceling any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, placing the elevator car into a floor bypass mode.
Embodiment 17
The method of any of Embodiments 12-16, further comprising the steps: at the image processor, mapping the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identifying a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.
Embodiment 18
The method of Embodiment 17, wherein the step of determining the classification for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects: at the image processor, determining a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determining a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determining a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.
Embodiment 19

The method of any of Embodiments 17-18, wherein the step of estimating the weight for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects: at the image processor, determining a volume of the object based upon the subset of the set of spatial data associated with the object; determining a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reducing the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determining the weight based upon the volume of the object.
Embodiment 20
A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein a depth camera field of view and a thermal camera field of view overlap; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; map the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data; determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determine a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; determine a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score; when the final classification confidence score indicates that the object is a human, determine a volume of the object based upon the subset of the set of spatial data associated with the object; determine a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reduce the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; determine the weight based upon the volume of the object; and provide a total weight estimate, based upon the weight for each object of the set of discrete objects, to the elevator controller; and wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.
Further variations on, features for, and applications of the inventors' technology will be apparent to, and could be practiced without undue experimentation by, those of ordinary skill in the art in light of this disclosure. Accordingly, the protection accorded by this document, or by any related document, should not be limited to the material explicitly disclosed herein. Accordingly, we claim:

Claims

1. A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; determine a classification for each object of the set of discrete objects based upon the set of image data; estimate a weight for each object of the set of discrete objects based upon the classification; and provide a total weight estimate, based upon the weight for each object of the set of discrete objects, to the elevator controller; and wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.
2. The system of claim 1, wherein the elevator controller and the image processor share a processor and a memory.
3. The system of claim 1 or 2, further comprising a passenger waiting area, wherein the elevator car is configured to travel to the passenger waiting area, wherein the first camera location and the second camera location are located at the passenger waiting area.
4. The system of claim 3, wherein the elevator controller is configured to: determine an additional occupancy weight based upon a maximum occupancy weight that is configured for the elevator car and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, prioritize sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, and when the current occupancy weight indicates that the elevator car is empty, prioritize sending the elevator car to the passenger waiting area.
5. The system of claim 4, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.
6. The system of claim 1 or 2, wherein the first camera location and the second camera location are within the elevator car, and wherein the elevator controller is configured to: when the total weight estimate indicates that there are no passengers in the elevator car, cancel any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, place the elevator car into a floor bypass mode.
7. The system according to any of the preceding claims, wherein the image processor is further configured to: map the set of thermal data to the set of spatial data to create a thermal spatial overlay; and for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.
8. The system of claim 7, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determine a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determine a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.
9. The system of claim 8, wherein the image processor is configured to determine the classification for each object of the set of discrete objects based on execution of a set of instructions that, when executed, cause the image processor to, for each object from the set of discrete objects: determine a tertiary classification confidence score based upon one or more of: information from a sound detecting device indicating the presence of passengers; information from an RFID reader indicating the presence of passenger key cards; information from a motion sensing door counter indicating the number of passengers that entered the elevator car; and information from the elevator controller indicating the number of floors selected for disembarkation; and determine the final classification confidence score based upon the thermal classification confidence score, the spatial classification confidence score, and the tertiary classification confidence score.
10. The system of claim 7 or 8, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, causes the image processor to, for each object from the set of discrete objects that is classified as a human: determine a volume of the object based upon the subset of the set of spatial data associated with the object; determine a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reduce the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determine the weight based upon the volume of the object.
11. The system of claim 7, 8 or 9, wherein the image processor is configured to estimate the weight for each object of the set of discrete objects based on execution of a set of instructions that, when executed, causes the image processor to, for each object from the set of discrete objects that is classified as a non-human: determine a volume of the non-human object based upon the subset of the set of spatial data associated with the non-human object; determine if the non-human object is being carried by a human object based upon the subset of the set of spatial data associated with the non-human object and the subset of the set of thermal data associated with the non-human object; if the non-human object is being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a carried object weight calculation; and if the non-human object is not being carried by a human object, determine the weight for the non-human object based upon the volume of the non-human object and a heavy object weight calculation.
12. A method comprising the steps: at an image processor, receiving a set of image data from a depth camera at a first camera location and a thermal camera at a second camera location, wherein the set of image data comprises a set of thermal data and a set of spatial data; identifying a set of discrete objects based upon the set of image data; determining a classification for each object of the set of discrete objects based upon the set of image data; estimating a weight for each object of the set of discrete objects based upon the classification; and providing a total weight estimate to an elevator controller, the total weight estimate based upon the weight for each object of the set of discrete objects; wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.
13. The method of claim 12, wherein the one or more elevator cars are configured to travel to a passenger waiting area, wherein the first camera location and the second camera location are located at the passenger waiting area.
14. The method of claim 13, further comprising the steps: determining an additional occupancy weight based upon a maximum occupancy weight that is configured for an elevator car of the one or more elevator cars and a current occupancy weight that is provided by a sensor separate from the depth camera and the thermal camera; when the total weight estimate does not exceed the additional occupancy weight, sending the elevator car to the passenger waiting area; and when the total weight estimate is greater than about 75% of the maximum occupancy weight, sending the elevator car to the passenger waiting area.
15. The method of claim 14, wherein the sensor separate from the depth camera and the thermal camera is a floor sensor in the elevator car.
16. The method according to one of the claims 12-15, wherein the first camera location and the second camera location are within the elevator car, further comprising the steps: when the total weight estimate indicates that there are no passengers in the elevator car, canceling any floor stops that have been requested from a keypad of the elevator car; and when the total weight estimate indicates that the elevator car cannot accept any additional passengers, placing the elevator car into a floor bypass mode.
17. The method according to one of the claims 12-16, further comprising the steps: at the image processor, mapping the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identifying a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data.
18. The method according to one of the claims 12-17, wherein the step of determining the classification for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects: at the image processor, determining a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determining a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; and determining a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score.
19. The method according to one of the claims 12-18, wherein the step of estimating the weight for each object of the set of discrete objects comprises the steps, for each object from the set of discrete objects: at the image processor, determining a volume of the object based upon the subset of the set of spatial data associated with the object; determining a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reducing the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; and determining the weight based upon the volume of the object.
20. A system comprising: an elevator car; a depth camera situated at a first camera location; a thermal camera situated at a second camera location; an image processor communicatively coupled with the depth camera and the thermal camera; and an elevator controller communicatively coupled with the image processor; wherein a depth camera field of view and a thermal camera field of view overlap; wherein the image processor is configured to: receive a set of image data from the depth camera and the thermal camera, wherein the set of image data comprises a set of thermal data and a set of spatial data; identify a set of discrete objects based upon the set of image data; map the set of thermal data to the set of spatial data to create a thermal spatial overlay; for each object of the set of discrete objects, identify a subset of the thermal spatial overlay that is associated with that object, the subset of the thermal spatial overlay comprising a subset of the set of thermal data and a subset of the set of spatial data; determine a thermal classification confidence score, wherein the thermal classification confidence score indicates the likelihood that the object is a human based upon whether the object's temperature falls within a configured range; determine a spatial classification confidence score, wherein the spatial classification confidence score indicates the likelihood that the object is a human based upon whether the object is spatially similar to a human; determine a final classification confidence score based upon the thermal classification confidence score and the spatial classification confidence score; when the final classification confidence score indicates that the object is a human, determine a volume of the object based upon the subset of the set of spatial data associated with the object; determine a portion of the volume of the object that is accounted for by one or more items being held or worn by the object based upon the subset of the set of thermal data associated with the object; reduce the volume of the object by the portion of the volume of the object that is accounted for by the one or more items being held or worn by the object; determine the weight based upon the volume of the object; and provide a total weight estimate, based upon the weight for each object of the set of discrete objects, to the elevator controller; and wherein the elevator controller is configured to control one or more elevator cars based upon the total weight estimate.