WO2018096544A1 - Machine learning in a multi-unit system - Google Patents
- Publication number
- WO2018096544A1 (PCT/IL2017/051289)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- parameter set
- images
- units
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/593—Recognising seat occupancy
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Z—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS, NOT OTHERWISE PROVIDED FOR
- G16Z99/00—Subject matter not provided for in other main groups of this subclass
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
Definitions
- the present invention relates to the field of machine learning, typically in a multi-unit system.
- the invention relates to image and scene analysis using machine learning techniques.
- Computer vision is used to monitor in-door and out-door spaces; in some cases to automatically detect, count and monitor human occupants in a space.
- Machine learning algorithms typically operate by iteratively training a model (also known as a 'network' or 'parameter set') using example inputs (typically manually labeled true and false inputs) in order to make data-driven predictions or decisions. During each iteration, at least a part of the data set is examined, an intermediate calculation result is generated, and this intermediate result is used to create an updated model.
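The per-iteration loop described above — examine part of the data set, produce an intermediate result, use it to create an updated model — can be sketched minimally. The linear model, squared-error gradient, and all names below are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch of one training iteration. The "intermediate result" here
# is the batch gradient; applying it yields the updated parameter set.
def train_iteration(weights, batch, lr=0.1):
    grad = [0.0] * len(weights)
    for features, label in batch:
        # linear prediction with the current parameter set
        pred = sum(w * x for w, x in zip(weights, features))
        err = pred - label  # squared-error derivative factor
        for i, x in enumerate(features):
            grad[i] += err * x / len(batch)
    # gradient = intermediate result; the step produces the updated model
    new_weights = [w - lr * g for w, g in zip(weights, grad)]
    return new_weights, grad

weights = [0.0, 0.0]
batch = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
weights, grad = train_iteration(weights, batch)
```

Repeating this step over many batches is what the description calls iterative training; only `grad`, not the raw batch, needs to leave the unit.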
- a processor running a learning process should be presented with a large and diverse example input data set.
- Embodiments of the invention provide a method and system for creating large and diverse data sets, for example data sets of images or image data, without having to expend transmission bandwidth and without affecting privacy of imaged occupants.
- a method and system are provided for detecting occupants in images and for otherwise analyzing an imaged scene, using machine learning techniques but without impacting privacy of occupants.
- data collected by one or more sensor units is used, essentially as a growing distributed database, to improve a machine learning system, however without transmitting any of the collected data, thereby avoiding issues such as transmission bandwidth and privacy.
- Some embodiments of the invention enable training a machine learning process by using image based information collected from one or more units, without transmitting any images or visual information, thereby reducing bandwidth utilization and, by preventing access to visual information, avoiding privacy issues.
- a system includes at least one sensor unit and a central unit, e.g., a computing unit, in communication with the sensor unit.
- a local database in (or available to) the sensor unit is used to calculate an intermediate result which is transmitted to the computing unit, and the computing unit calculates a new, typically improved or updated, parameter set based on the intermediate result.
- a parameter set is generated locally at one or more sensor units by running a local training process at each sensor unit. The one or more locally generated parameter sets are then transmitted to the computing unit, and the computing unit calculates a new, typically improved or updated, parameter set based on the locally generated parameter set(s).
- the new parameter set, which may be transmitted back to the one (or more) sensor unit, can then be used by a processor of the sensor unit or other units, for example, to detect an occupant in images of a space or otherwise analyze an imaged scene in new images.
- both the intermediate result and the locally generated parameter set are generated based on data collected by a sensor unit, e.g., image data collected by an image sensor; however, they do not contain the collected data (e.g., contain no visual information) and cannot be used to reconstruct the data (e.g., images).
- by sharing parameter sets and intermediate results, but not actual data (e.g., visual information), a multi-unit system according to embodiments of the invention enables access to a large database of information collected by different units of the system. Since no collected data is being transferred between units, privacy of occupants, for example, in a monitored space, can be maintained.
- transmitting parameter sets and intermediate results, but not collected data, enables access to a large database of information collected by different units of the system without expending bandwidth on transmission of large volumes of data.
- a parameter set may be improved and updated internally within a sensor unit, based on collected data (e.g., image or visual data) but without transmitting the collected data outside of the sensor unit.
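One plausible way the computing unit could combine locally generated parameter sets into the new set — the description later mentions averaging as an example — is a weighted average over units. This is a sketch under that assumption, with all names hypothetical:

```python
# Sketch: combine per-unit parameter sets into a new parameter set by
# (weighted) averaging. `weights` could reflect local database sizes;
# a uniform average is used if none are given.
def combine_parameter_sets(parameter_sets, weights=None):
    n = len(parameter_sets)
    if weights is None:
        weights = [1.0 / n] * n
    new_set = [0.0] * len(parameter_sets[0])
    for params, w in zip(parameter_sets, weights):
        for i, p in enumerate(params):
            new_set[i] += w * p
    return new_set

unit_a = [0.2, 0.8]  # parameter set trained locally at one sensor unit
unit_b = [0.4, 0.6]  # parameter set trained locally at another
combined = combine_parameter_sets([unit_a, unit_b])  # ~[0.3, 0.7]
```

The combined set can then be transmitted back to the sensor units; only parameter vectors cross the network, never collected data.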
- FIG. 1 is a schematic illustration of a system operable according to embodiments of the invention.
- FIG. 2 is a schematic illustration of components of a system and method, according to an embodiment of the invention.
- FIG. 3 is a schematic illustration of a multi-unit system, according to an embodiment of the invention;
- FIG. 4 schematically illustrates a method for machine learning in a multi-unit system, according to embodiments of the invention;
- FIG. 5 schematically illustrates a method for training a sensor unit, according to embodiments of the invention;
- FIG. 6 is a schematic illustration of a sensor unit operable according to embodiments of the invention.
- Embodiments of the invention provide a method and system for creating and using large and diverse data sets without transmitting collected data.
- a method and system for image analysis using machine learning techniques while preventing access to collected images and maintaining privacy of imaged scenes (e.g., scenes including locations and/or people).
- image based information, which does not, however, contain any visual information, is used to update and improve units of a multi-unit system.
- 'Visual information' or 'image data' refer to, inter alia, data such as values that represent the intensity of reflected light, as well as partial or full images or videos, or data that can be used to reconstruct an image.
- determining occupancy may include detecting an occupant and/or monitoring one or more occupants throughout an imaged space, e.g., counting occupants, tracking occupants, determining occupants' location in a space, etc.
- 'Occupant' may refer to any type of occupant, such as a human, animal and/or inanimate object.
- embodiments of the invention are not limited to the field of image analysis or computer vision, and may be applied to other fields.
- a system which includes a first unit, e.g., a computing unit, in communication with a second unit, e.g., a sensor unit which includes a processing unit running a machine learning process.
- the computing unit receives an intermediate result from the sensor unit
- the intermediate result is generated by using a local database at the sensor unit
- an intermediate result may be generated by a calculation process at the sensor unit.
- in another embodiment, a multi-unit system includes a plurality of units, each unit capable of generating a parameter set by running a local training process using a local database of training examples.
- the system also includes a computing unit to receive the parameter sets generated by the units and to combine the parameter sets to generate a new parameter set. The computing unit may then transmit the new parameter set to one or more units of the multi-unit system.
- data, such as image-based, audio-based or other data, is collected by one or more units into a local database.
- the local database includes true image examples of a space which include an occupant and false image examples of the space which do not include an occupant.
- Other true and false image examples may be used according to embodiments of the invention.
- the computing unit calculates an updated parameter set based on an intermediate result and/or locally generated parameter set which was sent to the computing unit from one or more of the sensor units.
- the computing unit then transmits the updated parameter set to one or more of the sensor units.
- the sensor unit(s) uses the updated parameter set in a machine learning process, for example, to detect an occupant in images of a space and/or to determine occupancy in an image of the space.
- a local database of training examples is accumulated automatically in the sensor unit.
- the local database includes true examples, in one embodiment images or image parts which can be identified with high probability to contain an occupant, and false examples, namely, images or image parts identified with high probability not to contain occupants.
- The local database can be created by utilizing computer vision algorithms, by using data from other sensors, or by other means, either manual or automatic.
- An example of a system operable according to embodiments of the invention is schematically illustrated in Fig. 1.
- Sensor unit 103 typically includes an interface 111 for wired or wireless communication with computing unit 105 and with other additional sensor units and/or other additional computing units (not shown).
- Computing unit 105 typically includes an interface 111' to enable communication with sensor unit 103 and other sensor units and/or computing units.
- Communication between units of the system 100 may be through a wired connection (e.g., interfaces 111 and 111' may include a USB or Ethernet port) or a wireless link, such as infrared (IR) communication, radio transmission, Bluetooth technology, ZigBee, Z-Wave and other suitable communication routes.
- the image sensor 113 is associated with a processor 102 and a memory 12, which may be part of the sensor unit 103.
- Processor 102 runs algorithms and processes for image analysis, e.g., to detect an occupant and determine occupancy in the space (e.g., room 104) based on input from image sensor 113.
- Processor 102 may include, for example, one or more processors and may be a central processing unit (CPU), a digital signal processor (DSP), a microprocessor, a controller, a chip, a microchip, an integrated circuit (IC), or any other suitable multipurpose or specific processor or controller.
- Memory unit(s) 12 may include, for example, a random access memory (RAM), a dynamic RAM (DRAM), a flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.
- the processor 102 may run a machine learning process to detect an occupant in images of a space.
- An occupant may be detected based on properties or characteristics such as motion characteristics, shape, color and other properties or a combination of properties, e.g., based on a combination of motion characteristics and shape.
- an occupant exhibits human characteristics, as further detailed below.
- a machine learning process may run a set of algorithms that use multiple processing layers on an image to identify desired image features (image features may include any information obtainable from an image, e.g., the existence of objects or parts of objects, their location, their type and more).
- Each processing layer receives input from the layer below and produces output that is given to the layer above, until the highest layer produces the desired image features.
- Activity of the different processing layers are typically ruled by a parameter set (which may include sets of adaptive weights e.g., numerical parameters which are typically tuned by a learning algorithm).
- a shape (or other property) of an object may be determined enabling the system to detect an occupant based on shape (or motion or color, etc.).
- the image sensor 113 is configured to obtain a top view of a space.
- image sensor 113 may be located on a ceiling of room 104 typically in parallel to the floor of the room to obtain a top view of the room or of part of the room 104.
- processor 102 generates a parameter set via a training process using a local database of training examples.
- Processor 102 may transmit the parameter set or an intermediate result from the parameter set to computing unit 105.
- Computing unit 105 may then calculate a new parameter set based on the parameter set transmitted by processor 102 (and possibly based on parameter sets transmitted by processors from additional units in the system) and/or based on the intermediate result(s).
- the computing unit 105 may then transmit the new parameter set back to sensor unit 103 and/or to another unit in the multi-unit system.
- the system 100 may include a plurality of image sensors 113, each to obtain images of the space 104 (typically each image sensor obtains images of different parts of the space), each image sensor associated with a different sensor unit 103.
- each sensor unit 103 uses the new parameter set transmitted by computing unit 105, to detect an occupant in an image of the space.
- an architecture is installed and maintained at sensor unit 103 and the processor 102 applies a specific parameter set to the architecture maintained at sensor unit 103.
- a system includes a first sensor unit 203 which is in communication with a second unit 205, possibly via a computing unit.
- First sensor unit 203 may transmit image based information (but not visual information) to the second unit 205, the information being used to update units of the system.
- Each update can trigger a gradient descent on the image data input to a learning process at the sensor units.
- Unit 203 may send to unit 205 information such as: timestamp, network ID, gradient, image data batch statistics, etc.
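An update sent from unit 203 to unit 205 might be packaged roughly as below. The field names follow the list above but are assumptions, as is the JSON encoding; the point is that only derived quantities, never pixels, are serialized:

```python
import json
import time

# Sketch: package an intermediate result for transmission between units.
# The message carries only derived quantities -- no image data is included
# and no image can be reconstructed from it.
def build_update_message(network_id, gradient, batch_size):
    return json.dumps({
        "timestamp": time.time(),
        "network_id": network_id,
        "gradient": gradient,                  # intermediate result
        "batch_stats": {"size": batch_size},   # image data batch statistics
    })

msg = build_update_message("net-01", [0.1, -0.2], batch_size=1000)
decoded = json.loads(msg)
```

The receiving unit can decode the message and feed the gradient into its own training or validation process.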
- unit 205 may automatically run validation and other processes or may be triggered by an operator to run validation and other operations.
- true and false image data 206 and 207 are used in a process to generate a local parameter set and/or an intermediate result of the local parameter set.
- true and false image data 206 and 207 are input to a training process 208 and a local parameter set or an intermediate calculation result 210 is generated from the training process 208.
- batches of 1,000 images (or portions of images) are used in training process 208.
- the local parameter set or intermediate calculation result 210 may be transmitted to a second unit of the system, e.g., to second unit 205.
- true and false image data 206 and 207 are generated based on probability scores. For example, images or parts of images that have a probability above a threshold of being true may be saved in a local true database whereas the other images are saved in a local false database.
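That probability-threshold split could be sketched as follows (the threshold value and all names are illustrative):

```python
# Sketch: route images (or image parts) into local true/false databases
# based on a probability score of containing an occupant.
def split_by_probability(scored_images, threshold=0.9):
    true_db, false_db = [], []
    for image, prob in scored_images:
        if prob > threshold:
            true_db.append(image)   # high probability of being true
        else:
            false_db.append(image)  # everything else
    return true_db, false_db

scored = [("img_a", 0.97), ("img_b", 0.40), ("img_c", 0.92)]
true_db, false_db = split_by_probability(scored)
```

Both databases stay local to the sensor unit; only what the training process derives from them is ever transmitted.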
- a local image database 201 of training examples includes image data of a space.
- the database 201 may include true image data 206 which may include images (or portions of images) that include an occupant.
- true image data may include the portion of an image which depicts the occupant or part of the occupant.
- an occupant is an object exhibiting human characteristics, for example, human motion characteristics (e.g., non-repetitive motion, movements within a predetermined size range, etc.).
- an occupant is an object having predetermined shape characteristics, for example, a shape of a human, and in some embodi ments a top vi ew shape of a human.
- an occupant is an object having a predetermined shape and predetermined motion characteristics.
- true image data 206 and/or false image data 207 are generated by taking a snapshot of a tracker (which may be, for example, part of processor 102) that tracks objects in images obtained by image sensor 113.
- an image may be labeled 'true' if the tracked object exists in the images for over 30 seconds and has human motion characteristics. In this case, images obtained within this time frame (e.g., 30 seconds) have a probability above the threshold of being true.
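The 30-second tracker rule above can be expressed as a small predicate (function and parameter names are hypothetical):

```python
# Sketch: label a tracker snapshot 'true' when the tracked object has
# persisted past the duration threshold and moves like a human.
def label_from_tracker(track_duration_s, has_human_motion, min_duration_s=30):
    if track_duration_s > min_duration_s and has_human_motion:
        return "true"
    return "false"

label_from_tracker(45, True)   # long-lived, human-like motion
label_from_tracker(10, True)   # too short-lived to label 'true'
```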
- images from local image database 201 may, in addition or in parallel to being input to training process 208, go through a classification process run by classifier 212 at sensor unit 203.
- the classifier 212 processes the images, e.g., by running them through a machine learning process and generates output 213.
- the output 213 may be used, for example, to detect an occupant or determine occupancy in the space from the images.
- output 213 may include a signal transmitted to a remote device such as an alarm or a device to display information relating to the determination of occupancy.
- the local parameter set or intermediate calculation result 210 obtained through training process 208 at sensor unit 203 is transmitted, possibly via a central unit, to second unit 205.
- the local parameter set or intermediate calculation result 210 generated at a first unit 203 can then be used by networks of different units of the system.
- the local parameter set or intermediate calculation result 210 can be used by training process 218 at second unit 205 (or at another sensor unit of the system), together with images from an image database 211 maintained at the second unit 205 (or at another sensor unit of the system).
- the training process 218, run on architecture 215 at second unit 205, may generate a new, improved or updated parameter set 220.
- the new parameter set 220 is improved or updated because it is generated based on a local parameter set or intermediate calculation result 210 which was generated at a different sensor unit (e.g., unit 203) but used image data that was available only to the second unit 205 and was not available to unit 203.
- new parameter set 220 can be calculated offline at a typically central computing unit, based on parameter sets and/or intermediate results input from a plurality of sensor units.
- a new, typically updated, parameter set can be calculated at a computing unit by combining (e.g., by averaging) several intermediate results and/or parameter sets.
- New parameter set 220 may be transmitted to sensor unit 203 and/or to other units where the new parameter set 220 is used with machine learning processes.
- Local parameter sets or intermediate calculation results 210 and the new parameter set 220 are generated based on image data (e.g., images from local database 201); however, they contain no visual information and cannot be used to reconstruct images. Since intermediate results and/or parameter sets, but no visual information, are transferred between units of the system, access to the visual information is prevented, thus maintaining privacy of imaged occupants.
- Images of a space collected at each sensor unit are processed within the sensor unit (e.g., by processing unit 202 and/or by classifier 212) and are not transmitted out of the sensor unit.
- the methods and systems according to embodiments of the invention enable access to a large database of information collected by several different units (e.g., first unit 203 and second unit 205) but since no visual information is accessible, privacy and/or other rights of imaged occupants are not violated.
- a parameter set (e.g., new parameter set 220) is calculated by using a training process (e.g., training process 218) at the second unit 205; however, other methods of calculating a parameter set, based on one or more local parameter sets or intermediate results, may be used according to embodiments of the invention.
- calculation of a parameter set includes using inputs from a plurality of sensor units.
- a new parameter set 220 may be calculated based on the local parameter set or intermediate calculation result 210 sent from sensor unit 203 and based on additional parameter sets and/or intermediate results sent from additional sensor units in the system.
- a new parameter set is transmitted to a sub-set of units of the multi-unit system.
- a central computing unit can generate a plurality of differing new parameter sets, and transmit each different new parameter set to a different sub-set of units of the multi-unit system, possibly based on predetermined criteria or based on criteria determined in real-time.
- a multi-unit system 300 includes a plurality of units 301, 302 and 303 or sub-sets of units.
- Unit 301 can receive a local parameter set or intermediate result A from unit 302 and local parameter set or intermediate result B from unit 303 and may calculate a different parameter set C based on the plurality of (possibly different) local parameter sets or intermediate results (A and B) received.
- unit 301 calculates several different parameter sets. For example, based on local parameter sets or intermediate results A and B (or even just based on one of A or B), unit 301 may calculate parameter sets C and D. Unit 301 may transmit each of the different parameter sets to a different unit in the system. For example, based on local parameter sets or intermediate results input to unit 301, a processor in unit 301 may calculate parameter set C, which is more suitable for unit 302, and parameter set D, which is more suitable for unit 303. Thus, unit 301 may transmit parameter set C to unit 302 to update unit 302 and parameter set D to unit 303 to update unit 303.
- the decision of which parameter set is suitable for which unit or sub-set of units may be based on predetermined criteria, such as geographical or other location of the sub-set, or based on criteria developed in real-time, for example, based on content of images obtained at a sub-set of units.
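Routing different parameter sets to different sub-sets of units by such a criterion might look like the following sketch (the region-based criterion, the fallback, and all names are assumptions):

```python
# Sketch: assign each unit the parameter set matching its region,
# falling back to a "default" set for units with no matching criterion.
def route_parameter_sets(parameter_sets_by_region, units):
    assignments = {}
    for unit_id, region in units.items():
        assignments[unit_id] = parameter_sets_by_region.get(
            region, parameter_sets_by_region["default"])
    return assignments

sets = {"indoor": [0.1, 0.9], "outdoor": [0.7, 0.3], "default": [0.5, 0.5]}
units = {"unit_302": "indoor", "unit_303": "outdoor", "unit_304": "lobby"}
assignments = route_parameter_sets(sets, units)
```

A real-time criterion (e.g., based on image content statistics) could replace the static region lookup without changing the overall shape of the routing step.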
- the locally generated parameter set or intermediate result is received in a first unit of a multi-unit system and the new parameter set, which is calculated by a processor of the first unit, is transmitted from the first unit to a second unit of the system to update the machine learning process and/or classification process at the second unit.
- the method includes calculating a plurality of (different) parameter sets based on one or more locally generated parameter sets or intermediate results.
- each of the calculated parameter sets can be transmitted to a different unit or different sub-set of units of the system.
- calculating a new parameter set includes running a training process using a received intermediate result. In another embodiment calculating a new parameter set includes combining (e.g., by calculating an average) locally generated parameter sets. Other or additional mathematical functions may be used to calculate the new parameter set.
- the method is for image-based machine learning in a multi-unit system.
- the intermediate result may be generated by using input images which include true images that include an occupant and false images that do not include an occupant.
- a database of training input is generated by a processor using image analysis processes for occupancy detection.
- a processor is used to label an image (or a portion of an image) from a sequence of images as 'true' or 'false'. These automatically labeled images (or portions) may then be used (e.g., in a training process) in a sensor to detect occupancy.
- a training process may generate a parameter set to be used by a machine learning process at the sensor and the sensor may thus use the parameter set to classify new images from a new sequence of images and a determination of occupancy may be generated based on this classification.
- a method for determining occupancy in a space may include using a processor to label image data from a first sequence of images based on motion detection in the first sequence of images. The labeled image data may then be used to generate a parameter set for a machine learning process. The machine learning process or parameter set may be used to classify images from a second sequence of images and a determination of occupancy may be generated based on the classification.
- an image analysis process is applied to one or more images from a sequence of images of a space (502) to determine if an occupant or part of an occupant is depicted in a first image (or portion of the image) from the sequence of images. If the first image (or portion of image) includes an occupant (or part of an occupant) (504), then the first image (or portion of image) is labeled 'true' (506). If the first image (or portion of image) does not include an occupant (or part of an occupant), then the image is labeled 'false' (508). Each labeled image (or portion of image) may then be saved in an appropriate database to be used in a machine learning training process. This automatic labeling process may be repeated for a second (and for additional) image from the sequence of images.
- True and false images may be determined by having a probability above a threshold, e.g., as described above.
- An image analysis process for occupancy detection may include motion detection and/or shape detection and/or other image analysis techniques.
- an object suspected to be an occupant is detected in a first image from a sequence of images
- the object may be tracked throughout later images of the sequence and the first image can be labeled based on the tracking (e.g., if the tracking revealed motion typical of a human, the first image may then be labeled 'true').
- an image may be labeled by applying a shape detection algorithm on the image (or several images) (e.g., if an image (or portion of image) includes an object having a shape of a human, then that image may be labeled 'true').
- an image may be labeled by applying a combination of algorithms, e.g., shape detection and motion detection algorithms.
- the automatically labeled images may be input to a training process, which may generate a local parameter set or an intermediate result for a machine learning process to use in order to classify images from a second (typically later) sequence of images to determine occupancy in a space based on the second sequence of images.
- a device 600 (which may be, for example, a stand-alone unit or a unit in a multi-unit system) may include a processor 602 to label an image (or portion of an image) from a sequence of images of a space and to self-train by using a first sequence of images to improve classification of a second sequence of images, without having to transmit or receive images from an external source.
- the device 600 includes processor 602 and image sensor 603 which is in communication with the processor 602.
- the image sensor 603 may be remote and not necessarily part of the device 600.
- image sensor 603 captures a first sequence of images which may be kept in a first local image database 661.
- the images from database 661 are processed by processor 602 such that images (or portions of the images) are automatically labeled (612) (e.g., true images contain an occupant whereas false images do not contain an occupant, true images contain a predetermined shape (e.g., a shape of a standing or sitting occupant) whereas false images do not contain the predetermined shape, etc.).
- the labeled images are then input to a process, e.g., a training process (613) run by processor 602 and a first parameter set or an intermediate result 614 is generated based on the training process (613).
- the first parameter set or intermediate result 614 is then used in a machine learning process, for example, as described above, for classifying images from a second sequence of images.
- the second sequence of images may include images obtained from image sensor 603 at a later time than the first sequence of images and which may be kept in a second image database 662.
- images or parts of images from database 662 are classified (615) to determine occupancy (616).
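The device's self-training flow — auto-label the first sequence (612), train (613) to produce a parameter (614), then classify the second sequence (615) to determine occupancy (616) — can be sketched end-to-end with scalar scores standing in for images. Everything below is illustrative: the "parameter" is a simple decision boundary, not the patent's actual machine learning model:

```python
# Sketch of on-device self-training: no image ever leaves the device.
# Assumes the first sequence contains both true-labeled and false-labeled
# scores under the initial threshold.
def self_train_and_classify(first_sequence, second_sequence, threshold=0.5):
    # step 1 (612): automatic labeling of the first sequence
    labeled = [(score, score > threshold) for score in first_sequence]
    # step 2 (613/614): "training" -- derive a boundary from labeled data
    true_scores = [s for s, is_true in labeled if is_true]
    false_scores = [s for s, is_true in labeled if not is_true]
    boundary = (sum(true_scores) / len(true_scores)
                + sum(false_scores) / len(false_scores)) / 2
    # step 3 (615/616): classify the second sequence to determine occupancy
    return [score > boundary for score in second_sequence]

occupancy = self_train_and_classify([0.9, 0.8, 0.1, 0.2], [0.85, 0.15])
```

The derived boundary plays the role of the first parameter set 614; later sequences are classified against it without any external communication.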
- an occupancy signal may be output.
- the output 617 may be a signal such as an audio or visual signal to alert an operator.
- the output 617 may be a signal transmitted to a remote device such as an alarm or a device to display information relating to the determination of occupancy (616).
- the output 617 may be a signal to operate or modulate an HVAC (heating, ventilation and air conditioning) device or other environmental comfort devices, based on the determination of occupancy (616).
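Steps 615-617 could be sketched as follows, assuming a linear parameter set (weights, bias) and a hypothetical HVAC control string standing in for the output signal; the names, threshold, and any-image occupancy rule are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of steps 615-617: images from the second sequence
# (standing in for database 662) are classified with a previously learned
# parameter set, occupancy is determined, and an output signal is produced.

def classify(image, param_set):
    # Linear score against the unit's learned parameters (weights, bias).
    w, b = param_set
    return sum(wi * xi for wi, xi in zip(w, image)) + b > 0

def occupancy_output(second_seq, param_set):
    # Occupancy (616) if any image in the sequence classifies as occupied;
    # the returned string stands in for the output signal (617).
    occupied = any(classify(img, param_set) for img in second_seq)
    return "HVAC_ON" if occupied else "HVAC_SETBACK"

# Example parameter set (weights favouring bright pixels) and second
# sequences captured at a later time than the first.
params = ([1.0, 1.0, 1.0, 1.0], -2.0)
print(occupancy_output([[0.1, 0.0, 0.2, 0.1]], params))  # prints "HVAC_SETBACK"
print(occupancy_output([[0.9, 0.8, 0.9, 0.9]], params))  # prints "HVAC_ON"
```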
- images and/or visual information are processed within units (stand-alone units or units that are part of a multi-unit system) and may be used to improve image analysis (e.g., by running a training process at a learning machine).
- the images and/or visual information are not transmitted out of the unit, thereby avoiding violation of rights related to the images and reducing transmission bandwidth utilization.
- the stand-alone units and/or multi-unit system enable access to a large database of information collected over time or from different units of the system, while maintaining the privacy of occupants in a monitored space.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biodiversity & Conservation Biology (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662426541P | 2016-11-27 | 2016-11-27 | |
US62/426,541 | 2016-11-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018096544A1 (en) | 2018-05-31 |
Family
ID=62195775
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IL2017/051289 WO2018096544A1 (en) | 2016-11-27 | 2017-11-27 | Machine learning in a multi-unit system |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018096544A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140088989A1 (en) * | 2012-09-27 | 2014-03-27 | Balaji Krishnapuram | Rapid Learning Community for Predictive Models of Medical Knowledge |
US20150170053A1 (en) * | 2013-12-13 | 2015-06-18 | Microsoft Corporation | Personalized machine learning models |
US20150193695A1 (en) * | 2014-01-06 | 2015-07-09 | Cisco Technology, Inc. | Distributed model training |
US20150279051A1 (en) * | 2012-09-12 | 2015-10-01 | Enlighted, Inc. | Image detection and processing for building control |
US20160148044A1 (en) * | 2012-05-10 | 2016-05-26 | Pointgrab Ltd. | System and method for computer vision based tracking of an object |
- 2017-11-27 WO PCT/IL2017/051289 patent/WO2018096544A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
"Product description, Pointgrab,", HTTPS://WEB.ARCHIVE.ORG/WEB/20161031113337/HTTP://WWW.POINTGRAB.COM/PRODUCT, 31 October 2016 (2016-10-31), XP055488077, Retrieved from the Internet <URL:https://web.archive.org/web/20161031113337/http://www.pointgrab.com/product> * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11710075B2 (en) | Hazard recognition | |
US20180137369A1 (en) | Method and system for automatically managing space related resources | |
JP6905850B2 (en) | Image processing system, imaging device, learning model creation method, information processing device | |
US10049304B2 (en) | Method and system for detecting an occupant in an image | |
US8208028B2 (en) | Object verification device and object verification method | |
US10748024B2 (en) | Method and system for detecting a person in an image based on location in the image | |
CN110925969B (en) | Air conditioner control method and device, electronic equipment and storage medium | |
Chen et al. | A fall detection system based on infrared array sensors with tracking capability for the elderly at home | |
KR102478335B1 (en) | Image Analysis Method and Server Apparatus for Per-channel Optimization of Object Detection | |
US10395124B2 (en) | Thermal image occupant detection | |
US10205891B2 (en) | Method and system for detecting occupancy in a space | |
DE102014203749A1 (en) | Method and device for monitoring at least one interior of a building and assistance system for at least one interior of a building | |
US20240046701A1 (en) | Image-based pose estimation and action detection method and apparatus | |
JP4813205B2 (en) | Video surveillance system and video concentrator | |
Hillyard et al. | Never use labels: Signal strength-based Bayesian device-free localization in changing environments | |
US11256910B2 (en) | Method and system for locating an occupant | |
CN114972727A (en) | System and method for multi-modal neural symbol scene understanding | |
CN109727417A (en) | Video processing unit is controlled to promote the method and controller of detection newcomer | |
US11281899B2 (en) | Method and system for determining occupancy from images | |
WO2018096544A1 (en) | Machine learning in a multi-unit system | |
US20180268554A1 (en) | Method and system for locating an occupant | |
WO2022059223A1 (en) | Video analyzing system and video analyzing method | |
CN112347834B (en) | Remote nursing method, equipment and readable storage medium based on personnel category attribute | |
Chatisa et al. | Object Detection and Monitor System for Building Security Based on Internet of Things (IoT) Using Illumination Invariant Face Recognition | |
US20170220870A1 (en) | Method and system for analyzing occupancy in a space |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17873433 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17873433 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.02.2020) |
|