SE545091C2 - Detecting changes in a physical space - Google Patents

Detecting changes in a physical space

Info

Publication number
SE545091C2
Authority
SE
Sweden
Prior art keywords
image
physical space
instructions
map
detector
Prior art date
Application number
SE1951157A
Other languages
Swedish (sv)
Other versions
SE1951157A1 (en)
Inventor
Gustav Ryd
Kenneth Pernyer
Original Assignee
Assa Abloy Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Assa Abloy Ab filed Critical Assa Abloy Ab
Priority to SE1951157A priority Critical patent/SE545091C2/en
Priority to PCT/EP2020/078385 priority patent/WO2021069649A1/en
Publication of SE1951157A1 publication Critical patent/SE1951157A1/en
Publication of SE545091C2 publication Critical patent/SE545091C2/en

Links

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Abstract

It is provided a method for detecting changes in a physical space. The method is performed by an object detector and comprises the steps of: obtaining an image of at least part of the physical space; detecting and classifying at least one object in the image; determining a position of each detected object; storing, in an object map, the classification and position of each detected object; repeating the steps of obtaining an image, detecting and classifying, determining a position and storing, respectively, for a plurality of images; and comparing the object map of an image with an object map of a previous image of the physical space, to thereby detect when there is a change in an object in the physical space.

Description

TECHNICAL FIELD
[0001] The present disclosure relates to the field of detection of one or more objects in a physical space and in particular to detecting changes to such one or more objects.
BACKGROUND
[0002] People are increasingly consuming services to be provided directly at a property, such as a home or commercial property. The service can e.g. be a delivery of a product, a cleaning service, a builder/plumber/electrician, assisted living, etc. The person providing the service is here denoted a service agent and the company providing the service is denoted service provider.
[0003] The services can sometimes be provided even when no person is present at the property. Access to the property can be provided e.g. using remote controlled locks.
[0004] However, consuming services in this way does present a risk, e.g. that the service agent could potentially steal one or more items from the premises.
[0005] This is not only an issue for the owner of the premises, who is worried that a theft may occur, but also for the service provider, who could be illegitimately accused of theft.
[0006] One way to reduce the risk of thefts and theft accusations is to deploy camera supervision of the property. However, camera supervision may require explicit consent from service providers, and it is time consuming and inefficient to examine a large amount of image material when a theft has occurred.
SUMMARY
[0007] One objective is to improve detection of objects in a physical space over time.
[0008] According to a first aspect, it is provided a method for detecting changes in a physical space. The method is performed by an object detector and comprises the steps of: obtaining an image of at least part of the physical space; detecting and classifying at least one object in the image; determining a position of each detected object; storing, in an object map, the classification and position of each detected object; repeating the steps of obtaining an image, detecting and classifying, determining a position and storing, respectively, for a plurality of images; and comparing the object map of an image with an object map of a previous image of the physical space, to thereby detect when there is a change in an object in the physical space.
[0009] Each iteration of obtaining an image may comprise obtaining an image with essentially the same camera angle.
[0010] Each iteration of obtaining an image may comprise obtaining an image from a mobile camera. In this case, the step of determining a position of the object comprises determining the position based on a machine learning model.
[0011] The method may be triggered to begin when a person is detected in the physical space.
[0012] The method may be triggered by receiving a trigger signal from an external system.
[0013] The object map may be associated with a single image. In this case, the object map has a timestamp corresponding to a capturing time of the associated image.
[0014] The image may comprise depth information.
[0015] According to a second aspect, it is provided an object detector for detecting changes in a physical space. The object detector comprises: a processor; and a memory storing instructions that, when executed by the processor, cause the object detector to: obtain an image of at least part of the physical space; detect and classify at least one object in the image; determine a position of each detected object; store, in an object map, the classification and position of each detected object; repeat the instructions to obtain an image, detect and classify, determine a position and store, respectively, for a plurality of images; and compare the object map of an image with an object map of a previous image of the physical space, to thereby detect when there is a change in an object in the physical space.
[0016] The instructions may be repeated and each iteration of the instructions to obtain an image comprise instructions that, when executed by the processor, cause the object detector to obtain an image with essentially the same camera angle.
[0017] The instructions may be repeated and each iteration of the instructions to obtain an image comprise instructions that, when executed by the processor, cause the object detector to obtain an image from a mobile camera. The instructions to determine a position of the object then comprise instructions that, when executed by the processor, cause the object detector to determine the position based on a machine learning model.
[0018] The instructions may be triggered to begin when a person is detected in the physical space.
[0019] The instructions may be triggered by receiving a trigger signal from an external system.
[0020] The object map may be associated with a single image, in which case the object map has a timestamp corresponding to a capturing time of the associated image.
[0021] The image may comprise depth information.
[0022] According to a third aspect, it is provided a computer program for detecting changes in a physical space. The computer program comprises computer program code which, when run on an object detector, causes the object detector to: obtain an image of at least part of the physical space; detect and classify at least one object in the image; determine a position of each detected object; store, in an object map, the classification and position of each detected object; repeat the computer program code to obtain an image, detect and classify, determine a position and store, respectively, for a plurality of images; and compare the object map with an object map of a previous image of the physical space, to thereby detect when there is a change in an object in the physical space.
[0023] According to a fourth aspect, it is provided a computer program product comprising a computer program according to the third aspect and a computer readable means on which the computer program is stored.
[0024] Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to "a/an/the element, apparatus, component, means, step, etc." are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] Aspects and embodiments are now described, by way of example, with reference to the accompanying drawings, in which:
[0026] Fig 1 is a schematic diagram illustrating an environment in which embodiments presented herein can be applied;
[0027] Figs 2A-C are schematic diagrams illustrating different changes in objects which can occur, compared to the example of Fig 1;
[0028] Figs 3A-C are schematic diagrams illustrating embodiments of where the object detector can be implemented;
[0029] Fig 4 is a flow chart illustrating embodiments of methods for detecting changes in a physical space;
[0030] Fig 5 is a schematic diagram illustrating components of the object detector of Figs 3A-C; and
[0031] Fig 6 shows one example of a computer program product 90 comprising computer readable means.
DETAILED DESCRIPTION
[0032] The aspects of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the invention are shown. These aspects may, however, be embodied in many different forms and should not be construed as limiting; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and to fully convey the scope of all aspects of the invention to those skilled in the art. Like numbers refer to like elements throughout the description.
[0033] Fig 1 is a schematic diagram illustrating an environment in which embodiments presented herein can be applied. A physical space 15 is here shown in the form of a room. It is to be noted that the physical space can be any other type of physical space, e.g. house, garden, office, factory, storage space, etc.
[0034] In the physical space 15, objects 5a-d are provided. The objects 5a-d are physical objects. In this example, there is a first object 5a in the form of a bag and a second object 5b in the form of a cone-shaped toy. A third object 5c is in the form of a wallet, which lies on a fourth object 5d in the form of a table. There can be more or fewer objects in the physical space 15; the objects 5a-d shown in Fig 1 only form part of an example.
[0035] An imaging device 2 is positioned to capture images of at least part of the physical space 15. The imaging device 2 can be a traditional digital camera which captures images in two dimensions, or a three-dimensional image capturing device, e.g. based on Lidar, radar or stereo imaging (dual cameras). The imaging device 2 is connected to an object detector 1 which is configured to detect and classify objects in images captured by the imaging device, as described in more detail below. While the imaging device 2 is shown in a fixed position in Fig 1, the imaging device 2 could also be a portable device, such as a smartphone.
[0036] Optionally, the object detector 1 is connected to a wide-area network 7, such as the Internet. In this way, the object detector 1 can communicate with a server 3, which is also connected to the wide-area network 7. The server 3 can be implemented as a single device or over several devices. The server 3 can form part of what is commonly known as the cloud.
[0037] As explained in more detail below, images captured by the imaging device 2 are analysed by the object detector 1. Objects are identified and positioned, and this information is stored in an object map. The object map is implemented using any suitable data structure which can hold information about objects for a particular point in time. Changes in any of the objects can be detected by comparing object maps from different images, relating to different points in time.
[0038] Figs 2A-C are schematic diagrams illustrating different changes in objects which can occur, compared to the scenario of Fig 1.
[0039] In Fig 2A, it is illustrated how, compared to the example of Fig 1, the third object 5c (the wallet) has been removed from the physical space 15. This can e.g. be due to someone having stolen the wallet or that the owner of the wallet has taken it out of the physical space 15.
[0040] In Fig 2B, it is illustrated how, compared to the example of Fig 1, the first object 5a (the bag) is in a different orientation. This can e.g. be due to a thief having picked the bag up and stolen items from within the bag and put the bag back, or that the owner of the bag has changed the orientation of the bag.
[0041] In Fig 2C, it is illustrated how, compared to the example of Fig 1, a fifth object 5e, in the form of a box, has been placed within the physical space 15. The fifth object 5e can e.g. be a delivery of an item from a courier firm. Using embodiments presented herein, it can also be detected if an object is removed and it is later provided back in the same place again, e.g. the third object in the form of a wallet.
[0042] Figs 3A-C are schematic diagrams illustrating embodiments of where the object detector 1 can be implemented.
[0043] In Fig 3A, the object detector 1 is shown as implemented in the imaging device 2. The imaging device 2 is thus the host device for the object detector 1 in this embodiment. In this embodiment, the communication between the imaging device 2 and the object detector is internal communication, whereby the reliance on network availability is reduced or even eliminated.
[0044] In Fig 3B, the object detector 1 is shown as implemented in the server 3. The server 3 is thus the host device for the object detector 1 in this embodiment. In thisembodiment, the object detector 1 can be used for a large number of imaging devices and corresponding physical spaces.
[0045] In Fig 3C, the object detector 1 is shown as implemented as a stand-alone device. The object detector 1 thus does not have a host device in this embodiment.
[0046] Fig 4 is a flow chart illustrating embodiments of methods for detecting changes in a physical space. The method is performed in the object detector 1.
[0047] The method can be triggered to begin when a person is detected in the physical space. In this way, any changes to objects are likely to be captured using this method. Alternatively or additionally, the method is triggered when a previously detected person is not detectable anymore. In other words, a change in presence of people (either the entry of people or the exit of people) in the physical space can trigger the method to be started. In this way, any changes to objects (e.g. moved, added, removed) can be detected and associated with the person coming or going. This provides a useful context in which to detect object changes. The detection of one or more people can be based e.g. on an infrared camera, a sensor detecting when a door is opened and/or closed, and/or when a lock is disengaged and/or engaged.
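The presence-change trigger described above (start the method on either the entry or exit of people) can be sketched as follows. This is an illustrative sketch only: the class name and the boolean-sensor abstraction are assumptions, and the underlying person sensor (infrared camera, door sensor, lock state) is deliberately left out.

```python
class PresenceTrigger:
    """Signals when the detection method should start, based on a
    change in people presence (entry or exit) in the physical space."""

    def __init__(self):
        self._present = False  # no person assumed present initially

    def update(self, person_detected: bool) -> bool:
        """Feed the latest sensor reading; return True on a presence
        transition, i.e. whenever the method should be (re)started."""
        changed = person_detected != self._present
        self._present = person_detected
        return changed
```

A trigger fires once on entry and once again on exit, so object maps can be captured both before and after a visit.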
[0048] Alternatively or additionally, the method is triggered by receiving a trigger signal from an external system, e.g. an alarm system triggering the method when an alarm is triggered.
[0049] In an obtain image step 40, the object detector obtains an image of at least part of the physical space.
[0050] In a detect & classify object(s) step 42, the object detector detects and classifies at least one object in the image. For instance, detected objects can be classified to be a certain type of object, e.g. chair, table, bag, wallet, set of keys, etc. This detection and classification can be based on machine learning algorithms. Even a non-deterministic classification of an object can be useful, such as an object with a specific shape and size. This can be represented by a point cloud.
[0051] In a determine position step 44, the object detector determines a position of each detected object. The position can be a three-dimensional (3D) position or a two-dimensional (2D) position in the plane of the image. In order to obtain three-dimensional position information, the image can comprise depth information, obtained e.g. from a Lidar sensor, a radar sensor, a 3D camera sensor, using stereographic imaging, structure from motion, etc.
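Steps 42 and 44 can be sketched together as a single function. The `run_detector` callable stands in for whatever machine learning detection model is used; its name and its assumed output format (label plus bounding box) are illustrative assumptions and not part of the disclosure.

```python
def detect_and_classify(image, run_detector):
    """Return a list of (classification, position) pairs for one image.

    `run_detector` is assumed to yield (label, (x0, y0, x1, y1)) pairs,
    i.e. a class label and a bounding box in image coordinates.
    """
    results = []
    for label, (x0, y0, x1, y1) in run_detector(image):
        # Use the bounding-box centre as the 2D position in the image
        # plane; with depth information a 3D position could be derived.
        position = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
        results.append((label, position))
    return results
```

The same structure works for 3D positions: with a depth channel, the centre pixel's depth value would extend `position` to three coordinates.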
[0052] In a store in object map step 46, the object detector stores, in an object map, the classification and position of each detected object. For instance, it can be stored that a bag is located centred around a 3D position x, y and z. The object map is associated with a single image, the one currently being processed. The object map is a logical representation of objects that have been detected and classified in the image. The object map can have a timestamp corresponding to the capturing time of the associated image.
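The object map of step 46 is a logical representation and the patent leaves its data structure open; one minimal sketch, with all type and field names being assumptions, is:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DetectedObject:
    classification: str   # e.g. "bag", "wallet", "table"
    position: tuple       # 2D (x, y) in the image plane, or 3D (x, y, z)

@dataclass
class ObjectMap:
    """Holds the objects detected in one image, with the capturing
    time of that image as a timestamp (paragraph [0052])."""
    timestamp: float
    objects: list = field(default_factory=list)
```

Each processed image yields one `ObjectMap`, so a sequence of maps forms a timeline of the physical space.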
[0053] In a conditional more image(s) step 48, the object detector determines if there are any more images to process. If this is the case, the method returns to the obtain image step 40. Otherwise, the method proceeds to a compare object map step 50.
[0054] In the compare object map step 50, the object detector compares the object map with an object map of a previous image of the physical space. In this way, the object detector can detect when there is a change in an object (or several objects) in the physical space, e.g. any one or more of the changes illustrated in Figs 2A-C. Optionally, step 50 is performed for every new image or only every few images.
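The comparison of step 50 can be sketched as follows. The patent leaves the matching strategy open; matching objects of the same classification to their nearest same-class counterpart, with a position `tolerance` for deciding that an object has moved, is an assumption made here for illustration.

```python
def compare_object_maps(previous, current, tolerance=0.1):
    """Compare two object maps, given as lists of
    (classification, position) pairs, and report the changes.

    Returns a dict with:
      'removed' - objects in the previous map with no same-class match
      'added'   - objects in the current map left unmatched
      'moved'   - matched objects whose position changed beyond tolerance
    """
    def dist(a, b):
        return max(abs(p - q) for p, q in zip(a, b))

    unmatched = list(current)
    removed, moved = [], []
    for cls, pos in previous:
        candidates = [o for o in unmatched if o[0] == cls]
        if not candidates:
            removed.append((cls, pos))      # e.g. the stolen wallet of Fig 2A
            continue
        best = min(candidates, key=lambda o: dist(o[1], pos))
        unmatched.remove(best)
        if dist(best[1], pos) > tolerance:
            moved.append((cls, pos, best[1]))  # e.g. the reoriented bag of Fig 2B
    return {"removed": removed, "added": unmatched, "moved": moved}
```

Objects left in `added` correspond to the new-delivery case of Fig 2C.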
[0055] The method is repeated and each iteration of obtaining an image can comprise obtaining an image with essentially the same camera angle.
[0056] Using the embodiments presented herein, changes to objects in the physical space are detected by analysing the object map. This can be used to detect when objects disappear, e.g. thefts, and when objects appear, e.g. deliveries. By associating the object map with a time of the image, a time window of when the object change occurred can also be determined. In this way, it can be deduced if a particular service provider agent was involved in the object change, when the time when the service provider agent was in the physical space is known. This can clear suspicions of an innocent service provider agent and can pinpoint a guilty party.
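The deduction described above, matching the time window of a detected change against known visit times of service agents, can be sketched as a simple interval-overlap check. The visit-log format (agent name mapped to an arrival/departure pair) is an illustrative assumption.

```python
def agents_in_window(change_window, visits):
    """Given the (start, end) window in which an object change occurred,
    return the agents whose recorded visit overlaps that window.

    `visits` maps an agent name to an (arrival, departure) pair;
    agents outside the window are cleared of suspicion.
    """
    lo, hi = change_window
    return sorted(a for a, (s, e) in visits.items() if s < hi and e > lo)
```

An agent whose visit does not overlap the change window could not have caused the change, which is precisely what allows an innocent service provider agent to be cleared.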
[0057] Fig 5 is a schematic diagram illustrating components of the object detector of Figs 3A-C. It is to be noted that one or more of the mentioned components can be shared with the host device, when present. A processor 60 is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), etc., capable of executing software instructions 67 stored in a memory 64, which can thus be a computer program product. The processor 60 could alternatively be implemented using an application specific integrated circuit (ASIC), field programmable gate array (FPGA), etc. The processor can be configured to execute the method described with reference to Fig 4 above.
[0058] The memory 64 can be any combination of random-access memory (RAM) and/or read-only memory (ROM). The memory 64 also comprises persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid-state memory or even remotely mounted memory.
[0059] A data memory 66 is also provided for reading and/or storing data during execution of software instructions in the processor 60. The data memory 66 can be any combination of RAM and/or ROM.
[0060] The object detector further comprises an I/O interface 62 for communicating with external and/or internal entities. Optionally, the I/O interface 62 also includes a user interface.
[0061] Other components of the object detector 1 are omitted in order not to obscure the concepts presented herein.
[0062] Fig 6 shows one example of a computer program product 90 comprising computer readable means. On this computer readable means, a computer program 91 can be stored, which computer program can cause a processor to execute a method according to embodiments described herein. In this example, the computer program product is an optical disc, such as a CD (compact disc) or a DVD (digital versatile disc) or a Blu-Ray disc. As explained above, the computer program product could also be embodied in a memory of a device, such as the computer program product 64 of Fig 5. While the computer program 91 is here schematically shown as a track on the depicted optical disk, the computer program can be stored in any way which is suitable for the computer program product, such as a removable solid-state memory, e.g. a Universal Serial Bus (USB) drive.
[0063] The aspects of the present disclosure have mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims. Thus, while various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

Claims (16)

1. A method for detecting changes in a physical space (15), the method being performed by an object detector (1) and comprising the steps of: obtaining (40) an image of at least part of the physical space (15); detecting (42) and classifying at least one object in the image, wherein the classifying comprises classifying the at least one object to be a certain type of object; determining (44) a position of each detected object; storing (46), in an object map, the classification and position of each detected object; repeating (48) the steps of obtaining (40) an image, detecting (42) and classifying, determining (44) a position and storing (46), respectively, for a plurality of images; and comparing (50) the object map of an image with an object map of a previous image of the physical space (15), to thereby detect when there is a change in an object in the physical space (15).
2. The method according to claim 1, wherein each iteration of obtaining (40) an image comprises obtaining an image with essentially the same camera angle.
3. The method according to claim 1, wherein each iteration of obtaining (40) an image comprises obtaining an image from a mobile camera; and wherein the step of determining (44) a position of the object comprises determining the position based on a machine learning model.
4. The method according to any one of claims 1 to 3, wherein the method is triggered to begin when a person is detected in the physical space (15).
5. The method according to any one of claims 1 to 3, wherein the method is triggered by receiving a trigger signal from an external system.
6. The method according to any one of the preceding claims, wherein the object map is associated with a single image, and wherein the object map has a timestamp corresponding to a capturing time of the associated image.
7. The method according to any one of the preceding claims, wherein the image comprises depth information.
8. An object detector (1) for detecting changes in a physical space (15), the object detector (1) comprising: a processor (60); and a memory (64) storing instructions (67) that, when executed by the processor, cause the object detector to: obtain an image of at least part of the physical space (15); detect and classify an object in the image, wherein the classifying comprises classifying the at least one object to be a certain type of object; determine a position of each detected object; store, in an object map, the classification and position of each detected object; repeat the instructions to obtain an image, detect and classify, determine a position and store, respectively, for a plurality of images; and compare the object map of an image with an object map of a previous image of the physical space (15), to thereby detect when there is a change in an object in the physical space (15).
9. The object detector (1) according to claim 8, wherein the instructions are repeated and each iteration of the instructions to obtain an image comprise instructions (67) that, when executed by the processor, cause the object detector to obtain an image with essentially the same camera angle.
10. The object detector (1) according to claim 8, wherein the instructions are repeated and each iteration of the instructions to obtain an image comprise instructions (67) that, when executed by the processor, cause the object detector to obtain an image from a mobile camera; and wherein the instructions to determine a position of the object comprise instructions (67) that, when executed by the processor, cause the object detector to determine the position based on a machine learning model.
11. The object detector (1) according to claim 8 or 9, wherein the instructions are triggered to begin when a person is detected in the physical space (15).
12. The object detector (1) according to claim 8 or 9, wherein the instructions are triggered by receiving a trigger signal from an external system.
13. The object detector (1) according to any one of claims 8 to 12, wherein the object map is associated with a single image, and wherein the object map has a timestamp corresponding to a capturing time of the associated image.
14. The object detector (1) according to any one of claims 8 to 13, wherein the image comprises depth information.
15. A computer program (67, 91) for detecting changes in a physical space (15), the computer program comprising computer program code which, when run on an object detector, causes the object detector to: obtain an image of at least part of the physical space (15); detect and classify at least one object in the image, wherein the classifying comprises classifying the at least one object to be a certain type of object; determine a position of each detected object; store, in an object map, the classification and position of each detected object; repeat the computer program code to obtain an image, detect and classify, determine a position and store, respectively, for a plurality of images; and compare the object map with an object map of a previous image of the physical space (15), to thereby detect when there is a change in an object in the physical space (15).
16. A computer program product (64, 90) comprising a computer program according to claim 15 and a computer readable means on which the computer program is stored.
SE1951157A 2019-10-11 2019-10-11 Detecting changes in a physical space SE545091C2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SE1951157A SE545091C2 (en) 2019-10-11 2019-10-11 Detecting changes in a physical space
PCT/EP2020/078385 WO2021069649A1 (en) 2019-10-11 2020-10-09 Detecting changes in a physical space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
SE1951157A SE545091C2 (en) 2019-10-11 2019-10-11 Detecting changes in a physical space

Publications (2)

Publication Number Publication Date
SE1951157A1 SE1951157A1 (en) 2021-04-12
SE545091C2 true SE545091C2 (en) 2023-03-28

Family

ID=72840541

Family Applications (1)

Application Number Title Priority Date Filing Date
SE1951157A SE545091C2 (en) 2019-10-11 2019-10-11 Detecting changes in a physical space

Country Status (2)

Country Link
SE (1) SE545091C2 (en)
WO (1) WO2021069649A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040246123A1 (en) * 2003-06-09 2004-12-09 Tsuyoshi Kawabe Change detecting method and apparatus and monitoring system using the method or apparatus
US20080158361A1 (en) * 2006-10-23 2008-07-03 Masaya Itoh Video surveillance equipment and video surveillance system
US20140079280A1 (en) * 2012-09-14 2014-03-20 Palo Alto Research Center Incorporated Automatic detection of persistent changes in naturally varying scenes
WO2014209724A1 (en) * 2013-06-26 2014-12-31 Amazon Technologies, Inc. Detecting item interaction and movement
US20190197313A1 (en) * 2016-09-23 2019-06-27 Hitachi Kokusai Electric Inc. Monitoring device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2941317B1 (en) * 2009-01-21 2016-06-24 Rockwell-Collins France METHOD AND SYSTEM FOR DETECTING OBJECTS IN A FINISHED SPACE
US10235762B1 (en) * 2018-09-12 2019-03-19 Capital One Services, Llc Asset tracking systems


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Radke, R. J., Andra, S., Al-Kofahi, O. and Roysam, B., "Image Change Detection Algorithms: A Systematic Survey", IEEE Transactions on Image Processing. URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1395984 *

Also Published As

Publication number Publication date
SE1951157A1 (en) 2021-04-12
WO2021069649A1 (en) 2021-04-15

Similar Documents

Publication Publication Date Title
US9396400B1 (en) Computer-vision based security system using a depth camera
CN104519318B (en) Frequency image monitoring system and surveillance camera
CN110472515B (en) Goods shelf commodity detection method and system
Golparvar-Fard et al. Monitoring changes of 3D building elements from unordered photo collections
US10719740B2 (en) Method and a system for identifying reflective surfaces in a scene
US8266174B2 (en) Behavior history retrieval apparatus and behavior history retrieval method
EP3766044A1 (en) Three-dimensional environment modeling based on a multicamera convolver system
Cetin et al. Methods and techniques for fire detection: signal, image and video processing perspectives
CN107122743B (en) Security monitoring method and device and electronic equipment
CN109766779A (en) It hovers personal identification method and Related product
JP2019532387A (en) Infant detection for electronic gate environments
CN111179329A (en) Three-dimensional target detection method and device and electronic equipment
Wong et al. RigidFusion: RGB‐D Scene Reconstruction with Rigidly‐moving Objects
US20170200203A1 (en) Item detection based on temporal imaging analysis
JP2016085602A (en) Sensor information integrating method, and apparatus for implementing the same
Kim et al. Room layout estimation with object and material attributes information using a spherical camera
Bahirat et al. A study on lidar data forensics
US9965612B2 (en) Method and system for visual authentication
WO2018210039A1 (en) Data processing method, data processing device, and storage medium
SE545091C2 (en) Detecting changes in a physical space
US11928942B2 (en) Systems and methods for theft prevention and detection
US20160133023A1 (en) Method for image processing, presence detector and illumination system
US10643078B2 (en) Automatic camera ground plane calibration method and system
JP2011013965A (en) Intruder detection system
Rafiee et al. Improving indoor security surveillance by fusing data from BIM, UWB and video