SE1951157A1 - Detecting changes in a physical space - Google Patents
Detecting changes in a physical space
- Publication number
- SE1951157A1
- Authority
- SE
- Sweden
- Prior art keywords
- image
- physical space
- instructions
- map
- detector
- Prior art date
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Air Bags (AREA)
- Measuring Fluid Pressure (AREA)
- Inspection Of Paper Currency And Valuable Securities (AREA)
Abstract
It is provided a method for detecting changes in a physical space. The method is performed by an object detector and comprises the steps of: obtaining an image of at least part of the physical space; detecting and classifying at least one object in the image; determining a position of each detected object; storing, in an object map, the classification and position of each detected object; repeating the steps of obtaining an image, detecting and classifying, determining a position and storing, respectively, for a plurality of images; and comparing the object map of an image with an object map of a previous image of the physical space, to thereby detect when there is a change in an object in the physical space.
Description
DETECTING CHANGES IN A PHYSICAL SPACE

TECHNICAL FIELD
[0001] The present disclosure relates to the field of detection of one or more objects in a physical space and in particular to detecting changes to such one or more objects.
BACKGROUND
[0002] People are increasingly consuming services to be provided directly at a property, such as a home or commercial property. The service can e.g. be a delivery of a product, a cleaning service, a builder/plumber/electrician, assisted living, etc. The person providing the service is here denoted a service agent and the company providing the service is denoted service provider.
[0003] The services can sometimes be provided even when no person is present at the property. Access to the property can be provided e.g. using remote controlled locks.
[0004] However, consuming services in this way does present a risk, e.g. that the service agent could potentially steal one or more items from the premises.
[0005] This is not only an issue for the owner of the premises, who is worried that a theft may occur, but also for the service provider, who could be illegitimately accused of theft.
[0006] One way to reduce the risk of thefts and theft accusations is to deploy camera supervision of the property. However, camera supervision may require explicit consent from service providers, and it is time consuming and inefficient to examine a large amount of image material when a theft has occurred.
SUMMARY
[0007] One objective is to improve detection of objects in a physical space over time.
[0008] According to a first aspect, it is provided a method for detecting changes in a physical space. The method is performed by an object detector and comprises the steps of: obtaining an image of at least part of the physical space; detecting and classifying at least one object in the image; determining a position of each detected object; storing, in an object map, the classification and position of each detected object; repeating the steps of obtaining an image, detecting and classifying, determining a position and storing, respectively, for a plurality of images; and comparing the object map of an image with an object map of a previous image of the physical space, to thereby detect when there is a change in an object in the physical space.
[0009] Each iteration of obtaining an image may comprise obtaining an image with essentially the same camera angle.
[0010] Each iteration of obtaining an image may comprise obtaining an image from a mobile camera. In this case, the step of determining a position of the object comprises determining the position based on a machine learning model.
[0011] The method may be triggered to begin when a person is detected in the physical space.
[0012] The method may be triggered by receiving a trigger signal from an external system.
[0013] The object map may be associated with a single image. In this case, the object map has a timestamp corresponding to a capturing time of the associated image.

[0014] The image may comprise depth information.
[0015] According to a second aspect, it is provided an object detector for detecting changes in a physical space. The object detector comprises: a processor; and a memory storing instructions that, when executed by the processor, cause the object detector to: obtain an image of at least part of the physical space; detect and classify at least one object in the image; determine a position of each detected object; store, in an object map, the classification and position of each detected object; repeat the instructions to obtain an image, detect and classify, determine a position and store, respectively, for a plurality of images; and compare the object map of an image with an object map of a previous image of the physical space, to thereby detect when there is a change in an object in the physical space.
[0016] The instructions may be repeated and each iteration of the instructions to obtain an image comprise instructions that, when executed by the processor, cause the object detector to obtain an image with essentially the same camera angle.
[0017] The instructions may be repeated and each iteration of the instructions to obtain an image comprise instructions that, when executed by the processor, cause the object detector to obtain an image from a mobile camera. The instructions to determine a position of the object then comprise instructions that, when executed by the processor, cause the object detector to determine the position based on a machine learning model.
[0018] The instructions may be triggered to begin when a person is detected in the physical space.
[0019] The instructions may be triggered by receiving a trigger signal from an external system.
[0020] The object map may be associated with a single image, in which case the object map has a timestamp corresponding to a capturing time of the associated image.

[0021] The image may comprise depth information.
[0022] According to a third aspect, it is provided a computer program for detecting changes in a physical space. The computer program comprises computer program code which, when run on an object detector, causes the object detector to: obtain an image of at least part of the physical space; detect and classify at least one object in the image; determine a position of each detected object; store, in an object map, the classification and position of each detected object; repeat the computer program code to obtain an image, detect and classify, determine a position and store, respectively, for a plurality of images; and compare the object map with an object map of a previous image of the physical space, to thereby detect when there is a change in an object in the physical space.
[0023] According to a fourth aspect, it is provided a computer program product comprising a computer program according to the third aspect and a computer readable means on which the computer program is stored.
[0024] Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to "a/an/the element, apparatus, component, means, step, etc." are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.

BRIEF DESCRIPTION OF THE DRAWINGS

[0025] Aspects and embodiments are now described, by way of example, with reference to the accompanying drawings, in which:
[0026] Fig 1 is a schematic diagram illustrating an environment in which embodiments presented herein can be applied;
[0027] Figs 2A-C are schematic diagrams illustrating different changes in objects which can occur, compared to the example of Fig 1;
[0028] Figs 3A-C are schematic diagrams illustrating embodiments of where the object detector can be implemented;
[0029] Fig 4 is a flow chart illustrating embodiments of methods for detecting changes in a physical space;
[0030] Fig 5 is a schematic diagram illustrating components of the object detector of Figs 3A-C; and
[0031] Fig 6 shows one example of a computer program product 90 comprising computer readable means.
DETAILED DESCRIPTION
[0032] The aspects of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the invention are shown. These aspects may, however, be embodied in many different forms and should not be construed as limiting; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and to fully convey the scope of all aspects of the invention to those skilled in the art. Like numbers refer to like elements throughout the description.
[0033] Fig 1 is a schematic diagram illustrating an environment in which embodiments presented herein can be applied. A physical space 15 is here shown in the form of a room. It is to be noted that the physical space can be any other type of physical space, e.g. a house, garden, office, factory, storage space, etc.
[0034] In the physical space 15, objects 5a-d are provided. The objects 5a-d are physical objects. In this example, there is a first object 5a in the form of a bag and a second object 5b in the form of a cone-shaped toy. A third object 5c is in the form of a wallet, which lies on a fourth object 5d in the form of a table. There can be more or fewer objects in the physical space 15; the objects 5a-d shown in Fig 1 only form part of an example.
[0035] An imaging device 2 is positioned to capture images of at least part of the physical space 15. The imaging device 2 can be a traditional digital camera which captures images in two dimensions, or a three-dimensional image capturing device, e.g. based on Lidar, radar or stereo imaging (dual cameras). The imaging device 2 is connected to an object detector 1 which is configured to detect and classify objects in images captured by the imaging device, as described in more detail below. While the imaging device 2 is shown in a fixed position in Fig 1, the imaging device 2 could also be a portable device, such as a smartphone.
[0036] Optionally, the object detector 1 is connected to a wide-area network 7, such as the Internet. In this way, the object detector 1 can communicate with a server 3, which is also connected to the wide-area network 7. The server 3 can be implemented as a single device or over several devices. The server 3 can form part of what is commonly known as the cloud.
[0037] As explained in more detail below, images captured by the imaging device 2 are analysed by the object detector 1. Objects are identified and positioned, and this information is stored in an object map. The object map is implemented using any suitable data structure which can hold information about objects for a particular point in time. Changes in any of the objects can be detected by comparing object maps from different images, relating to different points in time.
[0038] Figs 2A-C are schematic diagrams illustrating different changes in objects which can occur, compared to the scenario of Fig 1.
[0039] In Fig 2A, it is illustrated how, compared to the example of Fig 1, the third object 5c (the wallet) has been removed from the physical space 15. This can e.g. be due to someone having stolen the wallet, or the owner of the wallet having taken it out of the physical space 15.
[0040] In Fig 2B, it is illustrated how, compared to the example of Fig 1, the first object 5a (the bag) is in a different orientation. This can e.g. be due to a thief having picked the bag up, stolen items from within it and put it back, or the owner of the bag having changed its orientation.
[0041] In Fig 2C, it is illustrated how, compared to the example of Fig 1, a fifth object 5e, in the form of a box, has been placed within the physical space 15. The fifth object 5e can e.g. be a delivery of an item from a courier firm. Using embodiments presented herein, it can also be detected if an object is removed and later put back in the same place again, e.g. the third object in the form of a wallet.
[0042] Figs 3A-C are schematic diagrams illustrating embodiments of where the object detector 1 can be implemented.
[0043] In Fig 3A, the object detector 1 is shown as implemented in the imaging device 2. The imaging device 2 is thus the host device for the object detector 1 in this embodiment. In this embodiment, the communication between the imaging device 2 and the object detector is internal communication, whereby the reliance on network availability is reduced or even eliminated.
[0044] In Fig 3B, the object detector 1 is shown as implemented in the server 3. The server 3 is thus the host device for the object detector 1 in this embodiment. In this embodiment, the object detector 1 can be used for a large number of imaging devices and corresponding physical spaces.
[0045] In Fig 3C, the object detector 1 is shown as implemented as a stand-alone device. The object detector 1 thus does not have a host device in this embodiment.
[0046] Fig 4 is a flow chart illustrating embodiments of methods for detecting changes in a physical space. The method is performed in the object detector 1.
[0047] The method can be triggered to begin when a person is detected in the physical space. In this way, any changes to objects are likely to be captured using this method. Alternatively or additionally, the method is triggered when a previously detected person is not detectable any more. In other words, a change in the presence of people (either the entry or exit of people) in the physical space can trigger the method to be started. In this way, any changes to objects (e.g. moved, added, removed) can be detected and associated with the person coming or going. This provides a usable context in relation to which it may be very useful to detect objects. The detection of one or more people can be based e.g. on an infrared camera, a sensor detecting when a door is opened and/or closed, and/or when a lock is disengaged and/or engaged.
[0048] Alternatively or additionally, the method is triggered by receiving a trigger signal from an external system, e.g. an alarm system triggering the method when an alarm is triggered.
[0049] In an obtain image step 40, the object detector obtains an image of at least part of the physical space.
[0050] In a detect & classify object(s) step 42, the object detector detects and classifies at least one object in the image. For instance, detected objects can be classified to be a certain type of object, e.g. chair, table, bag, wallet, set of keys, etc. This detection and classification can be based on machine learning algorithms. Even a non-deterministic classification of an object can be useful, such as an object with a specific shape and size. This can be represented by a point cloud.
[0051] In a determine position step 44, the object detector determines a position of each detected object. The position can be a three-dimensional (3D) position or a two-dimensional (2D) position in the plane of the image. In order to obtain three-dimensional position information, the image can comprise depth information, obtained e.g. from a Lidar sensor, a radar sensor, a 3D camera sensor, using stereographic imaging, structure from motion, etc.
[0052] In a store in object map step 46, the object detector stores, in an object map, the classification and position of each detected object. For instance, it can be stored that a bag is located centred around a 3D position x, y and z. The object map is associated with a single image, the one currently being processed. The object map is a logical representation of objects that have been detected and classified in the image. The object map can have a timestamp corresponding to the capturing time of the associated image.
[0053] In a conditional more image(s) step 48, the object detector determines if there are any more images to process. If this is the case, the method returns to the obtain image step 40. Otherwise, the method proceeds to a compare object map step 50.
[0054] In the compare object map step 50, the object detector compares the object map with an object map of a previous image of the physical space. In this way, the object detector can detect when there is a change in an object (or several objects) in the physical space, e.g. any one or more of the changes illustrated in Figs 2A-C. Optionally, step 50 is performed for every new image or for every few images.
[0055] The method is repeated, and each iteration of obtaining an image can comprise obtaining an image with essentially the same camera angle.
[0056] Using the embodiments presented herein, changes to objects in the physical space are detected by analysing the object map. This can be used to detect when objects disappear, e.g. thefts, and when objects appear, e.g. deliveries. By associating the object map with a time of the image, a time window of when the object change occurred can also be determined. In this way, it can be deduced whether a particular service provider agent was involved in the object change, when the time at which the service provider agent was in the physical space is known. This can clear suspicions of an innocent service provider agent and can pinpoint a guilty party.
[0057] Fig 5 is a schematic diagram illustrating components of the object detector of Figs 3A-C. It is to be noted that one or more of the mentioned components can be shared with the host device, when present. A processor 60 is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), etc., capable of executing software instructions 67 stored in a memory 64, which can thus be a computer program product. The processor 60 could alternatively be implemented using an application specific integrated circuit (ASIC), field programmable gate array (FPGA), etc. The processor 60 can be configured to execute the method described with reference to Fig 4 above.
[0058] The memory 64 can be any combination of random-access memory (RAM) and/or read-only memory (ROM). The memory 64 also comprises persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid-state memory or even remotely mounted memory.
[0059] A data memory 66 is also provided for reading and/or storing data during execution of software instructions in the processor 60. The data memory 66 can be any combination of RAM and/or ROM.
[0060] The object detector further comprises an I/O interface 62 for communicating with external and/or internal entities. Optionally, the I/O interface 62 also includes a user interface.
[0061] Other components of the object detector 1 are omitted in order not to obscure the concepts presented herein.
[0062] Fig 6 shows one example of a computer program product 90 comprising computer readable means. On this computer readable means, a computer program 91 can be stored, which computer program can cause a processor to execute a method according to embodiments described herein. In this example, the computer program product is an optical disc, such as a CD (compact disc) or a DVD (digital versatile disc) or a Blu-Ray disc. As explained above, the computer program product could also be embodied in a memory of a device, such as the computer program product 64 of Fig 5. While the computer program 91 is here schematically shown as a track on the depicted optical disc, the computer program can be stored in any way which is suitable for the computer program product, such as a removable solid-state memory, e.g. a Universal Serial Bus (USB) drive.
[0063] The aspects of the present disclosure have mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims. Thus, while various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Claims (16)
1. A method for detecting changes in a physical space (15), the method being performed by an object detector (1) and comprising the steps of:
obtaining (40) an image of at least part of the physical space (15);
detecting (42) and classifying at least one object in the image;
determining (44) a position of each detected object;
storing (46), in an object map, the classification and position of each detected object;
repeating (48) the steps of obtaining (40) an image, detecting (42) and classifying, determining (44) a position and storing (46), respectively, for a plurality of images; and
comparing (50) the object map of an image with an object map of a previous image of the physical space (15), to thereby detect when there is a change in an object in the physical space (15).
2. The method according to claim 1, wherein each iteration of obtaining (40) an image comprises obtaining an image with essentially the same camera angle.
3. The method according to claim 1, wherein each iteration of obtaining (40) an image comprises obtaining an image from a mobile camera; and wherein the step of determining (44) a position of the object comprises determining the position based on a machine learning model.
4. The method according to any one of claims 1 to 3, wherein the method is triggered to begin when a person is detected in the physical space (15).
5. The method according to any one of claims 1 to 3, wherein the method is triggered by receiving a trigger signal from an external system.
6. The method according to any one of the preceding claims, wherein the object map is associated with a single image, and wherein the object map has a timestamp corresponding to a capturing time of the associated image.
7. The method according to any one of the preceding claims, wherein the image comprises depth information.
8. An object detector (1) for detecting changes in a physical space (15), the object detector (1) comprising:
a processor (60); and
a memory (64) storing instructions (67) that, when executed by the processor, cause the object detector to:
obtain an image of at least part of the physical space (15);
detect and classify an object in the image;
determine a position of each detected object;
store, in an object map, the classification and position of each detected object;
repeat the instructions to obtain an image, detect and classify, determine a position and store, respectively, for a plurality of images; and
compare the object map of an image with an object map of a previous image of the physical space (15), to thereby detect when there is a change in an object in the physical space (15).
9. The object detector (1) according to claim 8, wherein the instructions are repeated and each iteration of the instructions to obtain an image comprises instructions (67) that, when executed by the processor, cause the object detector to obtain an image with essentially the same camera angle.
10. The object detector (1) according to claim 8, wherein the instructions are repeated and each iteration of the instructions to obtain an image comprises instructions (67) that, when executed by the processor, cause the object detector to obtain an image from a mobile camera; and wherein the instructions to determine a position of the object comprise instructions (67) that, when executed by the processor, cause the object detector to determine the position based on a machine learning model.
11. The object detector (1) according to claim 8 or 9, wherein the instructions are triggered to begin when a person is detected in the physical space (15).
12. The object detector (1) according to claim 8 or 9, wherein the instructions are triggered by receiving a trigger signal from an external system.
13. The object detector (1) according to any one of claims 8 to 12, wherein the object map is associated with a single image, and wherein the object map has a timestamp corresponding to a capturing time of the associated image.
14. The object detector (1) according to any one of claims 8 to 13, wherein the image comprises depth information.
15. A computer program (67, 91) for detecting changes in a physical space (15), the computer program comprising computer program code which, when run on an object detector, causes the object detector to:
obtain an image of at least part of the physical space (15);
detect and classify at least one object in the image;
determine a position of each detected object;
store, in an object map, the classification and position of each detected object;
repeat the computer program code to obtain an image, detect and classify, determine a position and store, respectively, for a plurality of images; and
compare the object map with an object map of a previous image of the physical space (15), to thereby detect when there is a change in an object in the physical space (15).
16. A computer program product (64, 90) comprising a computer program according to claim 15 and a computer readable means on which the computer program is stored.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SE1951157A SE545091C2 (en) | 2019-10-11 | 2019-10-11 | Detecting changes in a physical space |
PCT/EP2020/078385 WO2021069649A1 (en) | 2019-10-11 | 2020-10-09 | Detecting changes in a physical space |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SE1951157A SE545091C2 (en) | 2019-10-11 | 2019-10-11 | Detecting changes in a physical space |
Publications (2)
Publication Number | Publication Date |
---|---|
SE1951157A1 (en) | 2021-04-12 |
SE545091C2 SE545091C2 (en) | 2023-03-28 |
Family ID: 72840541
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
SE1951157A SE545091C2 (en) | Detecting changes in a physical space | 2019-10-11 | 2019-10-11 |
Country Status (2)
Country | Link |
---|---|
SE (1) | SE545091C2 (en) |
WO (1) | WO2021069649A1 (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2941317B1 (en) * | 2009-01-21 | 2016-06-24 | Rockwell-Collins France | METHOD AND SYSTEM FOR DETECTING OBJECTS IN A FINISHED SPACE |
US10235762B1 (en) * | 2018-09-12 | 2019-03-19 | Capital One Services, Llc | Asset tracking systems |
2019
- 2019-10-11 SE SE1951157A patent/SE545091C2/en unknown
2020
- 2020-10-09 WO PCT/EP2020/078385 patent/WO2021069649A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040246123A1 (en) * | 2003-06-09 | 2004-12-09 | Tsuyoshi Kawabe | Change detecting method and apparatus and monitoring system using the method or apparatus |
US20080158361A1 (en) * | 2006-10-23 | 2008-07-03 | Masaya Itoh | Video surveillance equipment and video surveillance system |
US20140079280A1 (en) * | 2012-09-14 | 2014-03-20 | Palo Alto Research Center Incorporated | Automatic detection of persistent changes in naturally varying scenes |
WO2014209724A1 (en) * | 2013-06-26 | 2014-12-31 | Amazon Technologies, Inc. | Detecting item interaction and movement |
US20190197313A1 (en) * | 2016-09-23 | 2019-06-27 | Hitachi Kokusai Electric Inc. | Monitoring device |
Non-Patent Citations (1)
Title |
---|
Richard J. Radke, Srinivas Andra, Omar Al-Kofahi and Badrinath Roysam, "Image Change Detection Algorithms: A Systematic Survey", IEEE Transactions on Image Processing, 2005. URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1395984 * |
Also Published As
Publication number | Publication date |
---|---|
SE545091C2 (en) | 2023-03-28 |
WO2021069649A1 (en) | 2021-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110472515B (en) | Goods shelf commodity detection method and system | |
CN104519318B (en) | Frequency image monitoring system and surveillance camera | |
CN103443743B (en) | For the method and apparatus that the enhancing of context-aware is mutual | |
WO2013183108A1 (en) | Information processing device, method, and program | |
KR101869895B1 (en) | Object recognition server and object recognition system and object recognition method based on deep learning | |
US20160178790A1 (en) | Vehicle inspection system and method with vehicle reference image retrieval and comparison function | |
US10930100B2 (en) | Detecting unauthorized physical access via wireless electronic device identifiers | |
JP5889408B2 (en) | Information processing apparatus, method, and program | |
JP5860144B2 (en) | Information processing apparatus, method, and program | |
US20190370612A1 (en) | Method and a system for identifying reflective surfaces in a scene | |
KR101412022B1 (en) | Method, apparatus and computer readable recording medium of mapping a cctv information based on a spacial data automatically | |
JP2018088157A (en) | Detection recognizing system | |
US20170200203A1 (en) | Item detection based on temporal imaging analysis | |
KR101212082B1 (en) | Image Recognition Apparatus and Vison Monitoring Method thereof | |
US9965612B2 (en) | Method and system for visual authentication | |
FR3076028A1 (en) | METHOD OF RECOGNIZING OBJECTS IN A THREE DIMENSIONED SCENE | |
SE1951157A1 (en) | Detecting changes in a physical space | |
JP2022526468A (en) | Systems and methods for adaptively constructing a 3D face model based on two or more inputs of a 2D face image | |
Chen et al. | Sound localization from motion: Jointly learning sound direction and camera rotation | |
US10289921B2 (en) | Method of operating an in-vehicle camera | |
US10365396B2 (en) | Three-dimensional radiograph security system | |
Van Crombrugge et al. | People tracking with range cameras using density maps and 2D blob splitting | |
SE1951220A1 (en) | Controlling camera-based supervision of a physical space | |
JP6131312B2 (en) | Information processing apparatus, method, and program | |
CN112528792B (en) | Fatigue state detection method, device, medium and electronic equipment |