US20220114826A1 - Method for identifying potential associates of at least one target person, and an identification device - Google Patents
- Publication number
- US20220114826A1
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/172—Classification, e.g. identification
- G06V40/173—Classification, e.g. identification; face re-identification, e.g. recognising unknown faces across different face tracks
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V20/49—Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- H04N7/188—Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
Abstract
There is provided a method for identifying potential associates of at least one target person, the method comprising: providing a plurality of videos; identifying appearances of the at least one target person in the plurality of videos; establishing a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene; determining individuals who appear in more than a predetermined threshold number of the plurality of video scenes; and identifying the individuals as potential associates of the at least one target person.
Description
- The present application is a continuation application of U.S. patent application Ser. No. 16/642,279 filed on Feb. 26, 2020, which is a National Stage Entry of international application PCT/JP2019/032161, filed on Aug. 16, 2019, which claims the benefit of priority from Singaporean Patent Application 10201807678W filed on Sep. 6, 2018, the disclosures of all of which are incorporated in their entirety by reference herein.
- The present invention generally relates to methods for identifying potential associates of at least one target person, and identification devices.
- An organized crime group can be defined as a group of people working together on a continuing basis for the coordination and planning of criminal activities. Their group structures vary, often consisting of a durable core of key individuals, clusters of subordinates, specialists and other more transient members, plus an extended network of associates. Many such groups are loose networks of criminals that come together for a specific criminal activity, acting in different roles depending on their skills and expertise.
- To discover an organized crime group network of associates, apart from digital/cyberspace monitoring, the physical world's video surveillance systems can be the extended eye of law enforcement agencies to monitor and discover the potential network of associates.
- According to a first aspect, there is provided a method for identifying potential associates of at least one target person, the method comprising: providing a plurality of videos; identifying appearances of the at least one target person in the plurality of videos; establishing a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene; determining individuals who appear in more than a threshold number of the plurality of video scenes; and identifying the individuals as potential associates of the at least one target person.
- According to a second aspect, there is provided an identification device configured to identify potential associates of at least one target person, the identification device comprising: a receiving module configured to receive a plurality of videos; an appearance search module configured to identify appearances of the at least one target person in the plurality of videos; an appearance consolidator module configured to establish a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene; a co-appearance search module configured to search for individuals who appear in the plurality of video scenes; an appearance analyzer module configured to determine which of the individuals appear in more than a predetermined threshold number of the plurality of video scenes; and an output module configured to identify the individuals who appear in more than a predetermined threshold number of the plurality of video scenes as the potential associates of the at least one target person.
- According to a third aspect, there is provided a non-transitory computer readable medium having stored thereon instructions which, when executed by a processor, make the processor carry out a method for identifying potential associates of at least one target person, the method comprising: receiving a plurality of videos; identifying appearances of the at least one target person in the plurality of videos; establishing a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene; determining individuals who appear in more than a threshold number of the plurality of video scenes; and identifying the individuals as potential associates of the at least one target person.
- The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views and which together with the detailed description below are incorporated in and form part of the specification, serve to illustrate various embodiments and to explain various principles and advantages in accordance with a present embodiment.
- FIG. 1 shows a flow diagram illustrating a method for identifying potential associates of at least one target person according to various embodiments;
- FIG. 2 shows an identification device for implementing the method illustrated in FIG. 1, according to various embodiments;
- FIG. 3 illustrates a video scene analysis for a single location and a single target person according to various embodiments;
- FIG. 4 illustrates a video scene analysis for more than one location and more than one target person according to various embodiments;
- FIG. 5 shows an illustration of how potential associates are identified according to various embodiments; and
- FIG. 6 depicts an exemplary device according to various embodiments.
- Various embodiments provide devices and methods for identifying potential associates of at least one target person.
- The physical world's video surveillance systems have long been the extended eye of law enforcement agencies to monitor criminal activities and discover the potential associates of organized crime groups, apart from digital/cyber surveillance.
- Video surveillance systems are usually built and deployed to identify specific target persons, but some of the more advanced surveillance systems can also track and associate people who are captured on camera together with them, building a connection network of registered persons.
- These existing solutions can be useful, but they are limited to, or better suited to, discovering links among family members, friends and colleagues. Such solutions also fail to discover hidden associates of a target person, especially where the target person and the associates are never captured on camera together. For example, the key individuals of an organized crime group and their extended network of associates tend to stay off the grid and avoid being seen together, hiding their connection during the planning or execution of criminal activities. Most of them avoid communicating through phone, email, social networks (Facebook, LinkedIn, etc.) and instant messengers (WhatsApp, LINE, WeChat, etc.), where authorized law enforcers could obtain communication evidence through digital tracing.
- Further, some organized crime group members might make indirect contact or exchange information with their extended network of associates in crowded public areas, making it easier to cover their tracks and appearances. Unsurprisingly, some associates might not even know whom they are communicating with. For instance, a first associate may be required to retrieve a physical object left in a public location by a second associate. By the time the first associate arrives at the designated public location to retrieve the object, the second associate may already have left. Even with a video surveillance system installed to monitor the location, the two associates will never be caught on camera together, since there is no direct communication between them.
- As a result of such careful ways of communicating, it is difficult for law enforcers to monitor and discover associates of such organized crime groups.
- Hence, there exists a need to provide a solution to the above-mentioned problem.
- The present invention provides a solution to the above-mentioned problem. When analysing videos captured by surveillance cameras to identify possible associates of a target person, extending the analysis range to include a period of time before the target person's first appearance at a location and another period of time after the target person's last appearance at the same location makes it possible to discover unknown associates of the target person.
- The results are further improved when videos of more than one target person belonging to the same group are analysed. For example, if an unknown individual is found to appear in more than a threshold number of the videos, the probability that the unknown individual is an associate of the target persons is higher.
- Advantageously, the present invention allows identification of potential associates of a target person even if they do not co-appear in the videos.
- Advantageously, the probability that the identified potential associates are indeed associates of the target person is increased when videos of more than one target person are analysed.
-
FIG. 1 shows a flow chart illustrating a method for identifying potential associates of at least one target person. In 102, a plurality of videos is provided. The plurality of videos may be video recordings of locations captured by surveillance cameras, mobile phone cameras, CCTV (closed-circuit television) cameras, web-cams or other similar devices. The locations may be places where the at least one target person has been seen, is known to have been to or frequented, or suspected locations where the at least one target person provides or receives information to or from associates of the same criminal group. The plurality of videos may be in a file format such as mp4, avi, mkv, wmv, mov or other similar video format. Further, each of the plurality of videos may indicate a time, date and location at which each respective video is recorded. In an embodiment, the plurality of videos may be processed into an entry database consisting of one or more entries, wherein each of the one or more entries represents an appearance of a person at a time, date and location in the plurality of videos, and indicates an attribute of that person. - In 104, appearances of the at least one target person in the plurality of videos are identified. This identification process may be achieved by determining an attribute of the respective target person, and then identifying, from the plurality of videos, an individual possessing the attribute as the respective target person. For example, the attribute may be facial information of the at least one target person, which may be determined from a picture of the at least one target person's face. The attribute may also be a physical characteristic of the at least one target person, for example height, body size, hair colour, skin colour, or other physical features, or combinations of such features, that may be used to identify the at least one target person from the plurality of videos. 
The attribute may also be a behavioural characteristic of the at least one target person such as, for example, the way the at least one target person walks, stands, moves or talks, or other similar characteristics or combinations thereof that may be used to identify the target person from the plurality of videos.
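Purely as an illustration (the specification does not prescribe any particular matching technique), identifying an individual in the videos as the target person from such attributes can be sketched as similarity matching of feature vectors, here hypothetical facial-embedding vectors compared by cosine similarity; the 0.8 threshold is an assumption:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def matches_target(candidate_vec, target_vec, threshold=0.8):
    """Treat a detected individual as the target person when their
    attribute vectors (facial, physical or behavioural features) are
    sufficiently similar; the 0.8 threshold is illustrative only."""
    return cosine_similarity(candidate_vec, target_vec) >= threshold
```

In a real system the vectors would come from a face-recognition or person re-identification model; here they are abstract placeholders.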
- In 106, a plurality of video scenes is established from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene. Each of the plurality of video scenes may comprise surveillance footage of a location where at least one appearance of the target person is identified. In practice, most locations have more than one surveillance camera installed to monitor the respective locations, such that each of these surveillance cameras may either provide surveillance for different parts of the location, or monitor the location from different views or angles. Therefore, each of the plurality of video scenes may further comprise one or more camera surveillance footages of a respective location where at least one appearance of the respective target person is identified. Advantageously, taking into consideration all available surveillance footage of a location covers scenarios in which the target person is at a spot that only one of the surveillance cameras can capture on video.
- Further, each of the plurality of video scenes is established such that each video scene begins at a first predetermined duration before a first identified appearance of the at least one target person, and ends at a second predetermined duration after a last appearance of the at least one target person. For example, where the first and last appearances of a target person at a location are at 2 pm and 3 pm respectively on the same date, with intermediate appearances at 2.10 pm, 2.25 pm, 2.40 pm and 2.50 pm, and the first and second predetermined durations are set as 20 minutes and 25 minutes respectively, the resulting video scene will begin at 1.40 pm and end at 3.25 pm on the same date.
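The boundary rule of step 106, including the worked example above, can be sketched as follows (a minimal sketch; the date is arbitrary and all names are illustrative):

```python
from datetime import datetime, timedelta

def scene_bounds(first_seen, last_seen, pre, post):
    """Step 106: a video scene begins a first predetermined duration
    before the first appearance and ends a second predetermined
    duration after the last appearance of the target person."""
    return first_seen - pre, last_seen + post

# Worked example from the text: appearances from 2 pm to 3 pm,
# first predetermined duration 20 minutes, second 25 minutes.
start, end = scene_bounds(
    datetime(2019, 4, 2, 14, 0),   # 2 pm (arbitrary date)
    datetime(2019, 4, 2, 15, 0),   # 3 pm
    timedelta(minutes=20),
    timedelta(minutes=25),
)
# start is 1.40 pm and end is 3.25 pm, matching the example.
```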
- In 108, individuals who appear in more than a predetermined threshold number of the plurality of video scenes are determined. The individuals refer to all persons other than the at least one target person who appear in the plurality of video scenes. These individuals need not be seen communicating with the at least one target person in the plurality of video scenes in order to be considered potential associates, as long as they are found to appear in more than the predetermined threshold number of video scenes. The predetermined threshold number may be determined by trial and error, and may vary depending on the quantity or quality of the videos to be analysed. Appearances of each individual may be identified based on a determined attribute of the respective individual, such as facial information, physical characteristics, behavioural characteristics or other attributes that may be used to identify the individual.
- In 110, the individuals who appear in more than the predetermined threshold number of the video scenes are identified as potential associates of the at least one target person.
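Steps 108 and 110 reduce to counting, for each individual, the number of video scenes in which they appear and keeping those above the predetermined threshold. A minimal sketch, with illustrative scene contents and identity labels:

```python
from collections import Counter

def potential_associates(scenes, targets, threshold):
    """Steps 108-110: individuals (other than the targets themselves)
    appearing in more than `threshold` video scenes are identified as
    potential associates."""
    counts = Counter()
    for people_in_scene in scenes:
        for person in set(people_in_scene) - set(targets):
            counts[person] += 1
    return {person for person, n in counts.items() if n > threshold}

# Illustrative data: "X" appears in two scenes, "Y" in only one.
scenes = [{"T1", "X"}, {"T2", "X"}, {"T1", "Y"}]
print(potential_associates(scenes, targets={"T1", "T2"}, threshold=1))
# -> {'X'}
```

Note that the individuals need not co-appear with a target in the same frame, only within the extended scene, which is the point of the extended analysis range.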
-
FIG. 2 shows an identification device 200 configured to implement the method illustrated in FIG. 1. The device 200 includes a receiving module 202, an appearance search module 204, a consolidator module 206, a co-appearance search module 208, an analyser module 210 and an output module 212. - The receiving
module 202 is configured to receive a plurality of videos. The plurality of videos may be video recordings of locations captured by surveillance cameras, mobile phone cameras, CCTV (closed-circuit television) cameras, web-cams or other similar devices. The locations may be places where the at least one target person has been seen, is known to have been to or frequented, or suspected locations where the at least one target person provides or receives information to or from associates of the same criminal group. The plurality of videos may be in a file format such as mp4, avi, mkv, wmv, mov or other similar video format. Further, each of the plurality of videos may indicate a time, date and location at which each respective video is recorded. - The
appearance search module 204 is configured to identify appearances of the at least one target person in the plurality of videos. In an embodiment, the appearance search module 204 may be further configured to determine an attribute of a respective target person of the at least one target person and identify, from the plurality of videos, an individual possessing the attribute as the respective target person. For example, the attribute may comprise facial information, a physical characteristic or a behavioural characteristic of the respective target person. - The
appearance consolidator module 206 is configured to establish a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene. In an embodiment, each of the plurality of video scenes may further comprise one or more camera surveillance footages of a location. Further, each of the one or more camera surveillance footages may show a different view of the location. - The
co-appearance search module 208 is configured to search for individuals who appear in the plurality of video scenes. In an embodiment, appearances of each individual may be identified based on a determined attribute of the respective individual, such as facial information, physical characteristics, behavioural characteristics or other attributes that may be used to identify the individual. - The
appearance analyzer module 210 is configured to determine which of the individuals appear in more than a predetermined threshold number of the plurality of video scenes. The output module 212 is configured to identify the individuals who appear in more than the predetermined threshold number of the plurality of video scenes as the potential associates of the at least one target person. -
FIG. 3 illustrates a video scene analysis for a single location and a single target person according to various embodiments. A video scene 300 comprises video footage from one or more surveillance cameras of a single location on a particular date. In this embodiment, a first appearance of a target person 302 occurs at 2145 hours and a last appearance of the target person occurs at 2148 hours. Further, a first predetermined duration and a second predetermined duration are both set as 5 minutes. Accordingly, the video scene 300 begins at the first predetermined duration before the first appearance of the target person, which is at 2140 hours, and ends at the second predetermined duration after the last appearance of the target person, which is at 2153 hours. Further, the video scene 300 does not require a continuous presence of the target person 302. For example, the target person 302 is not present for 2 minutes between 2146 hours and 2148 hours in the video scene 300. Since the 2 minute absence of the target person 302 is shorter than the second predetermined duration of 5 minutes, the appearance of the target person 302 at 2146 hours is not considered as the last appearance. Therefore, the period of time from 2145 hours to 2148 hours of video scene 300 comprises one logical appearance of the target person 302. - In an embodiment, there may be a third predetermined duration limiting the duration of each period for which a target person can be absent from a video scene. Referring to
video scene 300, a third predetermined duration may be set as, for example, 20 minutes. This means that the maximum duration of each period for which the target person 302 can be absent from the video scene 300 is 20 minutes. In the video scene 300, the target person 302 is not present for 2 minutes between 2146 hours and 2148 hours. Since the 2 minute absence of the target person 302 is shorter than the third predetermined duration of 20 minutes, the appearance of the target person 302 at 2146 hours is not considered as a last appearance. Therefore, the period of time from 2145 hours to 2148 hours of video scene 300 comprises one logical appearance of the target person 302. If, for example, the period of absence of the target person 302 starting from 2147 hours exceeds the third predetermined duration, the video scene 300 will instead end at the second predetermined duration of 5 minutes after 2147 hours, at 2152 hours. Further, if the target person 302 then reappears in the plurality of videos after 2152 hours, for example at 2230 hours, a new video scene will be established starting at the first predetermined duration of 5 minutes before 2230 hours, at 2225 hours. In this case, the period of time from 2145 hours to 2147 hours comprises one logical appearance of the target person 302, and the period of time from 2230 hours until the next last appearance of the target person 302 comprises another logical appearance of the target person 302. It will be appreciated that the first, second and third predetermined durations may be set to any duration deemed suitable for analysis of the video scenes. - Next, individuals other than the
target person 302 are identified. In the video scene 300, a first unknown individual 304 appears walking alone at 2140 hours, a second unknown individual 306 appears walking beside the target person 302 at 2146 hours, a third unknown individual 308 appears walking at a distance from the target person 302, and a fourth unknown individual 310 is seen walking alone at 2153 hours. Accordingly, an attribute of each of these four unknown individuals is determined for comparison with other video scenes. For example, the attribute may be facial information, which may be determined from captured video of each of the four unknown individuals' faces. The attribute may also be a physical characteristic of each of the four unknown individuals, for example height, body size, hair colour, skin colour, and other physical features or combinations thereof. The attribute may also be a behavioural characteristic of each of the four unknown individuals such as, for example, the way each of them walks, stands, moves or talks, other similar characteristics or combinations thereof. -
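The logical-appearance rule illustrated in FIG. 3, where an absence longer than the third predetermined duration splits a target person's sightings into separate logical appearances, can be sketched as follows (sighting times are assumed sorted; all values are illustrative):

```python
from datetime import datetime, timedelta

def logical_appearances(sightings, max_absence):
    """Group sorted sighting times of a target person into logical
    appearances: a gap longer than the third predetermined duration
    (max_absence) starts a new logical appearance."""
    groups, current = [], [sightings[0]]
    for t in sightings[1:]:
        if t - current[-1] > max_absence:
            groups.append(current)
            current = [t]
        else:
            current.append(t)
    groups.append(current)
    return groups

# Sightings at 2145, 2146 and 2148 hours fall within a 20-minute limit
# and form one logical appearance; a reappearance at 2230 hours starts
# a second one, as in the embodiment described above.
times = [datetime(2019, 4, 2, 21, 45), datetime(2019, 4, 2, 21, 46),
         datetime(2019, 4, 2, 21, 48), datetime(2019, 4, 2, 22, 30)]
print(len(logical_appearances(times, timedelta(minutes=20))))
# -> 2
```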
FIG. 4 illustrates a video scene analysis for more than one location and more than one target person according to various embodiments. Two video scenes 400 and 401 are shown. Video scene 400 comprises video surveillance footage for a Location A on 2nd April, at which a first target person 402 appears at 2145 hours. Video scene 401 comprises video surveillance footage for a Location B on 11th May, at which a second target person 404 appears at 1125 hours. In video scene 400, an unknown individual 406 appears at 2141 hours, 4 minutes before the appearance of the target person 402. In video scene 401, the same unknown individual 406 appears at 1128 hours, 3 minutes after the appearance of target person 404. Accordingly, the unknown individual 406 is now determined to appear in 2 video scenes. In an embodiment where the predetermined threshold number is set as 1, the unknown individual 406 will be identified as a potential associate of target persons 402 and 404. -
FIG. 5 shows an illustration 500 of how potential associates are identified. Firstly, an attribute of at least one target person is determined. For example, the attribute may be facial information of the at least one target person, which may be determined from a picture of the at least one target person's face. The attribute may also be a physical characteristic of the at least one target person, for example height, body size, hair colour, skin colour, and other physical features or combinations thereof. The attribute may also be a behavioural characteristic of the at least one target person such as, for example, the way the at least one target person walks, stands, moves or talks, other similar characteristics or combinations thereof. In the present embodiment, at 508, a group photo or multiple photos of three target persons 502, 504 and 506 are provided. At 510, facial information of target persons 502, 504 and 506 is detected from the provided photos. The detected facial information may then be used as the attribute. It will be appreciated that the photographs may be physical copies or soft copies, where the physical copies may be scanned to detect the facial features of the target persons. Further, other media such as videos can also be used for determining the attributes. - Further, a plurality of videos is provided. The plurality of videos may be video recordings of locations captured by surveillance cameras, mobile phone cameras, CCTV (closed-circuit television) cameras, web-cams or other similar devices. The locations may be places where the at least one target person has been seen, is known to have been to or frequented, or suspected locations where the at least one target person provides or receives information to or from associates of the same criminal group. The plurality of videos may be in a file format such as mp4, avi, mkv, wmv, mov or other similar video format. 
Further, each of the plurality of videos may indicate a time, date and location at which each respective video is recorded. In an embodiment, the plurality of videos may be processed into an entry database consisting of one or more entries, wherein each of the one or more entries represents an appearance of a person at a time, date and location in the plurality of videos, wherein each of the one or more entries indicates an attribute of the person.
- At 512, appearances of the three
target persons 502, 504 and 506 are identified from the plurality of videos. This may be achieved by identifying, from the plurality of videos, an individual possessing the determined attribute as the respective target person. In the present embodiment, the attribute used for the identification of target persons 502, 504 and 506 in the plurality of videos is the facial information as determined in 510. For example, an individual appearing in the plurality of videos and having the same facial information as target person 502 will be identified as the target person 502, an individual appearing in the plurality of videos and having the same facial information as target person 504 will be identified as the target person 504, and an individual appearing in the plurality of videos and having the same facial information as target person 506 will be identified as the target person 506. - After identifying all video appearances of the
target persons 502, 504 and 506 in the plurality of videos, at 514, an appearance consolidator consolidates the identified video appearances of the three target persons 502, 504 and 506 from the plurality of videos. For example, identified video appearances 522 are based on the identified appearances in the plurality of videos of target person 502, identified video appearances 524 are based on the identified appearances in the plurality of videos of target person 504 and identified video appearances 526 are based on the identified appearances in the plurality of videos of target person 506. The consolidation may be based on a time range, a date, a location or a combination thereof, wherein identified appearances of a target person that occur at a same location, date and/or time range may be grouped together to form a logical appearance sequence. - The identified video appearances of the target persons may come from one or more videos of the plurality of videos. In the present embodiment, identified
video appearances 526 are based on appearances of target person 506 in one or more videos of the plurality of videos, wherein the one or more videos may occur at a same time range, date, location or a combination thereof, such that the video appearances 526 comprise one logical appearance of the target person 506. Identified video appearances 522 are based on appearances of target person 502 in at least two videos of the plurality of videos, where video appearances 528 of target person 502 are identified from a first batch of one or more videos, and video appearances 530 of target person 502 are identified from a second batch of one or more videos. The first batch of one or more videos may occur at a same time range, date, location or a combination thereof, such that the video appearances 528 comprise one logical appearance of the target person 502. Likewise, the second batch of one or more videos may occur at a same time range, date, location or a combination thereof, such that the video appearances 530 comprise one logical appearance of the target person 502. For example, the first and second batches of one or more videos may be surveillance videos of a location recorded on a same date, where video appearances 528 of target person 502 from the first batch may occur at an earlier time and video appearances 530 of target person 502 from the second batch may occur at a later time, such that video appearances 528 form a first logical appearance of target person 502, while video appearances 530 form a second logical appearance of target person 502. Accordingly, the consolidated video appearances 522 comprise two logical appearances of target person 502. - Further, identified
appearances 524 are based on appearances of target person 504 in at least two videos of the plurality of videos, where video appearances 532 of target person 504 are identified from a first batch of one or more videos, and video appearances 534 of target person 504 are identified from a second batch of one or more videos. The first batch of one or more videos may occur at a same time range, date, location or a combination thereof, such that the video appearances 532 comprise one logical appearance of the target person 504. Likewise, the second batch of one or more videos may occur at a same time range, date, location or a combination thereof, such that the video appearances 534 comprise one logical appearance of the target person 504. For example, the first and second batches of one or more videos may be surveillance videos of a location recorded on a same date, where video appearances 532 of target person 504 from the first batch may occur at an earlier time and video appearances 534 of target person 504 from the second batch may occur at a later time, such that video appearances 532 form a first logical appearance of target person 504, while video appearances 534 form a second logical appearance of target person 504. Accordingly, the consolidated video appearances 524 comprise two logical appearances of target person 504. It will be appreciated that more than one consolidated video appearance for each target person may be formed based on the identified appearances, where each consolidated appearance may correspond to a time range, a date, a location, or combinations thereof in which the identified appearances occur in the plurality of videos. - Based on the identified logical appearances that are consolidated at 514, a plurality of video scenes is established by the appearance consolidator. At 516, video scene 536 is established based on, for example, the
consolidated appearances 526 of target person 506. The video scene 536 comprises a first portion 540, a second portion 542 and a third portion 544. The first portion 540 may comprise one or more video footages from which consolidated video appearances 526 of target person 506 are identified. The first portion 540 may further comprise one or more video footages in which appearances of the target person 506 are not found, but these one or more video footages are of a time, a date, a location, or combinations thereof that matches the time, the date, the location, or combinations thereof of the one or more video footages from which the consolidated video appearances 526 of target person 506 are identified. Advantageously, this will take into consideration all available surveillance footage of a location, so as to cover scenarios in which the target person is at a spot where only one of the surveillance cameras can capture the person on video. - In addition to the
first portion 540 of the video scene 536, the second portion 542 extends the duration of the video scene 536 by a first predetermined duration, such that the video scene 536 begins at the first predetermined duration before a first appearance of the target person 506 as identified in the first portion 540 of the video scene 536. Accordingly, the second portion 542 may comprise one or more video footages of a time, a date, a location, or combinations thereof that matches the time, the date, the location, or combinations thereof of the one or more video footages of the first portion 540 of video scene 536, wherein the one or more video footages of the second portion 542 begin at the first predetermined duration before the first appearance of the target person 506 as identified in the first portion 540 of the video scene 536. Advantageously, the inclusion of the second portion 542 of the video scene 536 allows identification of potential associates of the target person 506 even if they are not co-appearing together with the target person 506 in the videos, but only appearing before the target person 506 arrives at the recorded location, possibly just to leave an object for retrieval by the target person 506. - Further, there is the
third portion 544 of the video scene 536 that extends the duration of the video scene 536 by a second predetermined duration, such that the video scene 536 ends at the second predetermined duration after a last appearance of the target person 506 as identified in the first portion 540 of the video scene 536. Accordingly, the third portion 544 may comprise one or more video footages of a time, a date, a location, or combinations thereof that matches the time, the date, the location, or combinations thereof of the one or more video footages of the first portion 540 of video scene 536, wherein the one or more video footages of the third portion 544 end at the second predetermined duration after the last appearance of the target person 506 as identified in the first portion 540 of the video scene 536. Advantageously, the inclusion of the third portion 544 of the video scene 536 allows identification of potential associates of the target person 506 even if they are not co-appearing together with the target person 506 in the videos, but only appearing after the target person 506 leaves the recorded location, possibly to retrieve an object that was intentionally left behind by the target person 506. - Similar to video scene 536,
video scene 538 is established based on, for example, video appearances 532 in the consolidated appearances 524 of target person 504. The video scene 538 comprises a first portion 546, a second portion 548 and a third portion 550. The first portion 546 may comprise one or more video footages from which video appearances 532 of target person 504 are identified. The first portion 546 may further comprise one or more video footages in which appearances of the target person 504 are not found, but these one or more video footages are of a time, a date, a location, or combinations thereof that matches the time, the date, the location, or combinations thereof of the one or more video footages from which the consolidated video appearances 532 of target person 504 are identified. Advantageously, this will take into consideration all available surveillance footage of a location, so as to cover scenarios in which the target person is at a spot where only one of the surveillance cameras can capture the person on video. - In addition to the first portion 546 of the
video scene 538, the second portion 548 extends the duration of the video scene 538 by a first predetermined duration, such that the video scene 538 begins at the first predetermined duration before a first appearance of the target person 504 as identified in the first portion 546 of the video scene 538. Accordingly, the second portion 548 may comprise one or more video footages of a time, a date, a location, or combinations thereof that matches the time, the date, the location, or combinations thereof of the one or more video footages of the first portion 546 of video scene 538, wherein the one or more video footages of the second portion 548 begin at the first predetermined duration before the first appearance of the target person 504 as identified in the first portion 546 of the video scene 538. Advantageously, the inclusion of the second portion 548 of the video scene 538 allows identification of potential associates of the target person 504 even if they are not co-appearing together with the target person 504 in the videos, but only appearing before the target person 504 arrives at the recorded location, possibly just to leave an object for retrieval by the target person 504. - Further, there is the
third portion 550 of the video scene 538 that extends the duration of the video scene 538 by a second predetermined duration, such that the video scene 538 ends at the second predetermined duration after a last appearance of the target person 504 as identified in the first portion 546 of the video scene 538. Accordingly, the third portion 550 may comprise one or more video footages of a time, a date, a location, or combinations thereof that matches the time, the date, the location, or combinations thereof of the one or more video footages of the first portion 546 of video scene 538, wherein the one or more video footages of the third portion 550 end at the second predetermined duration after the last appearance of the target person 504 as identified in the first portion 546 of the video scene 538. Advantageously, the inclusion of the third portion 550 of the video scene 538 allows identification of potential associates of the target person 504 even if they are not co-appearing together with the target person 504 in the videos, but only appearing after the target person 504 leaves the recorded location, possibly to retrieve an object that was intentionally left behind by the target person 504. - After establishment of the
video scenes 536 and 538, a co-appearance search module determines every individual besides the target persons 502, 504 and 506 who appears in the video scenes. The determination process may comprise determining an attribute of each of the one or more individuals who appear in any of the video scenes 536 and 538. The attribute may be, for example, facial information, physical characteristics, behavioural characteristics, other similar characteristics or combinations thereof that may be used to identify each of the one or more individuals. The determination process may further comprise determining a time, a date, a location, a target person who appeared in the same video scene as the respective individual, or combinations thereof for each of the one or more individuals who appear in any of the video scenes 536 and 538. The determination process includes all three portions 540, 542 and 544 of the video scene 536 and all three portions 546, 548 and 550 of the video scene 538. Advantageously, individuals who appear within the first predetermined duration before the first appearance of the respective target person in the respective video scene, and individuals who appear within the second predetermined duration after the last appearance of the respective target person in the respective video scene are considered in the determination process. It will be understood that video scenes will similarly be established for each of the remaining video appearances. - After determining each of the one or more individuals appearing in the
video scenes 536 and 538, an appearance analyser determines the individuals who appear in more than a predetermined threshold number of the video scenes. Referring to 518, three persons A, B and C are found to have appeared in video scene 536 and/or video scene 538. In the present embodiment, the determination process comprises determining an attribute and a location for each of the one or more individuals who appear in any of the video scenes 536 and 538, where video scene 536 comprises one or more camera surveillance footage of a first location and video scene 538 comprises one or more camera surveillance footage of a second location. Based on the results of the co-appearance search module at 516, Person A is found to have appeared in the video scene 536. Accordingly, as shown in 552, Person A has one appearance in one location. Person B is found to have appeared in the video scene 538. Accordingly, as shown in 554, Person B also has one appearance in one location. Person C, however, is found to have appeared in both video scenes 536 and 538. Accordingly, as shown in 556, Person C has two appearances in two locations. - In the present embodiment, the predetermined threshold number is set as 1. Therefore, if an individual is determined to have appeared in 2 or more video scenes, the individual is then determined to be a potential associate of the target persons. In this case, since Person C is found to have appeared in two video scenes, namely video scene 536 and
video scene 538, Person C will be output at 520 as the potential associate of target persons 504 and 506. It will be understood that the predetermined threshold number may be set to any other number that may produce an optimal result, and may vary according to the number of video scenes being considered. -
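Putting the scene establishment at 514 to 516 and the co-appearance counting at 518 to 520 together, the logic can be sketched roughly as below. The window durations, the data shapes and the function names are illustrative assumptions; the embodiment's threshold of 1 is kept, so an individual seen in two or more video scenes is flagged.

```python
from collections import Counter
from datetime import datetime, timedelta

def scene_window(first_seen, last_seen,
                 before=timedelta(minutes=10), after=timedelta(minutes=10)):
    """A video scene spans from the first predetermined duration before the
    target's first appearance to the second predetermined duration after the
    target's last appearance (both durations are assumed values here)."""
    return first_seen - before, last_seen + after

def individuals_in_scene(entries, location, start, end):
    """Co-appearance search: everyone whose appearance falls inside the scene,
    including its extended second and third portions."""
    return {pid for pid, ts, loc in entries if loc == location and start <= ts <= end}

def potential_associates(scene_members, threshold=1):
    """Appearance analysis: flag individuals who appear in more than
    `threshold` of the video scenes."""
    counts = Counter(pid for members in scene_members.values() for pid in members)
    return {pid for pid, n in counts.items() if n > threshold}

# (person_id, timestamp, location) appearances of non-target individuals:
t1 = datetime(2018, 9, 6, 14, 0)   # a target seen 14:00-14:20 at location 1
t2 = datetime(2018, 9, 6, 18, 0)   # a target seen 18:00-18:05 at location 2
entries = [
    ("A", t1 + timedelta(minutes=5), "loc_1"),  # co-appears in the first scene
    ("B", t2 + timedelta(minutes=2), "loc_2"),  # co-appears in the second scene
    ("C", t1 - timedelta(minutes=8), "loc_1"),  # in the first scene's second portion
    ("C", t2 + timedelta(minutes=9), "loc_2"),  # in the second scene's third portion
]

s536 = individuals_in_scene(entries, "loc_1",
                            *scene_window(t1, t1 + timedelta(minutes=20)))
s538 = individuals_in_scene(entries, "loc_2",
                            *scene_window(t2, t2 + timedelta(minutes=5)))
print(potential_associates({536: s536, 538: s538}))
```

With the threshold of 1, only Person C, who appears in both scenes (once before a target arrives and once after a target leaves), is flagged, mirroring the output at 520.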
FIG. 6 depicts an exemplary computing device 600, hereinafter interchangeably referred to as a computer system 600 or as a device 600, where one or more such computing devices 600 may be used to implement the identification device 200 shown in FIG. 2. The following description of the computing device 600 is provided by way of example only and is not intended to be limiting. - As shown in
FIG. 6 , theexample computing device 600 includes aprocessor 604 for executing software routines. Although a single processor is shown for the sake of clarity, thecomputing device 600 may also include a multi-processor system. Theprocessor 604 is connected to acommunication infrastructure 606 for communication with other components of thecomputing device 600. Thecommunication infrastructure 606 may include, for example, a communications bus, cross-bar, or network. - The
computing device 600 further includes a primary memory 608, such as a random access memory (RAM), and a secondary memory 610. The secondary memory 610 may include, for example, a storage drive 612, which may be a hard disk drive, a solid state drive or a hybrid drive, and/or a removable storage drive 614, which may include a magnetic tape drive, an optical disk drive, a solid state storage drive (such as a USB flash drive, a flash memory device, a solid state drive or a memory card), or the like. The removable storage drive 614 reads from and/or writes to a removable storage medium 644 in a well-known manner. The removable storage medium 644 may include magnetic tape, optical disk, non-volatile memory storage medium, or the like, which is read by and written to by the removable storage drive 614. As will be appreciated by persons skilled in the relevant art(s), the removable storage medium 644 includes a computer readable storage medium having stored therein computer executable program code instructions and/or data. - In an alternative implementation, the
secondary memory 610 may additionally or alternatively include other similar means for allowing computer programs or other instructions to be loaded into the computing device 600. Such means can include, for example, a removable storage unit 622 and an interface 640. Examples of a removable storage unit 622 and interface 640 include a program cartridge and cartridge interface (such as that found in video game console devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a removable solid state storage drive (such as a USB flash drive, a flash memory device, a solid state drive or a memory card), and other removable storage units 622 and interfaces 640 which allow software and data to be transferred from the removable storage unit 622 to the computer system 600. - The
computing device 600 also includes at least one communication interface 624. The communication interface 624 allows software and data to be transferred between the computing device 600 and external devices via a communication path 626. In various embodiments of the inventions, the communication interface 624 permits data to be transferred between the computing device 600 and a data communication network, such as a public data or private data communication network. The communication interface 624 may be used to exchange data between different computing devices 600, where such computing devices 600 form part of an interconnected computer network. Examples of a communication interface 624 can include a modem, a network interface (such as an Ethernet card), a communication port (such as a serial, parallel, printer, GPIB, IEEE 1394, RJ45, USB), an antenna with associated circuitry and the like. The communication interface 624 may be wired or may be wireless. Software and data transferred via the communication interface 624 are in the form of signals which can be electronic, electromagnetic, optical or other signals capable of being received by the communication interface 624. These signals are provided to the communication interface via the communication path 626. - As shown in
FIG. 6 , thecomputing device 600 further includes adisplay interface 602 which performs operations for rendering images to an associateddisplay 630 and anaudio interface 632 for performing operations for playing audio content via associated speaker(s) 634. - As used herein, the term “computer program product” (or computer readable medium, which may be a non-transitory computer readable medium) may refer, in part, to
removable storage medium 644, removable storage unit 622, a hard disk installed in storage drive 612, or a carrier wave carrying software over communication path 626 (wireless link or cable) to communication interface 624. Computer readable storage media (or computer readable media) refers to any non-transitory, non-volatile tangible storage medium that provides recorded instructions and/or data to the computing device 600 for execution and/or processing. Examples of such storage media include magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, a solid state storage drive (such as a USB flash drive, a flash memory device, a solid state drive or a memory card), a hybrid drive, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computing device 600. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computing device 600 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like. - The computer programs (also called computer program code) are stored in
primary memory 608 and/or secondary memory 610. Computer programs can also be received via the communication interface 624. Such computer programs, when executed, enable the computing device 600 to perform one or more features of embodiments discussed herein. In various embodiments, the computer programs, when executed, enable the processor 604 to perform features of the above-described embodiments. Accordingly, such computer programs represent controllers of the computer system 600. - Software may be stored in a computer program product and loaded into the
computing device 600 using the removable storage drive 614, the storage drive 612, or the interface 640. The computer program product may be a non-transitory computer readable medium. Alternatively, the computer program product may be downloaded to the computer system 600 over the communications path 626. The software, when executed by the processor 604, causes the computing device 600 to perform functions of embodiments described herein. - It is to be understood that the embodiment of
FIG. 6 is presented merely by way of example. Therefore, in some embodiments one or more features of the computing device 600 may be omitted. Also, in some embodiments, one or more features of the computing device 600 may be combined together. Additionally, in some embodiments, one or more features of the computing device 600 may be split into one or more component parts. The primary memory 608 and/or the secondary memory 610 may serve as the memory for the identification device 200, while the processor 604 may serve as the processor of the identification device 200. - Some portions of the description herein are explicitly or implicitly presented in terms of algorithms and functional or symbolic representations of operations on data within a computer memory. These algorithmic descriptions and functional or symbolic representations are the means used by those skilled in the data processing arts to convey most effectively the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities, such as electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
- Unless specifically stated otherwise, and as apparent from the description herein, it will be appreciated that throughout the present specification, discussions utilizing terms such as “receiving”, “providing”, “identifying”, “scanning”, “determining”, “generating”, “outputting”, or the like, refer to the action and processes of a computer system, or similar electronic device, that manipulates and transforms data represented as physical quantities within the computer system into other data similarly represented as physical quantities within the computer system or other information storage, transmission or display devices.
- The present specification also discloses apparatus for performing the operations of the methods. Such apparatus may be specially constructed for the required purposes, or may comprise a computer or other device selectively activated or reconfigured by a computer program stored in the computer. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various machines may be used with programs in accordance with the teachings herein. Alternatively, the construction of more specialized apparatus to perform the required method steps may be appropriate. The structure of a computer suitable for executing the various methods/processes described herein will appear from the description herein.
- In addition, the present specification also implicitly discloses a computer program, in that it would be apparent to the person skilled in the art that the individual steps of the method described herein may be put into effect by computer code. The computer program is not intended to be limited to any particular programming language and implementation thereof. It will be appreciated that a variety of programming languages and coding thereof may be used to implement the teachings of the disclosure contained herein. Moreover, the computer program is not intended to be limited to any particular control flow. There are many other variants of the computer program, which can use different control flows without departing from the spirit or scope of the invention.
- Furthermore, one or more of the steps of the computer program may be performed in parallel rather than sequentially. Such a computer program may be stored on any computer readable medium. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a computer. The computer readable medium may also include a hard-wired medium such as exemplified in the Internet system, or wireless medium such as exemplified in the GSM mobile telephone system. The computer program when loaded and executed on such a computer effectively results in an apparatus that implements the steps of the preferred method.
- According to various embodiments, a “module” may be understood as any kind of a logic implementing entity, which may be special purpose circuitry or a processor executing software stored in a memory, firmware, or any combination thereof. Thus, in an embodiment, a “module” may be a hard-wired logic circuit or a programmable logic circuit such as a programmable processor, e.g. a microprocessor (e.g. a Complex Instruction Set Computer (CISC) processor or a Reduced Instruction Set Computer (RISC) processor). A “module” may also be a processor executing software, e.g. any kind of computer program, e.g. a computer program using a virtual machine code such as e.g. Java. Any other kind of implementation of the respective functions which will be described in more detail below may also be understood as a “module” in accordance with an alternative embodiment.
- It will be appreciated by a person skilled in the art that numerous variations and/or modifications may be made to the present invention as shown in the specific embodiments without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects to be illustrative and not restrictive.
- For example, the whole or part of the exemplary embodiments disclosed above can be described as, but not limited to, the following supplementary notes.
- (Supplementary Note 1)
- A method for identifying potential associates of at least one target person, the method comprising:
- providing a plurality of videos;
- identifying appearances of the at least one target person in the plurality of videos;
- establishing a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene;
- determining individuals who appear in more than a predetermined threshold number of the plurality of video scenes; and
- identifying the individuals as potential associates of the at least one target person.
- (Supplementary Note 2)
- The method according to
note 1, wherein identifying the appearances of a respective target person of the at least one target person from the plurality of videos further comprises: - determining an attribute of the respective target person; and
identifying, from the plurality of videos, an individual possessing the attribute as the respective target person. - (Supplementary Note 3)
- The method according to
note 2, wherein the attribute further comprises facial information of the respective target person. - (Supplementary Note 4)
- The method according to
note 2, wherein the attribute further comprises a physical characteristic of the respective target person. - (Supplementary Note 5)
- The method according to
note 2, wherein the attribute further comprises a behavioural characteristic of the respective target person. - (Supplementary Note 6)
- The method according to
note 1, wherein any one of the plurality of video scenes further comprises one or more camera surveillance footage of a location. - (Supplementary Note 7)
- The method according to note 6, wherein each of the one or more camera surveillance footage shows a different view of the location.
- (Supplementary Note 8)
- An identification device configured to identify potential associates of at least one target person, the identification device comprising:
- a receiving module configured to receive a plurality of videos;
- an appearance search module configured to identify appearances of the at least one target person in the plurality of videos;
- an appearance consolidator module configured to establish a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene;
- a co-appearance search module configured to search for individuals who appear in the plurality of video scenes;
- an appearance analyzer module configured to determine which of the individuals appear in more than a predetermined threshold number of the plurality of video scenes; and
- an output module configured to identify the individuals who appear in more than a predetermined threshold number of the plurality of video scenes as the potential associates of the at least one target person.
- (Supplementary Note 9)
- The identification device according to note 8, wherein the appearance search module is further configured to:
- determine an attribute of a respective target person of the at least one target person; and
- identify, from the plurality of videos, an individual possessing the attribute as the respective target person.
- (Supplementary Note 10)
- The identification device according to note 9, wherein the attribute further comprises facial information of the respective target person.
- (Supplementary Note 11)
- The identification device according to note 9, wherein the attribute further comprises a physical characteristic of the respective target person.
- (Supplementary Note 12)
- The identification device according to note 9, wherein the attribute further comprises a behavioural characteristic of the respective target person.
- (Supplementary Note 13)
- The identification device according to note 8, wherein any one of the plurality of video scenes further comprises one or more camera surveillance footage of a location.
- (Supplementary Note 14)
- The identification device according to note 13, wherein each of the one or more camera surveillance footage shows a different view of the location.
- (Supplementary Note 15)
- A non-transitory computer readable medium having stored thereon instructions which, when executed by a processor, make the processor carry out a method for identifying potential associates of at least one target person, the method comprising:
- receiving a plurality of videos;
- identifying appearances of the at least one target person in the plurality of videos;
- establishing a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene;
- determining individuals who appear in more than a predetermined threshold number of the plurality of video scenes; and
- identifying the individuals as potential associates of the at least one target person.
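Supplementary notes 9 through 12 identify the target person by an attribute such as facial information, a physical characteristic, or a behavioural characteristic. As a hedged illustration only (the notes do not prescribe any particular matching technique), attribute matching is commonly done by comparing feature vectors; the sketch below assumes hypothetical precomputed embeddings and a cosine-similarity threshold, none of which appear in the disclosure.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_appearances(detections, target_attribute, sim_threshold=0.8):
    """detections: list of dicts, each carrying a hypothetical 'embedding'
    feature vector (facial, physical or behavioural attribute).
    An individual whose attribute is sufficiently close to the target's
    is identified as the respective target person."""
    return [d for d in detections
            if cosine_similarity(d["embedding"], target_attribute) >= sim_threshold]
```

The `sim_threshold` value is an illustrative assumption; a real system would calibrate it against its own feature extractor.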
- This application is based upon and claims the benefit of priority from Singapore Patent Application No. 10201807678W, filed on Sep. 6, 2018, the disclosure of which is incorporated herein in its entirety by reference.
- 202 Receiving Module
- 204 Appearance Search Module
- 206 Consolidator Module
- 208 Co-appearance Search Module
- 210 Analyser Module
- 212 Output Module
- 302 Target Person
- 304 First Unknown Individual
- 306 Second Unknown Individual
- 308 Third Unknown Individual
- 402 First Target Person
- 404 Second Target Person
- 406 Unknown Individual
Claims (15)
1. A method for identifying potential associates of at least one target person, the method comprising:
providing a plurality of videos;
identifying appearances of the at least one target person in the plurality of videos;
establishing a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene;
determining individuals who appear in more than a predetermined threshold number of the plurality of video scenes; and
identifying the individuals as potential associates of the at least one target person.
2. The method according to claim 1, wherein identifying the appearances of a respective target person of the at least one target person from the plurality of videos further comprises:
determining an attribute of the respective target person; and
identifying, from the plurality of videos, an individual possessing the attribute as the respective target person.
3. The method according to claim 2, wherein the attribute further comprises facial information of the respective target person.
4. The method according to claim 2, wherein the attribute further comprises a physical characteristic of the respective target person.
5. The method according to claim 2, wherein the attribute further comprises a behavioural characteristic of the respective target person.
6. The method according to claim 1, wherein any one of the plurality of video scenes further comprises one or more items of camera surveillance footage of a location.
7. The method according to claim 6, wherein each of the one or more items of camera surveillance footage shows a different view of the location.
8. An identification device configured to identify potential associates of at least one target person, the identification device comprising:
at least one memory storing instructions, and at least one processor configured to execute the instructions to:
receive a plurality of videos;
identify appearances of the at least one target person in the plurality of videos;
establish a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene;
search for individuals who appear in the plurality of video scenes;
determine which of the individuals appear in more than a predetermined threshold number of the plurality of video scenes; and
identify the individuals who appear in more than the predetermined threshold number of the plurality of video scenes as the potential associates of the at least one target person.
9. The identification device according to claim 8, wherein the at least one processor is further configured to execute the instructions to:
determine an attribute of a respective target person of the at least one target person; and
identify, from the plurality of videos, an individual possessing the attribute as the respective target person.
10. The identification device according to claim 9, wherein the attribute further comprises facial information of the respective target person.
11. The identification device according to claim 9, wherein the attribute further comprises a physical characteristic of the respective target person.
12. The identification device according to claim 9, wherein the attribute further comprises a behavioural characteristic of the respective target person.
13. The identification device according to claim 8, wherein any one of the plurality of video scenes further comprises one or more items of camera surveillance footage of a location.
14. The identification device according to claim 13, wherein each of the one or more items of camera surveillance footage shows a different view of the location.
15. A non-transitory computer readable medium having stored thereon instructions which, when executed by a processor, cause the processor to carry out a method for identifying potential associates of at least one target person, the method comprising:
receiving a plurality of videos;
identifying appearances of the at least one target person in the plurality of videos;
establishing a plurality of video scenes from the plurality of videos, wherein each one of the plurality of video scenes begins at a first predetermined duration before a first appearance of the at least one target person in the respective video scene and ends at a second predetermined duration after a last appearance of said at least one target person in the respective video scene;
determining individuals who appear in more than a predetermined threshold number of the plurality of video scenes; and
identifying the individuals as potential associates of the at least one target person.
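The procedure recited in claims 1, 8 and 15 — windowing each video around the target's appearances, then counting individuals who co-appear in more than a threshold number of the resulting scenes — can be sketched as follows. This is a minimal illustration only, not the patented implementation; the detection format (one list of `(timestamp, person_id)` pairs per video) and all names are hypothetical assumptions.

```python
from collections import defaultdict

def find_potential_associates(videos, target_id, d1, d2, threshold):
    """videos: one detection list per video, each entry a (timestamp, person_id) pair.
    d1: first predetermined duration before the target's first appearance.
    d2: second predetermined duration after the target's last appearance.
    Returns individuals appearing in more than `threshold` video scenes."""
    scene_counts = defaultdict(int)
    for detections in videos:
        target_times = [t for t, pid in detections if pid == target_id]
        if not target_times:
            continue  # the target does not appear in this video
        # Establish the video scene: a window around the target's appearances.
        start = min(target_times) - d1
        end = max(target_times) + d2
        # Search for individuals co-appearing within the scene.
        seen = {pid for t, pid in detections
                if start <= t <= end and pid != target_id}
        for pid in seen:
            scene_counts[pid] += 1  # counted once per scene, not per frame
    return {pid for pid, count in scene_counts.items() if count > threshold}
```

With a threshold of two, an individual seen inside the scene window of three different videos is flagged as a potential associate, while one seen in a single scene is not.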
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/558,864 US20220114826A1 (en) | 2018-09-06 | 2021-12-22 | Method for identifying potential associates of at least one target person, and an identification device |
US18/139,580 US20230260313A1 (en) | 2018-09-06 | 2023-04-26 | Method for identifying potential associates of at least one target person, and an identification device |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG10201807678WA SG10201807678WA (en) | 2018-09-06 | 2018-09-06 | A method for identifying potential associates of at least one target person, and an identification device |
SG10201807678W | 2018-09-06 | ||
PCT/JP2019/032161 WO2020049980A1 (en) | 2018-09-06 | 2019-08-16 | A method for identifying potential associates of at least one target person, and an identification device |
US202016642279A | 2020-02-26 | 2020-02-26 | |
US17/558,864 US20220114826A1 (en) | 2018-09-06 | 2021-12-22 | Method for identifying potential associates of at least one target person, and an identification device |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/032161 Continuation WO2020049980A1 (en) | 2018-09-06 | 2019-08-16 | A method for identifying potential associates of at least one target person, and an identification device |
US16/642,279 Continuation US11250251B2 (en) | 2018-09-06 | 2019-08-16 | Method for identifying potential associates of at least one target person, and an identification device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/139,580 Continuation US20230260313A1 (en) | 2018-09-06 | 2023-04-26 | Method for identifying potential associates of at least one target person, and an identification device |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220114826A1 true US20220114826A1 (en) | 2022-04-14 |
Family
ID=69722900
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/642,279 Active US11250251B2 (en) | 2018-09-06 | 2019-08-16 | Method for identifying potential associates of at least one target person, and an identification device |
US17/558,864 Abandoned US20220114826A1 (en) | 2018-09-06 | 2021-12-22 | Method for identifying potential associates of at least one target person, and an identification device |
US18/139,580 Pending US20230260313A1 (en) | 2018-09-06 | 2023-04-26 | Method for identifying potential associates of at least one target person, and an identification device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/642,279 Active US11250251B2 (en) | 2018-09-06 | 2019-08-16 | Method for identifying potential associates of at least one target person, and an identification device |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/139,580 Pending US20230260313A1 (en) | 2018-09-06 | 2023-04-26 | Method for identifying potential associates of at least one target person, and an identification device |
Country Status (4)
Country | Link |
---|---|
US (3) | US11250251B2 (en) |
JP (3) | JP6780803B2 (en) |
SG (1) | SG10201807678WA (en) |
WO (1) | WO2020049980A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021002091A (en) * | 2019-06-19 | 2021-01-07 | 富士ゼロックス株式会社 | Information processing system and program |
US11250271B1 (en) * | 2019-08-16 | 2022-02-15 | Objectvideo Labs, Llc | Cross-video object tracking |
CN114342360A (en) * | 2019-08-30 | 2022-04-12 | 日本电气株式会社 | Processing device, processing system, processing method, and program |
US11847814B2 (en) * | 2020-09-14 | 2023-12-19 | Dragonfruit Ai, Inc. | Video data search using color wheel associations |
US11714882B2 (en) * | 2020-10-09 | 2023-08-01 | Dragonfruit Ai, Inc. | Management of attributes associated with objects in video data |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040170326A1 (en) * | 2003-01-27 | 2004-09-02 | Tomonori Kataoka | Image-processing method and image processor |
US20050151842A1 (en) * | 2004-01-09 | 2005-07-14 | Honda Motor Co., Ltd. | Face image acquisition method and face image acquisition system |
US7843495B2 (en) * | 2002-07-10 | 2010-11-30 | Hewlett-Packard Development Company, L.P. | Face recognition in a digital imaging system accessing a database of people |
US20140205158A1 (en) * | 2013-01-21 | 2014-07-24 | Sony Corporation | Information processing apparatus, information processing method, and program |
WO2017163282A1 (en) * | 2016-03-25 | 2017-09-28 | パナソニックIpマネジメント株式会社 | Monitoring device and monitoring system |
US20190342491A1 (en) * | 2018-05-02 | 2019-11-07 | Qualcomm Incorporated | Subject priority based image capture |
US20210174652A1 (en) * | 2016-02-05 | 2021-06-10 | Panasonic Intellectual Property Management Co., Ltd. | Tracking assistance device, tracking assistance system, and tracking assistance method |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003348569A (en) | 2002-05-28 | 2003-12-05 | Toshiba Lighting & Technology Corp | Monitoring camera system |
JP4438443B2 (en) | 2004-02-18 | 2010-03-24 | オムロン株式会社 | Image acquisition device and search device |
JP2008085874A (en) * | 2006-09-28 | 2008-04-10 | Toshiba Corp | Person monitoring system, and person monitoring method |
JP2010191620A (en) * | 2009-02-17 | 2010-09-02 | Fujitsu Ltd | Method and system for detecting suspicious person |
JP6213843B2 (en) * | 2012-09-13 | 2017-10-18 | 日本電気株式会社 | Image processing system, image processing method, and program |
JP6144966B2 (en) * | 2013-05-23 | 2017-06-07 | グローリー株式会社 | Video analysis apparatus and video analysis method |
JP6270410B2 (en) | 2013-10-24 | 2018-01-31 | キヤノン株式会社 | Server apparatus, information processing method, and program |
US10552713B2 (en) | 2014-04-28 | 2020-02-04 | Nec Corporation | Image analysis system, image analysis method, and storage medium |
JP2016143335A (en) | 2015-02-04 | 2016-08-08 | 富士通株式会社 | Group mapping device, group mapping method, and group mapping computer program |
JP6631712B2 (en) | 2015-08-28 | 2020-01-15 | 日本電気株式会社 | Analysis apparatus, analysis method, and program |
US11494579B2 (en) * | 2016-01-29 | 2022-11-08 | Nec Corporation | Information processing apparatus, information processing method, and program |
US9977970B2 (en) * | 2016-06-29 | 2018-05-22 | Conduent Business Services, Llc | Method and system for detecting the occurrence of an interaction event via trajectory-based analysis |
JP6885682B2 (en) * | 2016-07-15 | 2021-06-16 | パナソニックi−PROセンシングソリューションズ株式会社 | Monitoring system, management device, and monitoring method |
US9965687B2 (en) * | 2016-07-27 | 2018-05-08 | Conduent Business Services, Llc | System and method for detecting potential mugging event via trajectory-based analysis |
JP2018061212A (en) | 2016-10-07 | 2018-04-12 | パナソニックIpマネジメント株式会社 | Monitored video analysis system and monitored video analysis method |
JP7040463B2 (en) | 2016-12-22 | 2022-03-23 | 日本電気株式会社 | Analysis server, monitoring system, monitoring method and program |
US10579877B2 (en) * | 2017-01-09 | 2020-03-03 | Allegro Artificial Intelligence Ltd | System and method for selective image processing based on type of detected object |
2018
- 2018-09-06 SG SG10201807678WA patent/SG10201807678WA/en unknown
2019
- 2019-08-16 WO PCT/JP2019/032161 patent/WO2020049980A1/en active Application Filing
- 2019-08-16 JP JP2020508641A patent/JP6780803B2/en active Active
- 2019-08-16 US US16/642,279 patent/US11250251B2/en active Active
2020
- 2020-10-13 JP JP2020172466A patent/JP7302566B2/en active Active
2021
- 2021-12-22 US US17/558,864 patent/US20220114826A1/en not_active Abandoned
2023
- 2023-03-14 JP JP2023040180A patent/JP2023078279A/en active Pending
- 2023-04-26 US US18/139,580 patent/US20230260313A1/en active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7843495B2 (en) * | 2002-07-10 | 2010-11-30 | Hewlett-Packard Development Company, L.P. | Face recognition in a digital imaging system accessing a database of people |
US20040170326A1 (en) * | 2003-01-27 | 2004-09-02 | Tomonori Kataoka | Image-processing method and image processor |
US20050151842A1 (en) * | 2004-01-09 | 2005-07-14 | Honda Motor Co., Ltd. | Face image acquisition method and face image acquisition system |
US20140205158A1 (en) * | 2013-01-21 | 2014-07-24 | Sony Corporation | Information processing apparatus, information processing method, and program |
US20210174652A1 (en) * | 2016-02-05 | 2021-06-10 | Panasonic Intellectual Property Management Co., Ltd. | Tracking assistance device, tracking assistance system, and tracking assistance method |
WO2017163282A1 (en) * | 2016-03-25 | 2017-09-28 | パナソニックIpマネジメント株式会社 | Monitoring device and monitoring system |
US20190132556A1 (en) * | 2016-03-25 | 2019-05-02 | Panasonic Intellectual Property Management Co., Ltd. | Monitoring device and monitoring system |
US20190342491A1 (en) * | 2018-05-02 | 2019-11-07 | Qualcomm Incorporated | Subject priority based image capture |
Also Published As
Publication number | Publication date |
---|---|
JP2020530970A (en) | 2020-10-29 |
US20200394395A1 (en) | 2020-12-17 |
JP7302566B2 (en) | 2023-07-04 |
US20230260313A1 (en) | 2023-08-17 |
JP6780803B2 (en) | 2020-11-04 |
US11250251B2 (en) | 2022-02-15 |
JP2021013188A (en) | 2021-02-04 |
SG10201807678WA (en) | 2020-04-29 |
WO2020049980A1 (en) | 2020-03-12 |
JP2023078279A (en) | 2023-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11250251B2 (en) | Method for identifying potential associates of at least one target person, and an identification device | |
CN105069075B (en) | Photo be shared method and apparatus | |
CN104767963B (en) | Participant's information demonstrating method in video conference and device | |
JP7380812B2 (en) | Identification method, identification device, identification system and identification program | |
CN108197586A (en) | Recognition algorithms and device | |
US20210166040A1 (en) | Method and system for detecting companions, electronic device and storage medium | |
CN108848334A (en) | A kind of method, apparatus, terminal and the storage medium of video processing | |
CN109948494A (en) | Image processing method and device, electronic equipment and storage medium | |
WO2020050002A1 (en) | Duration and potential region of interest for suspicious activities | |
CN113378616A (en) | Video analysis method, video analysis management method and related equipment | |
CN111553372A (en) | Training image recognition network, image recognition searching method and related device | |
CN115702446A (en) | Identifying objects within images from different sources | |
CN109145878B (en) | Image extraction method and device | |
US20190082002A1 (en) | Media file sharing method, media file sharing device, and terminal | |
CN107733874A (en) | Information processing method, device, computer equipment and storage medium | |
US20150081699A1 (en) | Apparatus, Method and Computer Program for Capturing Media Items | |
CN105653623B (en) | Picture collection method and device | |
US20240037760A1 (en) | Method, apparatus and non-transitory computer readable medium | |
EP3975132A1 (en) | Identifying partially covered objects utilizing simulated coverings | |
CN115379260B (en) | Video privacy processing method and device, storage medium and electronic device | |
US20230195934A1 (en) | Device And Method For Redacting Records Based On A Contextual Correlation With A Previously Redacted Record | |
US20240062635A1 (en) | A method, an apparatus and a system for managing an event to generate an alert indicating a subject is likely to be unauthorized | |
CN115221547A (en) | Image processing method and device, electronic equipment and computer storage medium | |
CN106101531B (en) | Network image acquisition and content playing system | |
KR20230006078A (en) | Recording medium for recording a program for operating an apparatus providing a face image conversion processing service |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |