WO2011008071A1 - Method and system for use in gaze suspicious analysis - Google Patents
- Publication number
- WO2011008071A1 (PCT/MY2010/000118)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sight
- line
- area
- head
- map
- Prior art date: 2009-07-13
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
        - G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V20/00—Scenes; Scene-specific elements
        - G06V20/50—Context or environment of the image
          - G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Eye Examination Apparatus (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention relates to a method and system for use in gaze suspicious analysis or systems of the like. The method comprises the main steps of providing an image sequence of the monitored area; detecting and thus determining the line-of-sights based on human gaze direction within said area; detecting and thus determining the area-of-sight in consideration of the line-of-sight information obtained from steps a) to b); and finally providing a suspicious event alert based on the information obtained from steps a) to c).
Description
METHOD AND SYSTEM FOR USE IN GAZE SUSPICIOUS ANALYSIS

FIELD OF INVENTION

The present invention relates to a system and method thereof for use in determining the common area-of-sight; and more particularly to a system and method thereof for use in determining the common area-of-sight in relation to a gaze suspicious analysis.
BACKGROUND OF INVENTION
At present, the great majority of systems on the market require manual monitoring: they are configured to be manually operated and act as a means of assisting the responsible security personnel in monitoring an area through camera views in order to detect the presence of any suspicious events. However, semi-automated systems are progressively replacing these cumbersome manual systems, making it possible for security personnel to be readily alerted before any incoming suspicious event occurs within the monitored area. In order to accomplish this, gaze analysis is vital.
In gaze direction analysis, the primary objective is to observe and thereby record the detailed gaze directions of a human. Such observations are typically coupled with advanced tools, for instance optical cameras or other suitable image-capturing means installed within the desired area. An incoming suspicious occurrence can then be detected in the event that the current gaze direction deviates from the common area-of-sight.

The common area-of-sight is defined as the area on which the majority, if not all, of the gaze directions are focused throughout a predetermined period. Revolutionary changes in technology over the past decades have resulted in various systems and methods that support or perform gaze direction analysis in order to obtain the common area-of-sight. These numerous existing systems and methods mainly aim to meet user demand and to overcome core challenges that commonly surface in such systems, including detection accuracy, simplicity of mechanism and cost effectiveness.
Existing prior art includes determining a point of interest based on the intersection of line-of-sights, on facial expressions, and on the intersection between an object of interest and the line of sight of an individual.
The present invention has accordingly been developed to provide a more robust and automated system, and to provide a solution to the core shortcomings of the existing systems and methods for gaze direction analysis discussed in the preceding paragraphs.
It is therefore a primary object of the present invention to provide a system and method for use in determining the common area-of-sight for gaze analysis in detecting a prior suspicious occurrence within an area. It is a further object of the invention to provide a system and method for use in determining the common area-of-sight for gaze analysis by way of determining whether the gaze directions deviate by more than the provided or allowed range from the common area-of-sight.

It is yet another object of the present invention to provide a system and method for use in determining the common area-of-sight for gaze analysis, said system comprising line-of-sight detection, area-of-sight detection and suspicious alertness sub-systems.
These and other objects, features and advantages of the present invention will be apparent from the following detailed description of preferred embodiments thereof.
BRIEF DESCRIPTION OF DRAWINGS
The invention will be better understood by reference to the description below, taken in conjunction with the accompanying drawings herein:
FIG 1 shows the overall view of the system and method in accordance with a preferred embodiment of the present invention;
FIG 2 shows the offline and online processes involved in determining line of sight;
FIG 3 shows the head angle in a plurality of directions, represented by values from 1 to 8, based on a preferred embodiment of the present invention;
FIG 4 shows an exemplary line-of-sight map drawn from the coordinate (5, 5) with a head angle value equivalent to 6;
FIG 5 shows exemplary output from the line-of-sight detection unit based on a preferred embodiment of the present invention;
FIG 6 shows a block diagram of the area-of-sight detection unit based on the preferred embodiment of the present invention;
FIG 7 shows an overall view of the steps involved in determining the coverage field for each line-of-sight;
FIG 8 shows the left and right adjacent coverage field of a pixel in the line-of-sight;
FIG 9 shows the coverage field of a line-of-sight with an example direction and coverage value;
FIG 10 (a) to FIG 10 (d) show, respectively, the coverage field obtained for the first line-of-sight; the coverage field obtained for the second line-of-sight; the weights incremented by the first line-of-sight; and the weights further incremented by the second line-of-sight; and
FIG 11 shows the overall view of the steps involved in normalizing the area-of-sight map.

SUMMARY
The present invention provides a method for use in determining a common area-of-sight in detecting a prior suspicious event within a monitored area, said method comprising the steps of: a) providing an image sequence of the monitored area; b) detecting and thus determining the line-of-sights based on human gaze direction within said area, said detecting of the line-of-sights further comprising the steps of: subjecting said image sequence to a motion detection process; subjecting the resultant from the motion detection process to a human detection process; subjecting the resultant from the human detection process to a head detection process; determining head angles based on the resultant from the head detection process; creating classifiers for head angle directions; predicting the head angles and using said head angles for estimating and thus determining the line-of-sights based on the created classifiers; wherein said estimating of the line-of-sights comprises the step of quantizing said head angles into a plurality of different directions; c) detecting and thus determining the area-of-sight in consideration of the line-of-sight information obtained from steps a) to b); and d) providing a suspicious event alert based on the information obtained from steps a) to c).
In another embodiment there is provided a system for use in determining a common area-of-sight in detecting a prior suspicious event within a monitored area, said system comprising: at least one means (100) configured for detecting and thus determining the line-of-sights based on human gaze direction within said area; at least one means (102) configured for detecting and thus determining the area-of-sight in consideration of the line-of-sight information; and at least one means (104) configured for providing a suspicious event alert based on the area-of-sight information.
DETAILED DESCRIPTION
In line with the above summary, the following description of a number of specific and alternative embodiments is provided for an understanding of the inventive features of the present invention. It shall be apparent to one skilled in the art, however, that this invention may be practiced without such specific details.
It is further noted that the exemplifications and standard procedures provided are meant to better elucidate the operational effect and embodiments of the present invention and therefore should not be construed as limiting the scope of protection.

FIG 1 illustrates the overall architecture of the system and method in accordance with a preferred embodiment of the present invention, whereby the core components comprise at least one unit or sub-system for use in line-of-sight detection (100), at least one unit for area-of-sight detection (102) and at least one unit for use in providing the suspicious alertness (104) feature.
Line-of-Sight Unit
There are two predicting and detecting processes involved with respect to the line-of-sight unit (100). The first is an offline process, whose primary role is to create a classifier for the at least one head angle direction from a set of labeled head images with known directions. The second process, referred to herein as the online process, is provided to predict the at least one head angle as a line-of-sight from the detected head, using the classifier created by the offline process.
It is noted that the line-of-sight unit (100) detects gaze directions as line-of-sights, each defined as a particular line propagating in a detected direction from the centre point of a person's head.
Referring now to FIG 2, the offline and online processes are performed by this unit on every person or individual who appears or is present within the monitored area. In the first process, the offline mode, the image sequence captured by suitable capturing means installed within the monitored area is fed into a motion detection process, in which the moving object is delineated from the background. The delineation or extraction process involves estimating the background from a specified number of empty images which do not contain any moving objects. Subsequent images within the image sequence are compared against the estimated background and the intensity difference at each pixel is determined. A substantial or significant change of intensity at a pixel may indicate that it is a motion pixel. The outcome of this process is motion blobs, which are groupings of all detected and connected motion pixels. The respective motion blobs are then subjected to a human detection process so as to detect the presence of human blobs. The blob area of each human blob is analyzed to determine the head location by means of a heuristic approach. From here, each detected head with known direction is used as a training image in the classifier training process. Once an amount of data sufficient for the classifier training process has been obtained, the training process is initialized and continues until an optimum classifier is obtained, and the resulting classifier is stored in a classifier storage.
The second mode, subsequent to the offline mode described above, is the online mode or process, whereby the steps involved are similar to those of the offline mode: the motion detection process, the human detection process and the head detection process. In the online mode, however, the head detection process does not involve the classifier training process. Instead, the detected head from the head detection process is used as the source for determining the head angle in the head angle determination process.
From the above, the relevant features of each detected head are extracted from the head location and used as input to the trained classifier retrieved from the classifier storage. The output obtained from the trained classifier is referred to as the predicted head angle. The predicted head angle is fed into the line-of-sight estimation process in order to estimate the coordinates of the line-of-sight appearing in the image. The details of the line-of-sight estimation process are described further herein. FIG 3 shows a step involved in the line-of-sight process. The predicted head angle obtained from the previous steps is quantized into eight directions, each of which deviates from its neighbouring direction by a preferred angle of 45 degrees. The quantized head angle is labeled with one of eight numerical values as seen in FIG 3. The values 1 through 8 preferably represent angles of the head: the first value represents a left angle, the second a top left angle, the third the top angle, the fourth a top right angle, the fifth a right angle, the sixth a bottom angle, the seventh the bottom right angle and the eighth the bottom left angle of the head. The primary purpose of having at least eight head directions or angles is to allow the line-of-sight to be easily determined on the image, wherein the head angles aid in deciding or identifying which adjacent pixels are related to the respective line-of-sight.
In the determination of the line-of-sight, a line is drawn from the centre point of the detected head area in the direction of the predicted head angle. For instance, referring to FIG 4, the pixels in an image of size 10x10 are determined as pixels related to the line-of-sight given that the (X, Y) coordinate of the centre point of the detected head is (5, 5) and the head angle is equivalent to value 6. Adjacent pixels are continuously determined as line pixels with respect to the head angle until the line reaches the edge of the image. FIG 5 shows a sample of the output from the line-of-sight detection unit, in which three line-of-sights are detected. One line-of-sight originates from the (X, Y) coordinate (5, 5) with a head angle value of 6, another originates from the coordinate (2, 4) with a head angle value of 8, and the third originates from the coordinate (9, 5) with a head angle value of 6.
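The following sketch illustrates one possible reading of the line-of-sight tracing just described; the (dx, dy) step assigned to each head angle value and the use of image coordinates with y increasing downwards are assumptions made for illustration, not part of the disclosure.

```python
# Hypothetical (dx, dy) step for each of the eight quantized head-angle values,
# following the left / top-left / ... / bottom-left labelling described for FIG 3.
# Image coordinates are assumed, with y increasing downwards.
DIRECTIONS = {
    1: (-1, 0),   # left
    2: (-1, -1),  # top left
    3: (0, -1),   # top
    4: (1, -1),   # top right
    5: (1, 0),    # right
    6: (0, 1),    # bottom
    7: (1, 1),    # bottom right
    8: (-1, 1),   # bottom left
}

def line_of_sight(head_centre, head_angle_value, image_width, image_height):
    """Trace the line-of-sight pixels from the centre of the detected head,
    stepping in the quantized direction until the image border is reached."""
    dx, dy = DIRECTIONS[head_angle_value]
    x, y = head_centre
    pixels = []
    while 0 <= x < image_width and 0 <= y < image_height:
        pixels.append((x, y))
        x, y = x + dx, y + dy
    return pixels

# Example from FIG 4: head centre (5, 5), head angle value 6, 10x10 image.
print(line_of_sight((5, 5), 6, 10, 10))  # [(5, 5), (5, 6), (5, 7), (5, 8), (5, 9)]
```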
Area-of-Sight Detection Unit
FIG 6 shows the block diagram of the area-of-sight detection unit in accordance with a preferred embodiment of the present invention. The information obtained from the line-of-sight unit is used to determine the area-of-sight within the area-of-sight unit. There are three core processes involved: computing the coverage field for each line-of-sight, incrementing the weights at the locations of the common area map that overlap with the coverage field, and normalizing the common area map. The dimension of the common area map is equivalent to that of the captured image of the monitored area. Initially, the values at all locations in the common area map are set to 0. The common area map is normalized to the range 0 to 100 throughout the predefined duration of time.
In the first core process for this unit, which is computing the coverage field for each line-of-sight as seen in FIG 7, each pixel in the line-of-sight is determined to be part of the coverage field. A counter value is assigned to each pixel in the line-of-sight, whereby the counter value preferably represents the sequence of pixels in the line-of-sight as depicted in EQUATION 1 below. For instance, the first pixel in the line-of-sight has a counter value of 0. Accordingly:
Counter Value = i - 1, where i is the index of the pixel in the sequence {1 ... number of pixels}

in which

Counter Value = |i - m|, where i refers to the column number of the neighboring pixel and m refers to the column number of the origin of the head (it should be noted that i is the column number of the pixel to be evaluated).

(EQUATION 1)
The next step of the first process is the calculation of the number of offset pixels used to compute the coverage field for each pixel in the line-of-sight. The number of offset pixels depends on the location of the pixel in the line-of-sight and the predefined coverage value. The offset numbers are calculated based on the following equation (EQUATION 2):
Number of offset pixels = counter value * coverage value (EQUATION 2)
For example, with a predefined coverage value of 0.5, the number of offset pixels based on the above equation is 5 for a pixel of the line-of-sight whose counter value equals 10.
The next step involves computing the left and right adjacent pixels, in the same direction as the head angle of the line-of-sight, to be part of the coverage field. The number of left and right adjacent pixels is equivalent to the number of offset pixels calculated in the second step. It is noted that the number of left or right adjacent pixels can be fewer than the number of offset pixels if the far left or right adjacent pixel falls outside the image boundaries.
FIG 8 illustrates the distribution of left and right adjacent pixels for a pixel of the line-of-sight with a head angle equivalent to 7, a counter value of four (4) and a coverage value of 0.5. Still referring to FIG 8, the number of offset pixels is 2 (4 x 0.5). Therefore, the coverage field is distributed two pixels to the left and right of the pixel of the line-of-sight whose counter value equals 4.
The first three steps are repeated for all the pixels in the line-of-sight. FIG 9 depicts the full coverage field of a line-of-sight similar to that in FIG 8. Referring to FIG 9, the coverage field is distributed along the pixels in the line-of-sight until it reaches the edge of the image.
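A sketch of the coverage field computation per EQUATIONS 1 and 2 is given below, reusing the hypothetical DIRECTIONS table from the earlier sketch; interpreting the "left and right adjacent pixels" as steps perpendicular to the line-of-sight direction is an assumption made for illustration.

```python
def coverage_field(line_pixels, head_angle_value, coverage_value,
                   image_width, image_height):
    """Compute the coverage field of one line-of-sight (EQUATIONS 1 and 2).

    Pixel i of the line has counter value i - 1; it contributes itself plus
    int(counter * coverage_value) offset pixels to each side, taken here along
    the direction perpendicular to the line-of-sight step."""
    dx, dy = DIRECTIONS[head_angle_value]
    px, py = -dy, dx  # perpendicular step used for the left/right offsets
    field = set()
    for counter, (x, y) in enumerate(line_pixels):   # counter value = i - 1
        offsets = int(counter * coverage_value)      # EQUATION 2
        for o in range(-offsets, offsets + 1):
            cx, cy = x + o * px, y + o * py
            if 0 <= cx < image_width and 0 <= cy < image_height:
                field.add((cx, cy))                  # clip at the image border
    return field
```

With the FIG 8 values (head angle 7, counter value 4, coverage value 0.5), the pixel contributes 2 offset pixels to each side, matching the text above.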
The second process performed by the area-of-sight detection unit involves initializing the weights in the common area map to 0. The weight at each location covered by a coverage field is then incremented by 1. In the event that more than one coverage field falls on the same weight in the common area map, the weight is incremented by the number of coverage fields, as shown in FIG 10(a) to FIG 10(d). FIG 10 (a) and FIG 10 (b) are the coverage fields of the first and second line-of-sights respectively, whilst FIG 10(c) shows the weights at the locations that overlap with the coverage field of the first line-of-sight being incremented by 1. The weights of the same common area map are then incremented at the locations that overlap with the coverage field of the second line-of-sight, as shown in FIG 10 (d).
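A minimal sketch of this weight-increment process follows, assuming the common area map is held as a NumPy array with the same dimensions as the captured image.

```python
import numpy as np

def update_common_area_map(common_area_map, coverage_fields):
    """Increment the weight at every common-area-map location that overlaps a
    coverage field; locations covered by several fields are incremented once
    per field."""
    for field in coverage_fields:
        for x, y in field:
            common_area_map[y, x] += 1
    return common_area_map

# The map has the same dimensions as the captured image and starts at 0:
#   common_area_map = np.zeros((image_height, image_width), dtype=int)
```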
In the third and final process of the area-of-sight detection unit, the common area map is updated throughout the predefined period of time and also normalized at least once per image interval. It should be mentioned that the weights in the common area map represent the confidence value for each point in the map being part of the area-of-sight. The confidence values may range from 0 to 100. FIG 11 shows the steps involved in determining the largest weight in the area-of-sight map at each image interval, and thus the steps for normalizing the area-of-sight map. In the case where the largest weight is more than 100 for a given interval, each weight in the area-of-sight map is reduced by the difference between the calculated largest weight and 100, as provided in EQUATION 3. In the case where no weight exceeds 100, each weight in the area-of-sight map is reduced by a predefined deteriorate value, as provided in EQUATION 4.
New Weight = Old Weight - (Largest Weight - 100) (EQUATION 3)

New Weight = Old Weight - D, where D is the deteriorate value (EQUATION 4)
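The normalization of EQUATIONS 3 and 4 can be sketched as follows; clamping the weights at zero is an assumption consistent with the stated 0 to 100 confidence range, not an explicit step of the disclosure.

```python
def normalize_area_of_sight_map(weights, deteriorate_value):
    """Normalize the area-of-sight map at an image interval.

    EQUATION 3: if the largest weight exceeds 100, subtract (largest - 100)
    from every weight. EQUATION 4: otherwise subtract the predefined
    deteriorate value from every weight."""
    largest = weights.max()
    if largest > 100:
        weights = weights - (largest - 100)   # EQUATION 3
    else:
        weights = weights - deteriorate_value # EQUATION 4
    return np.clip(weights, 0, None)          # assumption: weights never go below 0
```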
Suspicious Alertness Unit
The suspicious alertness unit plays a role in determining the suspicion level within the scene. For this process, the common area map is subjected to further analysis by this unit. The area-of-sight is determined where the weights in the map exceed a predefined average value. If a gaze direction deviates from the area-of-sight by more than a predefined allowed range, the unit is prompted to activate or actuate an alert for users.
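The deviation test below is one plausible reading of the suspicious alertness unit: the area-of-sight is taken as the locations whose weight exceeds the map average, and the gaze is considered deviating if none of its line-of-sight pixels comes within the allowed range of that area. Measuring the deviation with Chebyshev distance is an assumption; the disclosure does not specify the distance measure.

```python
def suspicious_alert(common_area_map, line_pixels, allowed_range):
    """Return True when a gaze deviates from the common area-of-sight.

    The area-of-sight is the set of map locations whose weight exceeds the
    map's average; an alert is raised if no pixel of the detected line-of-sight
    comes within 'allowed_range' (Chebyshev distance) of that area."""
    threshold = common_area_map.mean()
    area_of_sight = np.argwhere(common_area_map > threshold)  # (row, col) pairs
    if area_of_sight.size == 0:
        return False  # no area-of-sight established yet
    for x, y in line_pixels:
        dist = np.abs(area_of_sight - np.array([y, x])).max(axis=1)
        if dist.min() <= allowed_range:
            return False  # gaze still falls within the allowed range
    return True
```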
Based on the provided preferred embodiments, the system and method of the present invention allow automated notification of any suspicious event within an area, whereby users or security personnel can be alerted beforehand to any suspicious occurrence which may lead to a perilous outcome.
Having thus described the preferred embodiments of the present invention, it should be noted by the person skilled in the relevant art that there may be modifications within the scope of the invention. Therefore, the present invention is not limited to the specific embodiments illustrated herein, but only by the following claims.
Claims
1. A method for use in determining a common area-of-sight in detecting prior suspicious event within a monitored area, said method comprising the steps of:
a) providing an image sequence of the monitored area;
b) detecting and thus determining the line-of-sights based on human gaze direction within said area; said detecting the line-of-sights further comprising the steps of:
subjecting said image sequence to a motion detection process;
subjecting said resultant from the motion detection process to a human detection process; subjecting said resultant from the human detection process to a head detection process; determining head angles based on the resultant from the head detection process;
creating classifiers for head angle directions;
predicting the head angles and using said head angles for estimating and thus determining the line-of-sights based on the created classifiers; wherein said estimating of the line-of-sights comprises the step of quantizing said head angles into a plurality of different directions; c) detecting and thus determining the area-of-sight in consideration of the line-of-sight information obtained from steps a) to b); and
d) providing a suspicious event alert based on the information obtained from steps a) to c).
2. A method as claimed in Claim 1 wherein the detecting and thus determining the area-of-sight further comprises the steps of providing a common area map based on captured images within the monitored area, determining the coverage field for each line-of-sight on said map, incrementing the weights in the common area map at locations that overlap with the coverage fields and normalizing the common area map.
3. A method as claimed in Claims 1 to 2 wherein the step of determining the coverage field for each line-of-sight on the map comprises the steps of:
a) relating the pixel in the line-of-sight as coverage field;
b) calculating the number of offset pixels for the coverage field of the pixel in the line-of-sight;
c) relating the number of offset pixels on adjacent pixels with the similar direction of line-of-sight as coverage field; and
d) repeating steps a) to b) for each pixel in the line-of-sight.
4. A method as claimed in Claims 1 to 3 wherein the step of relating the pixel in the line-of-sight as coverage field comprises the step of assigning a counter value to each pixel in the line-of-sight, wherein each counter value represents the sequence of pixels in the line-of-sight based on the following equation:
Counter Value = i - 1, where i is the index of the pixel in the sequence {1 ... number of pixels};

in which

Counter Value = |i - m|, where i refers to the column number of the neighboring pixel and m refers to the column number of the origin of the head; and i is the column number of the pixel to be evaluated.
5. A method as claimed in Claims 1 to 4 wherein the step of calculating the number of offset pixels for the coverage field of each pixel in the line-of-sight is performed in consideration of the sequence of the pixel in the line-of-sight and the predefined coverage value, and based on the equation:
Number of offset pixels = counter value * coverage value
6. A method as claimed in Claims 1 to 2 wherein the step of incrementing the weights in the common area map at locations that overlap with the coverage fields comprises the steps of: a) initializing the weights in the common area map to 0;
b) incrementing each coverage field weight by any positive integer;
wherein in the event that more than one coverage field falls on the same weight in the common area map, the weight is incremented by the number of coverage fields.
7. A method as claimed in Claim 6 wherein incrementing each coverage field is performed consistently.
8. A method as claimed in Claim 6 wherein each coverage field weight is incremented by 1.
9. A method as claimed in Claims 1 to 2, wherein normalizing the common area map comprises the step of normalizing the common area map for at least one image interval.
10. A method as claimed in Claim 7 wherein normalizing the common area map further comprises the steps of:
a. representing the weights in the common area map with a confidence value for each point in the map to be area-of-sight, wherein said confidence value ranges from 0 to 100;
b. determining the largest weight in the area-of-sight map at each image interval;
c. reducing each weight by the difference between the largest weight and 100 in the event that the largest weight is more than 100, based on the equation:

New Weight = Old Weight - (Largest Weight - 100);
d. reducing each weight by a predetermined deteriorate value in the event that none of the weights is larger than 100, based on the equation:

New Weight = Old Weight - D, where D is the deteriorate value.
11. A method as claimed in Claims 1 to 2 wherein the step of providing the suspicious alert further comprises the steps of computing the area-of-sight if the weights in the map are more than a predefined average value and prompting the suspicious alert if the gaze direction deviation is larger than the predefined allowed range.
12. A method as claimed in Claim 1 wherein the steps of motion detection, human detection, head detection based on the image sequence and creating classifiers for head angles are performed in offline mode.
13. A method as claimed in Claim 1 wherein the steps of head angle prediction and line-of-sight estimation are performed in online mode.
14. A method as claimed in Claim 1 wherein the step of quantizing the head angles into a plurality of different directions comprises the step of quantizing the head angles into eight different directions, thus labeled with eight different values, wherein said directions are left, top left, top, top right, right, bottom right, bottom and bottom left.
15. A method as claimed in Claim 12 wherein the head directions deviate from one another by an angle of 45 degrees.
16. A system for use in determining a common area-of-sight in detecting prior suspicious event within a monitored area, said system comprising: at least one means (100) configured for detecting and thus determining the line-of-sights based on human gaze direction within said area;
at least one means (102) configured for detecting and thus determining the area-of-sight in consideration of the line-of-sight information; and
at least one means (104) configured for providing a suspicious event alert based on the area-of-sight information.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| MYPI20092938 | 2009-07-13 | | |
| MYPI20092938A MY147718A (en) | 2009-07-13 | 2009-07-13 | Method and system for use in gaze suspicious analysis |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2011008071A1 (en) | 2011-01-20 |
Family
ID=43449551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/MY2010/000118 WO2011008071A1 (en) | 2009-07-13 | 2010-07-12 | Method and system for use in gaze suspicious analysis |
Country Status (2)
Country | Link |
---|---|
MY (1) | MY147718A (en) |
WO (1) | WO2011008071A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017059058A1 (en) * | 2015-09-29 | 2017-04-06 | Panasonic Intellectual Property Management Co., Ltd. | A system and method for detecting a person interested in a target |
2009
- 2009-07-13 MY MYPI20092938A patent/MY147718A/en unknown
2010
- 2010-07-12 WO PCT/MY2010/000118 patent/WO2011008071A1/en active Application Filing
Non-Patent Citations (2)
Title |
---|
DICK ET AL.: "Issues in automated visual surveillance", PROC. VIIITH DIGITAL IMAGE, 2003, pages 195 - 204 * |
KAMINSKI ET AL.: "Three-Dimensional Face Orientation and Gaze Detection from a Single Image", 4 August 2004 (2004-08-04), Retrieved from the Internet <URL:http://arxiv.org/PS_cache/cs/pdf/0408/0408012vl.pdf> [retrieved on 20100928] * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017059058A1 (en) * | 2015-09-29 | 2017-04-06 | Panasonic Intellectual Property Management Co., Ltd. | A system and method for detecting a person interested in a target |
US10699101B2 (en) | 2015-09-29 | 2020-06-30 | Panasonic Intellectual Property Management Co., Ltd. | System and method for detecting a person interested in a target |
Also Published As
Publication number | Publication date |
---|---|
MY147718A (en) | 2013-01-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11710075B2 (en) | Hazard recognition | |
US9036039B2 (en) | Apparatus and method for acquiring face image using multiple cameras so as to identify human located at remote site | |
CN102521578B (en) | Method for detecting and identifying intrusion | |
Vishwakarma et al. | Automatic detection of human fall in video | |
KR101337060B1 (en) | Imaging processing device and imaging processing method | |
US10607348B2 (en) | Unattended object monitoring apparatus, unattended object monitoring system provided with same, and unattended object monitoring method | |
US9805265B2 (en) | Surveillance camera control device and video surveillance system | |
US11625949B2 (en) | Face authentication apparatus | |
US20150338497A1 (en) | Target tracking device using handover between cameras and method thereof | |
CN102831615A (en) | Object monitoring method and device as well as monitoring system operating method | |
CN113033521B (en) | Perimeter dynamic early warning method and system based on target analysis | |
WO2017126187A1 (en) | Video monitoring apparatus and video monitoring method | |
JP7418315B2 (en) | How to re-identify a target | |
CN111401239B (en) | Video analysis method, device, system, equipment and storage medium | |
JP5088463B2 (en) | Monitoring system | |
JP2007219603A (en) | Person tracking device, person tracking method and person tracking program | |
US20180144198A1 (en) | Information processing apparatus, information processing method, and storage medium | |
CN115205581A (en) | Fishing detection method, fishing detection device and computer readable storage medium | |
WO2011008071A1 (en) | Method and system for use in gaze suspicious analysis | |
KR102464196B1 (en) | Big data-based video surveillance system | |
KR102690927B1 (en) | Appartus of providing service customized on exhibit hall and controlling method of the same | |
JP5777389B2 (en) | Image processing apparatus, image processing system, and image processing method | |
Li et al. | Adaptive multiple video sensors fusion based on decentralized Kalman filter and sensor confidence | |
JP2009140360A (en) | Behavior analysis device and method | |
Samundeswari et al. | Real-time Crime Detection Using Customized CNN |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10800081; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 10800081; Country of ref document: EP; Kind code of ref document: A1 |