IL200219A - System and method for identifying that a human is under threat - Google Patents
System and method for identifying that a human is under threat
- Publication number
- IL200219A
- Authority
- IL
- Israel
- Prior art keywords
- threat
- human
- processing unit
- voice
- designated area
- Prior art date
Links
- 241000282414 Homo sapiens Species 0.000 title claims description 60
- 238000000034 method Methods 0.000 title claims description 27
- 238000012545 processing Methods 0.000 claims description 52
- 241000282412 Homo Species 0.000 claims description 16
- 230000007246 mechanism Effects 0.000 claims description 14
- 238000004458 analytical method Methods 0.000 claims description 8
- 230000008569 process Effects 0.000 claims description 8
- 230000009429 distress Effects 0.000 claims description 6
- 230000002159 abnormal effect Effects 0.000 claims description 5
- 230000009471 action Effects 0.000 claims description 5
- 230000004927 fusion Effects 0.000 claims description 5
- 238000013475 authorization Methods 0.000 claims description 4
- 230000004044 response Effects 0.000 claims description 4
- 208000003028 Stuttering Diseases 0.000 claims description 3
- 230000006399 behavior Effects 0.000 description 28
- 238000012360 testing method Methods 0.000 description 6
- 230000000007 visual effect Effects 0.000 description 6
- 238000004891 communication Methods 0.000 description 4
- 238000001514 detection method Methods 0.000 description 4
- 210000000887 face Anatomy 0.000 description 4
- 230000001815 facial effect Effects 0.000 description 4
- 230000003340 mental effect Effects 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000004424 eye movement Effects 0.000 description 2
- 230000004886 head movement Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000036651 mood Effects 0.000 description 2
- 238000003672 processing method Methods 0.000 description 2
- 230000035882 stress Effects 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 230000001755 vocal effect Effects 0.000 description 2
- 206010000117 Abnormal behaviour Diseases 0.000 description 1
- 206010028347 Muscle twitching Diseases 0.000 description 1
- 230000005856 abnormality Effects 0.000 description 1
- 230000004913 activation Effects 0.000 description 1
- 230000003542 behavioural effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000001276 controlling effect Effects 0.000 description 1
- 230000036461 convulsion Effects 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 230000000875 corresponding effect Effects 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 230000000977 initiatory effect Effects 0.000 description 1
- 238000011835 investigation Methods 0.000 description 1
- 210000000056 organ Anatomy 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Landscapes
- Alarm Systems (AREA)
Description
SYSTEM AND METHOD FOR IDENTIFYING THAT A HUMAN IS UNDER THREAT
Pearl Cohen Zedek Latzer P-71563-IL
BACKGROUND OF THE INVENTION
[001] In many circumstances it is desirable to provide a way to identify situations in which an individual is under threat. One such instance may be when there is an attempt to deceive an entry control system by forcing an authorized human to pass the system in order to enter a restricted area against his or her free will. In such circumstances it may be beneficial to identify when the authorized human is trying to enter the restricted area due to intimidation and threats imposed on the authorized human by others.
[002] The recognition and identification of threatened individuals is typically based, according to known methods, on analysis of an individual's behavior. Such analysis attempts to detect abnormalities in the individual's behavior in comparison to the individual's behavior under normal conditions. However, known automated methods fail to identify a threatened individual when that individual tries to conceal his or her condition. Furthermore, known automated methods are further limited by the wide range of natural human behaviors. Video processing methods using image analysis of the human's face and behavior, and audio processing methods using voice analytics, have been described in recent publications; however, these methods are not accurate and are unlikely to provide a reliable identification of a threatened individual.
SUMMARY OF THE INVENTION
[003] The present invention relates to a system and method for identifying that a human is under threat or in abnormal distress. The system according to some embodiments of the present invention may comprise at least one imager, one or more voice sensors and one or more speakers and/or visual presentation means such as a monitor, all in active communication with a processing unit. The processing unit may comprise a video processing unit, an audio processing unit and a questioning generator. The system according to embodiments of the present invention may analyze information received from the imagers and/or voice sensors to automatically determine whether a human is under threat.
BRIEF DESCRIPTION OF THE DRAWINGS
[004] The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
[005] Fig. 1 is a block diagram of an embodiment of a system according to the present invention; and
[006] Fig. 2 is a flowchart illustrating a method according to embodiments of the present invention.
[007] It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
[008] In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.
[009] Embodiments of the present invention present a system and method for identifying that a human is under threat or abnormal stress.
[0010] Reference is now made to Fig. 1, which is a block diagram of a system 10 according to one embodiment of the present invention. System 10 may comprise at least one imager 14 which is in active communication with a processing unit 20. System 10 may further comprise one or more voice sensors 16 and one or more speakers and/or visual presentation means 18, both in active communication with processing unit 20. Processing unit 20 may comprise a video processing unit 20A, an audio processing unit 20B and a questioning generator 20C. Each one of video processing unit 20A, audio processing unit 20B and questioning generator 20C may be embodied in hardware, in software or in any suitable combination thereof. In other embodiments, processing unit 20 may be associated with a remote storage means accessible to processing unit 20. Further, video processing unit 20A, audio processing unit 20B and questioning generator 20C may be embodied, all of them or some of them, in a single unit; according to other embodiments these units may be embodied in different units, or any combination thereof. Processing unit 20 may be in active communication with a storage means, such as database 22, which may be external to processing unit 20 as in Fig. 1 or comprised in it. System 10 may be adapted to monitor the entry of one or more humans into a restricted area. These one or more humans may be present, when examined by system 10, in designated area 24, such as a security check booth. Imager 14 is adapted to capture video and still images of designated area 24 and transfer the images to video processing unit 20A in processing unit 20. Imager 14 may be, according to some embodiments of the present invention, initiated by an initialization means 12. Initialization means 12 may be, in some embodiments of the present invention, a motion detector or a volume detector, either disposed so as to detect the presence of one or more humans in designated area 24.
Alternatively, other initialization means known in the art may be used. In response to the activation by initialization means 12, system 10 may be initialized, or put into an operational mode, due to the detection of an external event. Other embodiments of the present invention may comprise an imager, such as imager 14, that operates substantially constantly and thus eliminates the need for imager initialization means 12. In further embodiments of a system according to the present invention, imager 14 may be initiated manually when identification of an examined human is required. For example, when system 10 is part of an entry control system controlling the entrance to a restricted area, system 10 may be initiated only when someone requests to enter the restricted area.
[0011] Video processing unit 20A may include software and/or hardware and/or firmware adapted to analyze and extract information from video images received from imager 14. Video processing unit 20A may, for instance, extract from the images it receives from imager 14 the number of humans that entered designated area 24, analyze the images of the detected faces of the humans and identify them. Identification of the detected faces may be done using, for example, biometric video recognition methods known in the art. Video processing unit 20A may extract from the images it receives from imager 14 information that may be indicative of the human's level of threat. According to an embodiment of the present invention, video processing unit 20A may operate a tracking mechanism 21A adapted to track the location of the humans in designated area 24 at any given moment. Tracking mechanism 21A may search for faces in the images received from imager 14. According to some embodiments of the invention, a face is any portion of an image with two visible eyes. Once a face is detected it receives a tag with a temporary unique face number and X,Y pixel-based coordinates of its location in the 2D image.
As the face identification is performed on a two-dimensional picture, the location of the face is not correlated to the room and the distance of the face from imager 14 is not known. An additional parameter measured by tracking mechanism 21A is the distance between the eyes of the human or humans in designated area 24, which is logged in the tag. The distance between the eyes may be used to evaluate the distance and the speed of the movement of the tracked human relative to imager 14. Tracking mechanism 21A may track the detected face, updating the three parameters (X, Y, eye distance) every frame. The tracker may also provide speed and velocity information calculated from the location information using a double derivative. Tracking mechanism 21A may be used in order to recognize one or more faces in designated area 24 only once. However, according to some embodiments of the present invention, tracking mechanism 21A may be used, additionally or alternatively, in order to track the movement of a selected portion of a face, such as the eye movements of an identified human in designated area 24. Tracking mechanism 21A, according to some embodiments of the present invention, may be adapted to provide an identifier and location coordinates on the X and Y axes of the tracked facial organ. Video processing unit 20A may perform a second derivative function by time on the location information of the selected portion of the face, as extracted by tracking mechanism 21A. The second derivative in time of the motion results in a value that may be considered indicative of the mood and mental condition of the human and, as a result, indicative of the threat level of the identified human. This calculated value is referred to hereinafter as the behavior value.
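The per-face tag and the "double derivative" described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class name `FaceTrack`, the fixed frame interval `dt` and the finite-difference formula are assumptions added for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class FaceTrack:
    """Hypothetical per-face tag: a face number plus a per-frame history
    of the three tracked parameters (X, Y, eye distance)."""
    face_id: int
    history: list = field(default_factory=list)  # [(x, y, eye_dist), ...]

    def update(self, x, y, eye_dist):
        # One entry is appended per frame, as in the tracking mechanism above.
        self.history.append((x, y, eye_dist))

    def second_derivative(self, dt=1.0):
        """Finite-difference approximation of the second time derivative
        of the (X, Y) location over the last three frames."""
        if len(self.history) < 3:
            return None
        (x0, y0, _), (x1, y1, _), (x2, y2, _) = self.history[-3:]
        ax = (x2 - 2 * x1 + x0) / dt ** 2
        ay = (y2 - 2 * y1 + y0) / dt ** 2
        return (ax, ay)
```

For a face moving along x = t² the sketch recovers a constant acceleration of 2 pixels per frame squared, the kind of scalar the description turns into a behavior value.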
In other or additional embodiments of the present invention, tracking mechanism 21A may be adapted to track other movements or changes in the tracked face of a human, such as head movements, facial gesture changes and tics of the humans in designated area 24. In yet other embodiments, other visual information that may be indicative of the human's level of threat may be extracted.
[0012] It has been found that the behavior value is individual and varies from one person to another. The behavior value of human beings may change as a result of changes in mood and mental condition; however, the amplitude of the change, its direction and its value are individual and may be used to identify the mental condition of an identified human and determine whether he or she is under threat or abnormal distress. The behavior value is a vector of parameters comprising voice-related parameters and head movement speed. An individual behavior value is pre-calculated and stored in a database, such as database 22; however, the behavior value may be recalculated from time to time, upon every entry or every predetermined number of entries.
[0013] The behavior value calculated by video processing unit 20A may be combined with further behavior parameters extracted by audio processing unit 20B, as will be further described below. The behavior value may be saved and made available for various purposes in the system.
[0014] When one or more humans enter designated area 24, video processing unit 20A may process the images received from imager 14 in order to extract data that may be used to identify the human or humans in the designated area, for instance, by biometrically identifying the humans in designated area 24 by means of face recognition. An identified human may further be checked to verify whether he or she is authorized to enter a restricted area or perform certain actions. When there is at least one identified and authorized human in designated area 24, processing unit 20 may initiate a questioning process by initiating questioning generator 20C. Questioning generator 20C may be configured to randomly select one or more questions from a pre-prepared database of questions and present a question or a series of questions to the authorized human in designated area 24. The question or questions may be presented, for example, using speakers 18, or using any other known display or presentation means positioned and operated so that they may be seen and/or heard inside designated area 24. The question or questions may be personalized to meet the specific features and behavior profile of an identified human, in order to enable better analysis of the behavior of that human.
[0015] The identified and authorized human may reply to the question or questions presented to him or her. Voice sensor 16 may receive a vocal response to the question(s) from the authorized human and transfer it to audio processing unit 20B in processing unit 20. Audio processing unit 20B may be configured to analyze the received voice and to match it to the human identified by video analysis, or to indicate that there is no such match. Furthermore, audio processing unit 20B may be adapted for voice threat detection. Audio processing unit 20B may analyze the received voice, extract from it a voice print and signature, analyze changes in the amplitude, intonation and delay between phonemes and words of the voice print or signature of the received voice, identify discontinuities in speech, stuttering and the like, and calculate a voice threat value that may be combined with the behavior value calculated by video processing unit 20A. Audio processing unit 20B may be further adapted to recognize the use of words typically used during distress. Processing unit 20 may create a threat profile by fusion of the detected abnormal behaviors. The created threat profile may be used as a warning signal in deciding whether further investigation of the behavior of the identified and authorized human in designated area 24 is required. This decision may be made by fusing the information collected and analyzed by audio processing unit 20B with the information collected from the video analytics, such as the behavior value, and comparing the combined data to a threshold calculated based on tests on a large number of humans and on information obtained from the identified and authorized human in controlled conditions considered to be threat-free.
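One plausible way to turn the voice features listed above into a single voice threat value is to score deviations from the speaker's stored threat-free baseline. The sketch below assumes the feature extraction (amplitude, intonation variance, inter-word delay, and so on) has already happened upstream; the function name and the relative-deviation scoring are illustrative assumptions, not the claimed method.

```python
def voice_threat_value(sample, baseline):
    """Sum of relative deviations of extracted voice features from the
    speaker's stored threat-free baseline. Both arguments are dicts
    mapping feature name -> value (hypothetical feature names)."""
    score = 0.0
    for feature, base in baseline.items():
        if base:  # guard against a zero baseline value
            score += abs(sample.get(feature, base) - base) / abs(base)
    return score
```

A sample whose amplitude is 50% above baseline while its inter-word delay is unchanged would score 0.5 under this scheme; missing features default to the baseline and contribute nothing.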
[0016] After extracting the behavior parameters, a threat profile may be constructed from the combination of the voice threat value, the behavior value and key word spotting. The key word spotting may be performed based on an automatic speech recognition system adapted to recognize the use of words typically used during distress. The fusion of the voice threat value, the behavior value and the key word spotting may be performed, for example, by using a mathematical adding function of the values received from each behavior parameter.
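The "mathematical adding function" fusion can be written in a few lines. The optional weights are an assumption added here; the description only states that the values are added.

```python
def fuse_threat_profile(behavior_value, voice_threat_value, keyword_value,
                        weights=(1.0, 1.0, 1.0)):
    """Fuse the three component values into a single threat profile score
    by (optionally weighted) addition, as described for the fusion step."""
    w_b, w_v, w_k = weights
    return w_b * behavior_value + w_v * voice_threat_value + w_k * keyword_value
```

With unit weights this reduces to a plain sum, which is the simplest reading of the adding function named in the text.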
[0017] Processing unit 20 may compare the threat profile with data associated with the authorized human, which has been pre-obtained in conditions considered to be threat-free and pre-stored in storage means, such as database 22. According to some embodiments of the present invention, the data stored in database 22 may include a pre-defined variation threshold. When the variations between the threat profile and the stored data exceed the variation threshold stored in database 22 and associated with the authorized human in designated area 24, processing unit 20 may automatically determine that the identified human is threatened.
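The comparison step above amounts to a per-person thresholded deviation test. A minimal sketch, treating the profiles as scalars for simplicity (the actual profile is described as a vector, so a real implementation would use a vector distance):

```python
def is_under_threat(threat_profile, stored_profile, variation_threshold):
    """Flag a threat when the variation between the live threat profile
    and the pre-stored threat-free profile (e.g. from database 22)
    exceeds the per-person variation threshold. Names are illustrative."""
    variation = abs(threat_profile - stored_profile)
    return variation > variation_threshold
```

Keeping the threshold per person, as the description requires, is what lets the system tolerate the individual variation in behavior values noted in paragraph [0012].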
[0018] Reference is now made to Fig. 2 which is a flowchart illustrating one method for identifying a threatened human according to an embodiment of the present invention. The method illustrated in Fig. 2 may comprise the following steps:
[0019] Identifying the presence of at least one human in designated area 24 (block 100), by video analytics and video motion detection. The detection of the presence of at least one human in designated area 24 may be the trigger to start the process, yet other triggers, events or means may be used to start the process.
[0020] Identifying whether there is at least one authorized human in designated area 24 (block 110), by video processing unit 20A, which may perform a face recognition process on visual information received from imagers 14. When there is at least one authorized human in designated area 24, processing unit 20 may initiate a questioning process. The questioning process may randomly generate questions to be presented to and replied to by the identified and authorized human (block 130). Optionally, the authorized human may be required to respond to the random questioning within a predefined period of time. The response, vocal and/or visual, of the responding human may then be analyzed to verify the identity of the authorized human. The images received from imagers 14 and the voice received from voice/visual sensors 16 may undergo voice and video analysis to extract behavior parameters, such as facial gestures, facial and eye movements per minute, twitches and tics, voice vibrations, stuttering, voice amplitude and intonation, and any other behavioral parameters that may be indicative of the human's level of threat (block 140). After extracting the behavior parameters, a threat profile may be constructed from the combination of the calculated behavior parameters as described above.
[0021] When a threat profile is constructed, it may be compared with previously stored information regarding these parameters, obtained during conditions considered to be threat-free, which creates, when combined, a typical behavior profile (block 150). Based on the comparison it may be determined whether the authorized human is under threat or in abnormal distress (block 160). In order to avoid false threat determinations, an indication that the authorized person is under threat may be issued only when the variations between stored data and analyzed data exceed a certain threshold determined based on tests on a large number of humans.
[0022] According to some embodiments of the present invention, a correlation test may be performed, usually after obtaining a threat profile, in order to correlate the threat profile with the reasons that have created the threat conditions. The correlation test may be performed by testing the behavior parameters immediately after presenting each question to the identified and authorized human. It may be appreciated that other or further tests may be performed in order to determine the reasons for the detected stress.
[0023] When a threat is detected, actions of the authorized human that require authorization may be denied (block 180). When no threat is detected, actions requiring authorization may be allowed (block 170).
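The flow of Fig. 2 (blocks 100 through 180) can be condensed into one decision function. This sketch assumes the three component values have already been extracted by the analysis steps described above, and treats the profiles as scalars; the function name and argument names are illustrative only.

```python
def entry_decision(authorized_present, behavior_value, voice_value,
                   vocabulary_value, baseline, threshold):
    """End-to-end sketch of the Fig. 2 flow: require an authorized human,
    fuse the component values into a threat profile, compare against the
    stored threat-free baseline, and allow or deny accordingly."""
    if not authorized_present:
        return "deny"  # no authorized human detected in the designated area
    # Fusion of the extracted values into a threat profile (block 140)
    profile = behavior_value + voice_value + vocabulary_value
    # Comparison with the stored threat-free profile (blocks 150-160)
    if abs(profile - baseline) > threshold:
        return "deny"  # threat detected (block 180)
    return "allow"     # no threat detected (block 170)
```

The two denial branches mirror claims 15 and 17: a detected threat and the absence of an authorized human both block actions requiring authorization.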
[0024] While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
Claims (17)
1. A system for identifying that a human is under threat comprising: an initialization means; at least one imager and at least one voice sensor; a processing unit; and a storage device; wherein said processing unit comprises a video processing unit, said video processing unit comprises a tracking mechanism adapted to track movements of a certain portion of a face to identify one or more humans in a designated area, an audio processing unit, and a questioning generator, wherein said audio processing unit is configured to analyze voice received from said at least one voice sensor and to match it to the human identified by video processing unit, or to indicate that there is no such match.
2. The system of claim 1 wherein said tracking mechanism is adapted to track movements of a certain portion of a face to provide the location coordinates on the X and Y axis of said certain portion of a face.
3. The system of claim 2 wherein said tracking mechanism is adapted to measure the distance between the eyes of a human in said designated area to evaluate the distance and the speed of said human relative to said imager.
4. The system of claim 3 wherein said video processing unit is configured to perform a second derivative function by time on the location information of said portion of a face, as extracted by said tracking mechanism, to receive a behavior value indicative of the threat level of said one or more identified humans in said designated area.
5. The system of any one of claims 1-4 further comprising a presentation means to present questions, generated by said questioning generator.
6. The system of claim 5 wherein said audio processing unit is further adapted to extract from said voice received from said voice sensor, information indicative of the threat level of said human in said designated area, and calculate a voice threat value.
7. The system of claim 6 wherein said information indicative of the threat level of said human comprises at least a portion from a list comprising: a voice print and signature, changes in amplitude, intonation and in delay between phonemes and words of the voice print or signature of said received voice, discontinuities in speech, and stuttering.
8. The system of claim 7 wherein said audio processing unit is further adapted to automatically identify the use of words associated with threat and abnormal distress, and to calculate, based on said use of said words, a vocabulary value indicative of the threat level of said one or more humans in said designated area.
9. The system of claim 8 wherein said processing unit is configured to create a threat profile from the fusion of said behavior value, said voice threat value and of said vocabulary value.
10. The system of claim 5 wherein said voice sensor is a microphone.
11. The system of claim 5 wherein said presentation means is a monitor.
12. The system of claim 5 wherein said presentation means are speakers.
13. The system of claim 5 wherein said initialization means is a motion detector.
14. The system of claim 5 wherein said initialization means is a volume detector.
15. A method for identifying that a human is under threat comprising: identifying the presence of at least one authorized human in a designated area, generating a questioning process, conducting video analysis of data received from tracking mechanism to receive a behavior value, conducting voice analysis of data received from voice sensors in response to questions presented to said at least one authorized human in said designated area to receive a voice threat value and a vocabulary value, creating a threat profile based on the fusion of said behavior value, said voice threat value and said vocabulary value, comparing threat profile with data stored in database, and if detecting a threat denying action requiring authorization.
16. The method of claim 15 further comprising video analysis to determine whether an authorized human is present in the designated area.
17. The method of claim 16 wherein an action requiring authorization is denied when no authorized human is detected in said designated area.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IL200219A IL200219A (en) | 2009-08-03 | 2009-08-03 | System and method for identifying that a human is under threat |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IL200219A IL200219A (en) | 2009-08-03 | 2009-08-03 | System and method for identifying that a human is under threat |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| IL200219A0 IL200219A0 (en) | 2011-08-01 |
| IL200219A true IL200219A (en) | 2014-09-30 |
Family
ID=44671882
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| IL200219A IL200219A (en) | 2009-08-03 | 2009-08-03 | System and method for identifying that a human is under threat |
Country Status (1)
| Country | Link |
|---|---|
| IL (1) | IL200219A (en) |
- 2009-08-03 IL IL200219A patent/IL200219A/en not_active IP Right Cessation
Also Published As
| Publication number | Publication date |
|---|---|
| IL200219A0 (en) | 2011-08-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7825950B2 (en) | Video monitoring system with object masking | |
| EP2620896B1 (en) | System And Method For Face Capture And Matching | |
| US10629036B2 (en) | Surveillance device | |
| CN109726663A (en) | Online testing monitoring method, device, computer equipment and storage medium | |
| JP4945297B2 (en) | Terminal monitoring device | |
| KR20180050968A (en) | on-line test management method | |
| US20140071293A1 (en) | Method and device for authintication of live human faces using infra red images | |
| US20180232569A1 (en) | System and method for in motion identification | |
| US20180158269A1 (en) | System and method for identifying fraud attempt of an entrance control system | |
| US20210390215A1 (en) | Method for automatically protecting an object, a person or an item of information or visual work from a risk of unwanted viewing | |
| US11727520B2 (en) | Frictionless security monitoring and management | |
| JP6679291B2 (en) | Applicant authentication device, authentication method, and security authentication system using the method | |
| CN111783714B (en) | Method, device, equipment and storage medium for face recognition under duress | |
| CN114104878A (en) | Elevator control method, elevator control device, computer equipment and storage medium | |
| US20240087328A1 (en) | Monitoring apparatus, monitoring system, monitoring method, and non-transitory computer-readable medium storing program | |
| JP2020187518A (en) | Information output device, method, and program | |
| CN109271771A (en) | Account information method for retrieving, device, computer equipment | |
| US20060104444A1 (en) | System and practice for surveillance privacy-protection certification and registration | |
| US8742887B2 (en) | Biometric visitor check system | |
| IL200219A (en) | System and method for identifying that a human is under threat | |
| JP2008009689A (en) | Face registration device, face authentication device, and face registration method | |
| KR101520446B1 (en) | Monitoring system for prevention beating and cruel act | |
| KR20190072323A (en) | Image Monitoring System and Method for Monitoring Image | |
| EP2963583A1 (en) | Method, apparatus and computer program for facial recognition-based identity verification | |
| Dirgantara et al. | Design of face recognition security system on public spaces |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FF | Patent granted | ||
| KB | Patent renewed | ||
| MM9K | Patent not in force due to non-payment of renewal fees |