CN114067511A - Attention reminding system, attention reminding method, and storage medium - Google Patents

Attention reminding system, attention reminding method, and storage medium Download PDF

Info

Publication number
CN114067511A
CN114067511A CN202110210078.6A CN202110210078A CN114067511A CN 114067511 A CN114067511 A CN 114067511A CN 202110210078 A CN202110210078 A CN 202110210078A CN 114067511 A CN114067511 A CN 114067511A
Authority
CN
China
Prior art keywords
information
attention
unit
degree
sensor data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110210078.6A
Other languages
Chinese (zh)
Inventor
柏本雄士朗
山地雄土
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Publication of CN114067511A publication Critical patent/CN114067511A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50Maintenance of biometric data or enrolment thereof
    • G06V40/58Solutions for unknown imposter distribution
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19697Arrangements wherein non-video detectors generate an alarm themselves
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18Status alarms
    • G08B21/22Status alarms responsive to presence or absence of persons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Alarm Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

Provided are an attention calling system, an attention calling method, and a storage medium, which can call attention before entering a dangerous area while suppressing unnecessary alarm. The attention calling system of the embodiment includes an acquisition unit, an analysis unit, and a calculation unit. The acquisition unit acquires status information of a subject from the 1 st sensor data including information of the subject. The analysis unit analyzes current state information of an object included in the area and behavior prediction information of the object from 2 nd sensor data including information of the area. The calculation unit calculates the degree of attention based on the status information of the object, the current state information of the object, and the behavior prediction information of the object.

Description

Attention reminding system, attention reminding method, and storage medium
Technical Field
Embodiments of the present invention relate to an attention calling system, an attention calling method, and a storage medium.
Background
Conventionally, a technique for detecting a person intruding into the vicinity of a machine that is operating has been known. For example, the following techniques have been known in the past: whether a person invades a monitoring area is judged based on an image shot by a monitoring camera in a factory, and notification is performed according to the working condition of equipment in the monitoring area.
Disclosure of Invention
However, in the conventional technology, it is difficult to alert the user before the user invades the dangerous area while suppressing unnecessary alarms.
The attention calling system of the embodiment includes an acquisition unit, an analysis unit, and a calculation unit. The acquisition unit acquires status information of a subject from the 1 st sensor data including information of the subject. The analysis unit analyzes current state information of an object included in the area and behavior prediction information of the object from 2 nd sensor data including information of the area. The calculation unit calculates the degree of attention based on the status information of the object, the current state information of the object, and the behavior prediction information of the object.
According to the above-described attention calling system, it is possible to call attention before entering a dangerous area while suppressing unnecessary alarms.
Drawings
Fig. 1 is a diagram showing an example of a functional configuration of the attention calling system according to embodiment 1.
Fig. 2 is a flowchart showing an example of the operation of the acquisition unit according to embodiment 1.
Fig. 3 is a flowchart showing an example of the operation of the analysis unit according to embodiment 1.
Fig. 4 is a flowchart showing an example of the operation of the calculation unit according to embodiment 1.
Fig. 5 is a diagram showing an example of a functional configuration of the attention calling system according to embodiment 2.
Fig. 6 is a diagram showing an example of the parameter setting UI according to embodiment 2.
Fig. 7 is a diagram showing an example of display information according to embodiment 2.
Fig. 8 is a diagram showing an example of a functional configuration of the attention calling system according to embodiment 3.
Fig. 9 is a diagram showing an example of a functional configuration of the attention calling system according to embodiment 4.
Fig. 10 is a diagram showing an example of the hardware configuration of the attention calling system according to embodiments 1 to 4.
Description of the reference symbols
1 sensor
2 input part
3 acquisition part
4 analysis part
5 calculating part
6 alarm part
7 recording part
8 control part
9 setting part
10 display part
11 recognition part
12 determination unit
201 control device
202 main storage device
203 auxiliary storage device
204 display device
205 input device
206 communication device
210 bus
Detailed Description
Embodiments of the attention calling system, the attention calling method, and the program will be described in detail below with reference to the drawings.
(embodiment 1)
The attention calling system according to embodiment 1 is a system for calling attention when another person comes close to or contacts a machine in operation, a person in operation who needs to pay attention, or the like in a factory, a construction site, or the like, for example.
First, an example of the functional configuration of the attention calling system according to embodiment 1 will be described.
[ example of functional configuration ]
Fig. 1 is a diagram showing an example of a functional configuration of an attention calling system 100 according to embodiment 1. The attention calling system 100 according to embodiment 1 includes a sensor 1, an input unit 2, an acquisition unit 3, an analysis unit 4, a calculation unit 5, an alarm unit 6, a recording unit 7, and a control unit 8.
The sensor 1 detects data of the object a and the area B. The object a is, for example, a machine in a factory or a heavy equipment in a construction site. For example, the object a may be a person or a robot that is performing dangerous work.
The region B is a region in which the degree of attention is required to change depending on the operation (working) state of the object a. The region B is, for example, a region within a predetermined range from the object a. The region B may be a region including the object a, or may be a region not including the object a.
The sensor 1 is a camera, a vibration sensor, or the like that detects the status of the object a and the area B. The condition of the object a is detected from, for example, image data, vibration sensor data, and the like. The condition of the region B is obtained from, for example, image data, depth data, and the like.
In addition, a plurality of the objects a and the areas B may exist. The sensor 1 may be configured by a plurality of sensors, and different types of sensors may be used for the object a and the area B to detect data.
The input unit 2 inputs the data on the object a obtained by the sensor 1 to the acquisition unit 3, and inputs the data on the area B to the analysis unit 4.
The acquisition unit 3 acquires the status information of the object a from the data relating to the object a, and inputs the status information of the object a to the calculation unit 106. The state information includes information indicating the operation state of the machine when the object a is a machine, for example. For example, when the object a is a person, a robot, or the like, information indicating the operational status of the person, the robot, or the like is included.
The analysis unit 4 detects a specific object in the area B from the data on the area B, analyzes the state of the detected object, predicts the behavior, and the like, and inputs analysis information including state information and behavior prediction information of the object to the calculation unit 5. The object to be analyzed is, for example, a person, a vehicle, an animal, a machine having a moving function, or the like.
Image processing may be used as input data to the acquisition unit 3 and the analysis unit 4 to acquire the status information of the object a, detect an object in the area B, and predict an action. By using image data for the input data, the input data can be acquired by merely providing a camera without requiring a special center (for example, a vibration sensor, an acceleration sensor, or the like), and therefore, the cost for introducing the attention calling system 100 can be suppressed. Further, since data is easily and intuitively understood by a human, it is possible to obtain an effect that analysis of analysis results is easy.
When image data is used as input data, for example, the analysis unit 4 receives a plurality of image data in time series, tracks an object included in the plurality of 2 nd image data, calculates the moving direction and speed of the object, and analyzes the behavior prediction information of the object based on the moving direction and speed of the object. When image data is used, it is not necessary to track an object, and the analysis unit 4 may analyze the behavior prediction information of the object based on the posture information and/or movement information of the object.
The calculation unit 5 calculates the degree of attention based on the situation information (for example, the operation situation of the object a) acquired by the acquisition unit 3 and the analysis information (for example, current state information and behavior prediction information of the object included in the area B) acquired by the analysis unit 4. For example, when it is determined that the object a is operating or operating based on the status information of the object a, the calculation unit 5 calculates the degree of attention to be higher as the possibility that the object is close to or in contact with the object a is higher based on the current state information of the object and the action prediction information of the object. The calculation unit 5 inputs the calculated degree of attention to at least one of the alarm unit 6, the recording unit 7, and the control unit 8.
When the degree of attention is higher than the threshold value, the alarm unit 6 gives an alarm by any method and notifies the occurrence of a situation requiring attention to the person, manager, or the like around the object a or the area B. For example, the alarm unit 6 may determine the degree of attention based on a plurality of threshold values, and control the alarm method so as to be changed according to the degree of attention.
In addition, the alarm method may be arbitrary. For example, the alarm unit 6 emits an alarm sound so that human beings around the object a can hear it. The alarm unit 6 notifies the management responsible party of the occurrence of a situation requiring attention by mail (mail) or the like. By providing the warning unit 6 in the attention notifying system 100, it is possible to notify a person or a supervisor on the scene of a state requiring attention, for example, and to prevent occurrence of an accident or the like in advance.
When the degree of attention is higher than the threshold value, the recording unit 7 records at least one of sensor data including information on the object a, sensor data including information on the area B, status information on the object a, current state information on an object included in the area B, behavior prediction information on the object, and the degree of attention calculated by the calculation unit 5. The information stored in the recording unit 7 may include not only information at a time point at which the degree of attention higher than the threshold is calculated, but also information before and after the time point at which the degree of attention higher than the threshold is calculated, and information for a fixed period. By providing the attention notifying system 100 with the recording unit 7, for example, the manager of the object a can review information of a scene in which a state requiring attention occurs, and can improve the crisis management.
The control unit 8 is configured to transition to a safe state by a method of transmitting a control signal or the like for stopping the object a when the degree of attention calculated during the operation of the object a is higher than the threshold value. When it is calculated that the degree of attention is higher than the threshold value during the stop of the object a, the control unit 8 suppresses the operation of the object a by a method such as suppressing transmission of a control signal for operating the object a. By providing the attention notifying system 100 with the control unit 8, the object a can be brought into a safe state in a state requiring attention, and occurrence of an accident or the like can be prevented in advance.
[ operation example of the acquisition unit ]
Fig. 2 is a flowchart showing an example of the operation of the acquisition unit 3 according to embodiment 1. First, the acquisition unit 3 receives sensor data including information of the object a as input data from the input unit 2 (step S1). The acquisition unit 3 may receive input data directly from a signal from the object a.
Next, the acquiring unit 3 analyzes the information of the object a received through the processing of step S1, and acquires the status information of the object a (step S2). For example, when the information of the object a is image data, the acquiring unit 3 analyzes motion information from an image of a previous frame using an optical flow. The acquisition unit 3 converts the state of the operation (or motion) of the object a into numerical values, for example, based on the direction and intensity of the optical flow relating to the object a. For example, the acquiring unit 3 acquires the status information indicating that the object a is operating (or in operation) when a value larger than the threshold value is detected in the predetermined direction, and acquires the status information indicating that the object a is stopped (or in rest) when a value not larger than the threshold value is detected in the predetermined direction.
Next, the acquiring unit 3 outputs the situation information acquired by the processing of step S2 (step S3). The status information includes, for example, a numerical value indicating the presence or absence of the operation (or action) of the object a and the status of the operation (or action) of the object a. When a plurality of objects a exist, the acquisition unit 3 outputs, as the situation information, a logical or, a logical and, or a weighted sum of numerical values of the presence or absence of the operation (or the action).
In addition, when there are a plurality of objects a, the acquisition unit 3 may perform the processing of fig. 2 using parameters different for each object a. In the attention calling, when there are a plurality of objects a whose work (or operation) status needs to be grasped, different parameters are used, whereby the process of acquiring the work (or operation) status can be optimized.
[ example of operation of analysis section ]
Fig. 3 is a flowchart showing an example of the operation of the analysis unit 4 according to embodiment 1. First, the analysis unit 4 receives data on the area B from the input unit 2 (step S11).
Next, the analysis unit 4 detects an object in the region B, and outputs the position of the object and a detection score indicating the probability of detection (step S12). For example, when the data relating to the area B is image data, the process of step S12 is performed by using a method such as SSD (see non-patent document 1) for detecting an object.
Next, the analysis unit 4 correlates the object detected by the processing of step S12 with the object detected in the past using the position information and feature information of the object, thereby performing tracking of the object (step S13). For example, the analysis unit 4 correlates objects whose positions in the previous frame are close to each other (for example, objects whose distances are smaller than a threshold value) and whose feature amounts extracted by ResNet (see non-patent document 2) or the like are similar (for example, objects whose difference in feature amounts is smaller than a threshold value), thereby tracking the objects. By the processing in step S13, the movement trajectory of each object or each person up to the present, the movement speed and the movement direction up to the present time point can be calculated.
Next, the analysis unit 4 analyzes the current state for each object detected by the processing of step S13, predicts the subsequent action from the current and past states, and acquires analysis information including current state information and action prediction information (step S14). The state of the object to be analyzed is, for example, the position, direction, moving speed, posture, and the like of the part (parts) of the object. In the prediction of the action, the predicted destination of movement, the estimated probability score, the predicted action, the estimated probability score for executing the action, and the like are calculated by linear prediction, a kalman filter, and the like. Specifically, when the behavior prediction of the person is performed, the analysis unit 4 performs the behavior prediction including the motion of the person based on the current position information of the person, the information of the object held by the person, the current movement trajectory and the posture information obtained from the result of the tracking of the person, and the like. For example, when the data on the region B is image data, the posture information includes the body and head orientation of the person and the positions of the arms and legs obtained by a technique such as openpos (see non-patent document 3). The analysis unit 4 can obtain the current state information of the person and the predicted behavior information of the person with higher accuracy by analyzing the states of the person in a plurality of frames continuously by tracking the person.
Next, the analysis unit 4 outputs the analysis information obtained by the processing of step S14 (step S15). Further, in the case where there are a plurality of areas B, the analysis information in all of those is output.
In addition, when there are a plurality of regions B, the analysis unit 4 may perform the processing of fig. 3 using different parameters for each region B. In the case where there are a plurality of regions B in which an object such as a person can be detected, the analysis in the region B can be optimized by using different parameters.
[ operation example of calculating section ]
Fig. 4 is a flowchart showing an operation example of the calculating unit 5 according to embodiment 1. First, the calculating unit 5 receives the situation information from the acquiring unit 3 and receives the analysis information from the analyzing unit 4 (step S21).
Next, the calculation unit 5 calculates the degree of attention according to an index based on definition information defining a situation requiring attention (step S22). For example, when the object a is a machine and "approach of a person to the machine during work" is defined as requiring attention, the indicator for calculating the degree of attention may be considered to include "a situation in which a person approaching the machine at a constant or higher speed" exists in the area B around the machine during work "," a situation in which the distance between the person and the machine is closer than a constant distance ", and" a situation in which the line of sight and the body direction of the person are oriented in the direction of the machine and approach is predicted "or the like. Therefore, when the attention calling system 100 is operated, one or more indexes that match the scene among the plurality of indexes are set. The calculation unit 5 calculates the degree of attention according to an equation based on the set index.
While the attention calling system 100 is being operated, the analysis unit 4 analyzes the position, the moving direction, the speed, the line of sight direction, the body orientation, and the like of the person for each frame. The calculation unit 5 calculates the degree of attention for each person based on indices such as "the moving direction of the person is close to the direction of the machine seen from the person and the higher the speed of the person is, the higher the value is calculated", "the closer the distance between the person and the machine is, the higher the value is calculated", and "the closer the direction of the line of sight and the direction of the body are to the direction of the machine seen from the person, the higher the value is calculated".
The calculation unit 5 calculates the degree of attention of each person by using a weighted sum of the degrees of attention of each index calculated when two OR more indices are used, OR by using a conditional branch (determination by AND OR) of the result of threshold determination of each index. When it is determined that the object a is operating (for example, when the object a is an apparatus or the like), based on the situation information acquired from the acquisition unit 3, the calculation unit 5 calculates the maximum value or the total value of the attention degrees of the respective persons as the attention degree of the entire attention promoting system 100.
Next, the calculating unit 5 outputs the degree of attention acquired in the process of step S22 (step S23).
As described above, in the attention calling system 100 according to embodiment 1, the acquisition unit 3 acquires the status information of the object a from the sensor data (the 1 st sensor data) including the information of the object a. The analysis unit 4 analyzes the current state information of the object and the behavior prediction information of the object included in the area B from the sensor data (2 nd sensor data) including the information of the area B. The calculation unit 5 calculates the degree of attention based on the status information of the object a, the current state information of the object, and the behavior prediction information of the object.
Thus, according to the attention calling system 100 of embodiment 1, it is possible to call attention before entering a dangerous area while suppressing unnecessary alarms. Specifically, by acquiring the status information of the object a by the acquisition unit 3, the attention can be reminded only in a status where attention is actually required. The analysis unit 4 predicts the movement of the object included in the area B, and the calculation unit 5 uses the prediction result for the calculation of the degree of attention. This makes it possible to appropriately cope with a situation requiring attention in accordance with the degree of attention while suppressing unnecessary attention calling. For example, the attention calling system 100 according to embodiment 1 can prevent an accident from occurring by alarming by the alarm unit 6 before a person invades the vicinity of a machine in operation.
On the other hand, in the conventional technique, when the area in which intrusion detection is performed is enlarged in order to prevent a risk from being detected in advance, another problem arises in that an unnecessary alarm may be caused. The attention calling system 100 according to embodiment 1 can detect only a situation that requires real attention and can prevent the situation from happening by not only detecting intrusion but also predicting the action of a person or the like.
In addition, the conventional techniques also have the following problems: whether or not to alarm can be determined based on the operating condition only in intrusion detection, and flexible response cannot be made after the overall condition is determined. On the other hand, the attention calling system 100 according to embodiment 1 is provided with the calculating unit 5 described above, and can calculate the degree of attention corresponding to more situations, and can also call the attention of the user by a method according to the calculated degree of attention.
(embodiment 2)
Next, embodiment 2 will be explained. In the description of embodiment 2, the same description as embodiment 1 will be omitted, and the description will be given of the differences from embodiment 1.
[ example of functional configuration ]
Fig. 5 is a diagram showing an example of a functional configuration of the attention calling system 100-2 according to embodiment 2. The attention calling system 100-2 according to embodiment 2 includes a sensor 1, an input unit 2, an acquisition unit 3, an analysis unit 4, a calculation unit 5, an alarm unit 6, a recording unit 7, a control unit 8, a setting unit 9, and a display unit 10.
In embodiment 2, a setting unit 9 and a display unit 10 are added to the configuration of embodiment 1.
The setting unit 9 sets parameters for controlling the operations of the respective units (for example, the acquisition unit 3, the analysis unit 4, and the calculation unit 5). The setting unit 9 may receive parameter settings of each unit by setting the UI, or may set parameters automatically determined by machine learning.
[ example of setting UI ]
Fig. 6 is a diagram showing an example of the parameter setting UI according to embodiment 2. The setting UI includes, for example, a setting area 101, a setting area 102, and a setting area 103, the setting area 101 sets an object a and an area B, the setting area 102 sets parameters (for example, parameters used for optical flow calculation) for controlling the operation of the acquisition unit 3 that acquires status information of the object a, the setting area 103 sets parameters (for example, parameters of a machine learning model used for analysis) for controlling the operation of the analysis unit 4, and the analysis unit 4 analyzes an object in the area B. When the input data input to the input unit 2 is image data, the setting unit 9 displays the image data in the setting area 101, and accepts setting of the object a and the area B on the image data.
The setting unit 9 sets parameters for controlling the operation of the calculation unit 5 (for example, parameters for setting a method of calculating the degree of attention) and thresholds for the degree of attention to be referred to by the alarm unit 6, the recording unit 7, and the control unit 8.
When there are a plurality of objects a and areas B, the setting unit 9 sets parameters for controlling the operation of the acquisition unit 3 for each object a, and sets parameters for controlling the operation of the analysis unit 4 for each area B.
By providing the setting unit 9, the parameter can be optimized after the attention calling system 100-2 is introduced. In addition, it is easy to cope with a situation requiring attention such as a change.
The display unit 10 displays display information including the processing results of each unit (for example, at least one of the status information of the object a acquired by the acquisition unit 3, the current state information of the object and the predicted behavior information of the object included in the area B analyzed by the analysis unit 4, and the degree of attention calculated by the calculation unit 5). For example, the display unit 10 displays the processing results of the respective units in a manner to be superimposed on the periphery of input data (for example, image data) or the input data as necessary.
[ example of display information ]
Fig. 7 is a diagram showing an example of display information according to embodiment 2. When an image is used as input data, the display unit 10 displays the operation (or motion) status of the object a, the position, motion, destination, and the like of an object such as a person detected in the area B, superimposed on the display area 111 representing the image data. The display unit 10 displays information in display areas 112 to 115 around the display area 111. Detailed information of the operation and the action state (for example, the speed of the movement of the object a) is displayed in the display area 112. In the display area 113, human information (for example, the action and state of a human, coordinates indicating the position of a rectangle representing a human, and the like) is displayed. The attention (for example, a numerical value, a character, a grade, and the like) is displayed in the display area 114. A sentence or the like indicating the analyzed situation is displayed in the display area 115.
By providing the display unit 10, it is possible to easily understand the cause and situation of the need for attention, which is a basis of the degree of attention, and to achieve more appropriate measures.
(embodiment 3)
Next, embodiment 3 will be explained. In the description of embodiment 3, the same description as embodiment 1 will be omitted, and the differences from embodiment 1 will be described.
[ example of functional configuration ]
Fig. 8 is a diagram showing an example of a functional configuration of the attention calling system 100-3 according to embodiment 3. The attention calling system 100-3 according to embodiment 3 includes a sensor 1, an input unit 2, an acquisition unit 3, an analysis unit 4, a calculation unit 5, an alarm unit 6, a recording unit 7, a control unit 8, and an identification unit 11.
In embodiment 3, the configuration of embodiment 1 is further added with an identification unit 11.
The recognition unit 11 recognizes a person by comparing the person with known person information. Specifically, when the analysis unit 4 detects a person, the recognition unit 11 uses the face information, the body shape information, the clothing information, the motion information, and the like to identify the person, the attribute, and the like of the detected person. The attribute represents information indicating an unknown person, a person from a specific standpoint (for example, a role), a person belonging to a specific organization, or the like, in a range where the person cannot be completely identified.
The personal information obtained by the recognition unit 11 is input to the calculation unit 5 and used for the calculation of the degree of attention. For example, the calculation unit 5 increases the degree of attention when an unknown person is detected, compared with the degree of attention when a specific person is detected. For example, when the person to approach the machine in operation is a skilled person in the field, work, or the like, the calculation unit 5 lowers the degree of attention as compared with the case where the person is an unskilled person.
Further, the alarm method of the alarm unit 6 may be changed by giving an alarm to the person and the person responsible for the person by mail or the like based on the identified attributes of the person and person.
In addition, when the degree of attention is higher than the threshold value, the recording unit 7 may further store attributes of the person and the person to be notified of attention.
In addition, when the setting unit 9 is provided as in embodiment 2, the following may be provided: parameters for controlling the operation of the recognition unit 11 can be set.
In addition, in the case where the display unit 10 is provided as in embodiment 2, the personal information recognized by the recognition unit 11 may be further included in the display information.
By providing the recognition unit 11, the position, the ability, and the like of the detected person can be specified, and the degree of attention can be flexibly changed according to the position, the ability, and the like. In addition, persons who are the cause of the situation requiring attention can be identified, and improvement in easy crisis management can be expected.
(embodiment 4)
Next, embodiment 4 will be explained. In the description of embodiment 4, the same description as embodiment 3 will be omitted, and the differences from embodiment 3 will be described.
[ example of functional configuration ]
Fig. 9 is a diagram showing an example of a functional configuration of the attention calling system 100-4 according to embodiment 4. The attention calling system 100-4 according to embodiment 4 includes a sensor 1, an input unit 2, an acquisition unit 3, an analysis unit 4, a calculation unit 5, an alarm unit 6, a recording unit 7, a control unit 8, an identification unit 11, and a determination unit 12.
In embodiment 4, a determination unit 12 is further added to the calculation unit 5 of embodiment 3.
The judgment unit 12 judges whether or not a predetermined number of persons are present at a predetermined position in a predetermined state. Specifically, the determination unit 12 determines that the number of surrounding supervisors or operators of the machine or the like in operation is too small, or that a predetermined number of supervisors or operators are not in a correct state at a predetermined position, such as when the machine or the like is looking aside.
The calculation unit 5 calculates a predetermined degree of attention determined based on the processing result of the determination unit 12. For example, in a situation where the number of people in the area B is too small or there is no specific person, the calculation unit 5 calculates the degree of attention determined according to the risk of the situation. For example, in a situation where the person is detected but is looking aside or is not a person having a specific attribute such as a management qualification, the calculation unit 5 calculates the degree of attention determined according to the risk of the situation. For example, in a situation where the number of people in operation is less than a predetermined number of people (for example, two people) when the area B is a place where a predetermined number of people are needed for operation, the calculation unit 5 calculates the degree of attention determined according to the risk of the situation.
By providing the determination unit 23, it is possible to achieve an effect of detecting a situation in which safety cannot be ensured, such as a situation in which a supervisor, an operator, and the like are absent, an improper situation (for example, a situation in which the supervisor, the operator, and the like are looking aside), and a situation in which a predetermined number of workers are not satisfied.
Finally, an example of the hardware configuration of the attention calling system 100(100-2 to 100-4) according to embodiments 1 to 4 will be described.
[ example of hardware Structure ]
Fig. 10 is a diagram showing an example of the hardware configuration of the attention calling system 100(100-2 to 100-4) according to embodiment 1 and embodiment 2. The attention calling system 100 includes a control device 201, a main storage device 202, an auxiliary storage device 203, a display device 204, an input device 205, and a communication device 206. The control device 201, the main storage device 202, the auxiliary storage device 203, the display device 204, the input device 205, and the communication device 206 are connected via a bus 210.
The control device 201 executes a program read from the auxiliary storage device 203 to the main storage device 202. The control device 201 is, for example, one or more processors such as a CPU. The main storage 202 is a Memory such as a ROM (Read Only Memory) or a RAM. The auxiliary storage device 203 is a memory card, an HDD (Hard Disk Drive), or the like.
The display device 204 displays information. The display device 204 is, for example, a liquid crystal display. The input device 205 receives input of information. The input device 205 is, for example, a keyboard, a mouse, and hardware keys. The display device 204 and the input device 205 may be liquid crystal touch panels having both a display function and an input function. The communication device 206 communicates with other devices.
A computer-readable storage medium such as a CD-ROM, a memory card, a CD-R, and a dvd (digital Versatile disc) is provided as a computer program product, in which a program executed by the attention calling system 100 is stored as an installable or executable file.
Further, the following may be configured: the program executed by the attention calling system 100 is stored in a computer connected to a network such as the internet and is provided by being downloaded via the network. Further, the program executed by the attention calling system 100 may be provided via a network such as the internet without downloading.
Further, the program executed by the attention notifying system 100 may be loaded in advance into a ROM or the like and provided.
The program executed by the attention calling system 100 is constituted by modules including functions that can be realized by the program among the functions of the attention calling system 100.
The functions realized by the programs are loaded on the main storage apparatus 202 by the control apparatus 201 reading out the programs from the storage medium such as the auxiliary storage apparatus 203 and executing the programs. That is, the functions realized by the program are generated on the main storage 202.
Further, a part of the functions of the attention calling system 100 may be realized by hardware such as an IC. The IC is, for example, a processor that performs dedicated processing.
In the case where each function is implemented by using a plurality of processors, each processor may implement one of the functions or two or more of the functions.
Further, each function may be realized by a plurality of processors, and information obtained by each processor may be transmitted and received via a network. That is, the attention reminder system 100 may also be implemented as a cloud system on a network.
While several embodiments of the present invention have been described above, these embodiments are presented as examples and are not intended to limit the scope of the invention. These new embodiments can be implemented in various other ways, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope and spirit of the invention, and are included in the invention described in the claims and the equivalent scope thereof.
Technical solution 1
An attention-reminding system is provided with:
an acquisition unit that acquires status information of a subject from 1 st sensor data including information of the subject;
an analysis unit that analyzes current state information of an object included in a region and behavior prediction information of the object from 2 nd sensor data including information of the region; and
and a calculation unit that calculates the degree of attention based on the status information of the object, the current state information of the object, and the behavior prediction information of the object.
Technical solution 2
According to the attention calling system described in claim 1,
the calculation unit calculates the degree of attention to be higher as the possibility that the object is close to or in contact with the object is higher, based on the current state information of the object and the action prediction information of the object, when it is determined that the object is operating or moving, based on the situation information of the object.
Technical solution 3
According to the attention calling system described in claim 1 or 2,
the calculation unit determines whether or not a predetermined number of persons are present at a predetermined position in a predetermined state, and calculates a predetermined degree of attention when the predetermined number of persons are not present at the predetermined position in the predetermined state.
Technical solution 4
The attention calling system according to any one of claims 1 to 3,
the acquisition unit receives 1 st image data as the 1 st sensor data, acquires the status information of the object from the 1 st image data,
the analysis unit receives 2 nd image data as the 2 nd sensor data, and analyzes current state information of the object and behavior prediction information of the object from the 2 nd image data.
Technical solution 5
According to the attention calling system as set forth in claim 4,
the analysis unit receives a plurality of 2 nd image data in time series, tracks the object included in the plurality of 2 nd image data, calculates a moving direction and a moving speed of the object, and analyzes the behavior prediction information of the object based on the moving direction and the moving speed of the object.
Technical scheme 6
The attention reminding system according to any one of claims 1 to 5,
the system further comprises an alarm unit configured to give an alarm when the degree of attention calculated by the calculation unit is higher than a 1 st threshold value.
Technical scheme 7
The attention reminding system according to any one of claims 1 to 6,
the system further includes a recording unit that records at least one of the 1 st sensor data, the 2 nd sensor data, the target situation information, the current state information of the object, the behavior prediction information of the object, and the degree of attention when the degree of attention calculated by the calculation unit is higher than a 2 nd threshold value.
Technical solution 8
The attention reminding system according to any one of claims 1 to 7,
the system further includes a control unit that shifts the object to a safe state when the degree of attention calculated by the calculation unit is higher than a 3 rd threshold value.
Technical solution 9
The attention reminding system according to any one of claims 1 to 8,
the information processing apparatus further includes a display unit that displays display information including at least one of the status information of the object, the current state information of the object, the behavior prediction information of the object, and the degree of attention.
Technical means 10
The attention reminding system according to any one of claims 1 to 9,
the system further includes a setting unit that sets a parameter for controlling the operations of the acquisition unit, the analysis unit, and the calculation unit.
Technical means 11
According to the attention calling system recited in claim 10,
the 1 st sensor data includes information of a plurality of the objects,
the 2 nd sensor data includes information for a plurality of the regions,
the setting unit sets a parameter for controlling the operation of the acquisition unit for each object, and sets a parameter for controlling the operation of the analysis unit for each region.
Technical means 12
The attention reminding system according to any one of claims 1 to 11,
the object is a person, and the object is a character,
further comprising a recognition unit for recognizing the person by comparing the person with known person information,
the calculation unit further calculates the degree of attention based on information specified by the person.
Technical means 13
An attention reminding method, comprising:
a step in which an attention calling system acquires condition information of a subject from 1 st sensor data including information of the subject;
a step in which the attention calling system analyzes current state information of an object included in an area and behavior prediction information of the object from 2 nd sensor data including information of the area; and
and a step in which the attention calling system calculates the degree of attention based on the status information of the object, the current state information of the object, and the behavior prediction information of the object.
Technical means 14
According to the attention calling method of claim 13,
in the calculating, when it is determined that the subject is working or moving based on the condition information of the subject, the attention degree is calculated to be higher as the possibility that the subject is close to or in contact with the subject is higher based on the current state information of the subject and the action prediction information of the subject.
Technical means 15
According to the attention calling method of claim 13 or 14,
the calculating step determines whether or not a predetermined number of persons are present at a predetermined position in a predetermined state, and calculates a predetermined degree of attention when the predetermined number of persons are not present at the predetermined position in the predetermined state.
Technical solution 16
The attention reminding method according to any one of claims 13 to 15,
in the acquiring step, 1 st image data is received as the 1 st sensor data, and the status information of the object is acquired from the 1 st image data,
in the analyzing step, 2 nd image data is received as the 2 nd sensor data, and the current state information of the object and the behavior prediction information of the object are analyzed from the 2 nd image data.
Technical solution 17
In a program for executing a program,
the program causes a computer to function as an acquisition unit, an analysis unit, and a calculation unit,
the acquisition unit acquires status information of a subject from 1 st sensor data including information of the subject,
the analysis unit analyzes current state information of an object included in the area and behavior prediction information of the object from 2 nd sensor data including information of the area,
the calculation unit calculates the degree of attention based on the status information of the object, the current state information of the object, and the behavior prediction information of the object.
Technical means 18
According to the procedure set forth in claim 17,
the calculation unit calculates the degree of attention to be higher as the possibility that the object is close to or in contact with the object is higher, based on the current state information of the object and the action prediction information of the object, when it is determined that the object is operating or moving, based on the situation information of the object.
Technical means 19
According to the procedure described in claim 17 or 18,
the calculation unit determines whether or not a predetermined number of persons are present at a predetermined position in a predetermined state, and calculates a predetermined degree of attention when the predetermined number of persons are not present at the predetermined position in the predetermined state.
Technical solution 20
The program according to any one of claims 17 to 19,
the acquisition unit receives 1 st image data as the 1 st sensor data, acquires the status information of the object from the 1 st image data,
the analysis unit receives 2 nd image data as the 2 nd sensor data, and analyzes current state information of the object and behavior prediction information of the object from the 2 nd image data.

Claims (20)

1. An attention-reminding system is provided with:
an acquisition unit that acquires status information of a subject from 1 st sensor data including information of the subject;
an analysis unit that analyzes current state information of an object included in a region and behavior prediction information of the object from 2 nd sensor data including information of the region; and
and a calculation unit that calculates the degree of attention based on the status information of the object, the current state information of the object, and the behavior prediction information of the object.
2. The attention calling system as set forth in claim 1,
the calculation unit calculates the degree of attention to be higher as the possibility that the object is close to or in contact with the object is higher, based on the current state information of the object and the action prediction information of the object, when it is determined that the object is operating or moving, based on the situation information of the object.
3. The attention calling system according to claim 1 or 2,
the calculation unit determines whether or not a predetermined number of persons are present at a predetermined position in a predetermined state, and calculates a predetermined degree of attention when the predetermined number of persons are not present at the predetermined position in the predetermined state.
4. The attention calling system according to any one of claims 1 to 3,
the acquisition unit receives 1 st image data as the 1 st sensor data, acquires the status information of the object from the 1 st image data,
the analysis unit receives 2 nd image data as the 2 nd sensor data, and analyzes current state information of the object and behavior prediction information of the object from the 2 nd image data.
5. The attention calling system as set forth in claim 4,
the analysis unit receives a plurality of 2 nd image data in time series, tracks the object included in the plurality of 2 nd image data, calculates a moving direction and a moving speed of the object, and analyzes the behavior prediction information of the object based on the moving direction and the moving speed of the object.
6. The attention calling system according to any one of claims 1 to 5,
the system further comprises an alarm unit configured to give an alarm when the degree of attention calculated by the calculation unit is higher than a 1 st threshold value.
7. The attention calling system according to any one of claims 1 to 6,
the system further includes a recording unit that records at least one of the 1 st sensor data, the 2 nd sensor data, the target situation information, the current state information of the object, the behavior prediction information of the object, and the degree of attention when the degree of attention calculated by the calculation unit is higher than a 2 nd threshold value.
8. The attention calling system according to any one of claims 1 to 7,
the system further includes a control unit that shifts the object to a safe state when the degree of attention calculated by the calculation unit is higher than a 3 rd threshold value.
9. The attention calling system according to any one of claims 1 to 8,
the information processing apparatus further includes a display unit that displays display information including at least one of the status information of the object, the current state information of the object, the behavior prediction information of the object, and the degree of attention.
10. The attention calling system according to any one of claims 1 to 9,
the system further includes a setting unit that sets a parameter for controlling the operations of the acquisition unit, the analysis unit, and the calculation unit.
11. The attention calling system as set forth in claim 10,
the 1 st sensor data includes information of a plurality of the objects,
the 2 nd sensor data includes information for a plurality of the regions,
the setting unit sets a parameter for controlling the operation of the acquisition unit for each object, and sets a parameter for controlling the operation of the analysis unit for each region.
12. The attention calling system according to any one of claims 1 to 11,
the object is a person, and the object is a character,
further comprising a recognition unit for recognizing the person by comparing the person with known person information,
the calculation unit further calculates the degree of attention based on information specified by the person.
13. An attention reminding method, comprising:
a step in which an attention calling system acquires condition information of a subject from 1 st sensor data including information of the subject;
a step in which the attention calling system analyzes current state information of an object included in an area and behavior prediction information of the object from 2 nd sensor data including information of the area; and
and a step in which the attention calling system calculates the degree of attention based on the status information of the object, the current state information of the object, and the behavior prediction information of the object.
14. The attention calling method as set forth in claim 13,
in the calculating, when it is determined that the subject is working or moving based on the condition information of the subject, the attention degree is calculated to be higher as the possibility that the subject is close to or in contact with the subject is higher based on the current state information of the subject and the action prediction information of the subject.
15. The attention calling method according to claim 13 or 14,
the calculating step determines whether or not a predetermined number of persons are present at a predetermined position in a predetermined state, and calculates a predetermined degree of attention when the predetermined number of persons are not present at the predetermined position in the predetermined state.
16. The attention calling method according to any one of claims 13 to 15,
in the acquiring step, 1 st image data is received as the 1 st sensor data, and the status information of the object is acquired from the 1 st image data,
in the analyzing step, 2 nd image data is received as the 2 nd sensor data, and the current state information of the object and the behavior prediction information of the object are analyzed from the 2 nd image data.
17. A storage medium storing a program for executing a program,
the program causes a computer to function as an acquisition unit, an analysis unit, and a calculation unit,
the acquisition unit acquires status information of a subject from 1 st sensor data including information of the subject,
the analysis unit analyzes current state information of an object included in the area and behavior prediction information of the object from 2 nd sensor data including information of the area,
the calculation unit calculates the degree of attention based on the status information of the object, the current state information of the object, and the behavior prediction information of the object.
18. The storage medium storing a program according to claim 17,
the calculation unit calculates the degree of attention to be higher as the possibility that the object is close to or in contact with the object is higher, based on the current state information of the object and the action prediction information of the object, when it is determined that the object is operating or moving, based on the situation information of the object.
19. The storage medium storing a program according to claim 17 or 18,
the calculation unit determines whether or not a predetermined number of persons are present at a predetermined position in a predetermined state, and calculates a predetermined degree of attention when the predetermined number of persons are not present at the predetermined position in the predetermined state.
20. The storage medium storing a program according to any one of claims 17 to 19,
the acquisition unit receives 1 st image data as the 1 st sensor data, acquires the status information of the object from the 1 st image data,
the analysis unit receives 2 nd image data as the 2 nd sensor data, and analyzes current state information of the object and behavior prediction information of the object from the 2 nd image data.
CN202110210078.6A 2020-07-31 2021-02-25 Attention reminding system, attention reminding method, and storage medium Pending CN114067511A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020130622A JP2022026925A (en) 2020-07-31 2020-07-31 Alerting system, alerting method, and program
JP2020-130622 2020-07-31

Publications (1)

Publication Number Publication Date
CN114067511A true CN114067511A (en) 2022-02-18

Family

ID=80004483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110210078.6A Pending CN114067511A (en) 2020-07-31 2021-02-25 Attention reminding system, attention reminding method, and storage medium

Country Status (3)

Country Link
US (1) US20220036074A1 (en)
JP (1) JP2022026925A (en)
CN (1) CN114067511A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1538355A (en) * 2003-03-13 2004-10-20 欧姆龙株式会社 System for monitoring danger source
JP2010198519A (en) * 2009-02-27 2010-09-09 Hitachi Constr Mach Co Ltd Periphery monitoring device
CN104982030A (en) * 2013-03-19 2015-10-14 住友重机械工业株式会社 Periphery monitoring device for work machine
US20180357870A1 (en) * 2017-06-07 2018-12-13 Amazon Technologies, Inc. Behavior-aware security systems and associated methods
KR20200022229A (en) * 2018-08-22 2020-03-03 한국기계연구원 Apparatus and method for safety control of excavator

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170278362A1 (en) * 2014-09-19 2017-09-28 Nec Corporation Information processing device, information processing method, and program
JP6672076B2 (en) * 2016-05-27 2020-03-25 株式会社東芝 Information processing device and mobile device
US10007269B1 (en) * 2017-06-23 2018-06-26 Uber Technologies, Inc. Collision-avoidance system for autonomous-capable vehicle
US10974720B2 (en) * 2018-08-13 2021-04-13 Kingman Ag, Llc Vehicle sliding bumper and system for object contact detection and responsive control
JP7306260B2 (en) * 2019-12-26 2023-07-11 コベルコ建機株式会社 Remote control system for construction machinery
US20210279486A1 (en) * 2020-03-09 2021-09-09 Vincent Nguyen Collision avoidance and pedestrian detection systems
US11688184B2 (en) * 2020-06-17 2023-06-27 Toyota Motor Engineering & Manufacturing North America, Inc. Driving automation external communication location change

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1538355A (en) * 2003-03-13 2004-10-20 欧姆龙株式会社 System for monitoring danger source
JP2010198519A (en) * 2009-02-27 2010-09-09 Hitachi Constr Mach Co Ltd Periphery monitoring device
CN104982030A (en) * 2013-03-19 2015-10-14 住友重机械工业株式会社 Periphery monitoring device for work machine
US20180357870A1 (en) * 2017-06-07 2018-12-13 Amazon Technologies, Inc. Behavior-aware security systems and associated methods
KR20200022229A (en) * 2018-08-22 2020-03-03 한국기계연구원 Apparatus and method for safety control of excavator

Also Published As

Publication number Publication date
JP2022026925A (en) 2022-02-10
US20220036074A1 (en) 2022-02-03

Similar Documents

Publication Publication Date Title
US10657386B2 (en) Movement state estimation device, movement state estimation method and program recording medium
KR102572811B1 (en) System for identifying defined objects
CN110544360B (en) Train safe driving monitoring system and method
JP4924607B2 (en) Suspicious behavior detection apparatus and method, program, and recording medium
CN111392619B (en) Tower crane early warning method, device and system and storage medium
US10846537B2 (en) Information processing device, determination device, notification system, information transmission method, and program
US9946921B2 (en) Monitoring device, monitoring method and monitoring program
US11776274B2 (en) Information processing apparatus, control method, and program
NL2020067B1 (en) System for detecting persons in an area of interest
Lashkov et al. Driver dangerous state detection based on OpenCV & dlib libraries using mobile video processing
CN112576310B (en) Tunnel security detection method and system based on robot
CN104050785A (en) Safety alert method based on virtualized boundary and face recognition technology
CN114067511A (en) Attention reminding system, attention reminding method, and storage medium
JP6978986B2 (en) Warning system, warning control device and warning method
KR20170048108A (en) Method and system for recognizing object and environment
CN103974028A (en) Method for detecting fierce behavior of personnel
CN112560658B (en) Early warning method, early warning device, electronic equipment and computer readable storage medium
KR102613957B1 (en) System of peventing suicide using virtulal detection line in image
CN117456471B (en) Perimeter security method, perimeter security system, terminal equipment and storage medium
US11961419B2 (en) Event detection and prediction
CN118053261B (en) Anti-spoofing early warning method, device, equipment and medium for smart campus
JP6954416B2 (en) Information processing equipment, information processing methods, and programs
Das et al. A Comparative Study on Procedural and Predictive Systems on Drowsiness Detection
CN118212594A (en) Workshop supervision decision-making method, equipment and computer readable storage medium
KR20220060037A (en) AI-based smart care system by use of thermal imaging camera and skeleton analysis of smart mirror, and smart mirror device for the same

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination