EP4176379A1 - Electronic system to detect the presence of a person in a limited area - Google Patents

Electronic system to detect the presence of a person in a limited area

Info

Publication number
EP4176379A1
Authority
EP
European Patent Office
Prior art keywords
person
dangerous area
processing unit
top view
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21746145.8A
Other languages
German (de)
French (fr)
Inventor
Giovanni Andrea Farina
Stefano Della Valle
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Itway SpA
Original Assignee
Itway SpA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Itway SpA filed Critical Itway SpA
Publication of EP4176379A1 publication Critical patent/EP4176379A1/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the present invention generally relates to the electronics field.
  • the present invention relates to an electronic system to detect the presence of a person positioned in proximity or within a limited area.
  • Vision systems are known which use cameras to detect the presence of a person in a certain environment.
  • the known vision systems are not capable of detecting the presence of a person in an area of an industrial environment with sufficient reliability and with a sufficiently short reaction time to avoid injuries to people, in the case where the people to be detected are framed from the top and where it is the monitored area that moves, instead of the people.
  • the present invention relates to an electronic system to detect the presence of a person in proximity or within a limited dangerous area as defined in the appended claim 1 and the preferred embodiments thereof described in dependent claims 2 to 11.
  • the electronic system in accordance with the present invention can detect the presence of a person in proximity or within a limited dangerous area of an industrial environment in a reliable manner (i.e., minimizing false alarms) and it allows an alarm to be generated to warn the person of a dangerous condition with a reduced reaction time (typically less than one second, for example about 0.5 seconds), so as to significantly reduce the risk of injury to the person, in the case where the people to be detected are framed from the top and where it is mainly the dangerous area that moves, instead of the people.
  • the Applicant has perceived that the electronic system in accordance with the present invention can reliably and promptly detect the presence of a person in proximity or within the dangerous area by means of the recognition of at least one portion of the head and/or of the body of a person regardless of his posture (i.e., standing, sitting, lying down) and regardless of whether he is wearing personal protective equipment (typically a helmet), unlike the known solutions which are only capable of detecting the presence of a person in particular postures (typically only standing) or only if they are wearing personal protective equipment (helmet).
  • Figure 1 shows a block diagram of an electronic system to detect the presence of a person in proximity or within a limited dangerous area according to an embodiment of the invention
  • Figures 2A-2B schematically show a pair of images acquired by a pair of cameras which frame the area beneath the hook of a bridge crane on which the electronic system of the invention is mounted, in two different examples of system operation.
  • FIG. 1 shows an electronic system 10 to detect the presence of a person in proximity or within a limited dangerous area.
  • the dangerous area is a limited portion of an industrial environment, such as the area beneath the hook of a crane or of a bridge crane.
  • the dangerous area is monitored in real time to check if an operator (assigned to carry out a certain task in the considered industrial environment) is positioned in proximity or within the dangerous area, in order to take appropriate measures such as generating an audible and/or visual alarm indicative of the presence of a dangerous condition or stopping the operation of a particular machine positioned in the considered industrial environment.
  • the electronic system 10 is typically mounted on a mobile structure, so in this case the dangerous area is mobile, while people may also be in a stationary position in the environment considered.
  • the dangerous area depends on the type of application in which the system 10 is used and can have, for example, a circular, rectangular or polygonal shape.
  • the electronic system 10 is used to monitor a dangerous area beneath the hook of a crane installed in a warehouse where there are several steel coils which are particularly heavy, for example having a weight greater than 10 tons.
  • a bridge crane comprises a pair of parallel tracks located at the top above the sides of a building (for example, a warehouse), through which a mobile metal bridge (called a beam) runs, on which a carriage with a winch and a gripping member is mounted, such as a hook for lifting heavy objects.
  • the bridge crane is used, for example, to move semi-finished materials or finished products between one department and the other of a warehouse, or towards the loading or unloading area of the goods.
  • the electronic system 10 is mounted on the carriage of the bridge crane and the dangerous area has the shape of a circle centred on the weight lifting hook, with a variable radius (for example around 3-7 metres) and programmable according to the desired safety requirements, or according to the safety policies defined in a company in relation to the minimum distance required for operators with respect to the load.
  • the shape of the dangerous area depends on the shape factor of the load of the bridge crane or crane.
  • the electronic system 10 comprises a processing device 1 and a pair of cameras 2, 3 electrically connected to the processing device 1.
  • the pair of cameras 2, 3 is positioned so as to frame from the top at least one portion of the dangerous area to be monitored, thus even people who are in proximity or within the dangerous area are framed from the top.
  • the pair of cameras 2, 3 is then configured so that each acquires a flow I1, I2 of real-time images representative of a respective portion of the dangerous area to be monitored, in which the two portions overlap at least in part and together cover the entire dangerous area; in particular, the images acquired by the pair of cameras contain a top view of the people who are in proximity or within the defined dangerous area.
  • the cameras 2, 3 are, for example, Dahua HAC-HDBW2220R-Z cameras, which have a resolution of 2.4 megapixels and an acquisition frequency of 30 images per second.
  • the processing device 1 is made, for example, with an industrial PC of the Neousys Nuvo-5000 series, in particular the 5000E/P.
  • the use of two (or more than two) cameras has the advantage of providing a stereoscopic view of the monitored dangerous area, also making it possible to detect the presence of a person seen from the top positioned in proximity or within the dangerous area, even when the framed person is partially hidden by other objects present in the dangerous area itself.
  • the use of two (or more) cameras improves the visibility in the area beneath the load, reducing the risk of failing to detect the presence of a person in that area.
  • FIG. 2A shows the application in which the dangerous area is the one beneath the hook of a bridge crane; the two cameras 2, 3 are positioned on the carriage of the bridge crane on the two opposite sides, substantially equidistant with respect to the direction defined by the weight lifting hook, and the electronic processing device 1 is mounted on the carriage.
  • the first camera 2 is such to acquire a first image I1.1 (of the first flow of images I1) representative of a top view of one side of the area beneath the bridge crane hook in which a plurality of coils 16 are positioned
  • the second camera 3 is such to acquire a second image I2.1 (of the second flow of images I2) representative of a top view of the other side of the area beneath the hook in which the same plurality of coils 16 and additional coils 18 are positioned.
  • the dangerous area has the shape of a circle centred on the hook and the cameras 2, 3 have the lens oriented so as to frame the area beneath the hook of the bridge crane; in particular, in Figure 2A the first dangerous area 15-1 associated with the first camera 2 and having the shape of a circle is shown on the left (considering the reading orientation), and the second dangerous area 15-2 associated with the second camera 3 and also having the shape of a circle is shown on the right.
  • the second camera 3 acquires a top view of a portion of the area beneath the hook of the crane which is partially overlapped on the portion acquired by the first camera 2, thus a part which is outside the circle associated with the first camera 2 is instead inside the circle associated with the second camera 3 (see the coils 18 which are only present in the circle of the second image I2.1 associated with the second camera 3).
  • the use of both cameras 2, 3 is not essential, i.e., applications are possible in which even a single camera is sufficient.
  • the processing device 1 is an electronic device, which in turn comprises: a data processing unit 1-1; a graphic processing unit 1-2; a memory 1-5.
  • the graphic processing unit 1-2 is connected on one side to the two cameras 2, 3 and on the other side to the data processing unit 1-1.
  • the graphic processing unit 1-2 is for example the model Nvidia GTX 1050 Ti.
  • the graphic processing unit 1-2 has the function of receiving in parallel the two flows of images I1, I2 acquired respectively by means of the cameras 2, 3, in which the acquired images of the two flows I1, I2 are representative of a top view of the dangerous area and of the top view of the possible presence of one or more people in proximity or within the dangerous area, in particular a top view of at least part of the head and/or of the body of at least one person.
  • the graphic processing unit 1-2 has the function of appropriately processing the two acquired flows of images I1, I2 by means of a parallel-type processing architecture and it has the function of generating a positioning signal S_pos indicative of the position (within the analysed image) of at least one portion of the image representative of the top view of at least part of the head and/or of the body of at least one person.
  • the graphic processing unit 1-2 is capable of both identifying in an image a portion representative of the top view of at least part of the head and/or of the body of a person, and localizing said portion within the analysed image, thus providing the position (e.g., expressed in pixel coordinates) within the image of the identified portion of image representative of the top view of at least part of the head and/or of the body of a person.
  • Figures 2A-2B show with a square 20 the position of the head and/or of the body (seen from the top) which has been identified by means of the graphic processing unit 1-2.
  • a graphic processing unit 1-2 (separate from the data processing unit 1-1) has the advantage of significantly reducing the processing time of the acquired images, by means of a parallel processing of distinct smaller portions of the same image: this allows the electronic system 10 to analyse 30 images per second for each camera and to promptly generate an alarm signal indicative of the presence of a dangerous condition with a reduced reaction time, typically less than one second, in particular equal to about 0.5 seconds, thus avoiding a possible dangerous situation of a workplace accident.
  • the data processing unit 1-1 (for example a microprocessor or a programmable logic unit) has the function of comparing the position of the top views of one or more people (identified by means of the graphic processing unit 1-2) and the perimeter of the dangerous area, in order to determine if one or more people are in proximity or within the perimeter of the dangerous area.
  • the data processing unit 1-1 is configured to generate, as a function of the positioning signal S_pos, an alarm signal S_al indicative of the presence or absence of a dangerous condition, in particular indicative of the presence of a person positioned in proximity or within the perimeter of the dangerous area or indicative of the absence of the person in the dangerous area (i.e., the person is far from the dangerous area).
  • the data processing unit 1-1 is configured to generate the alarm signal S_al having a first value (e.g., a high logical value) representative of the presence of at least one person in proximity or within the dangerous area, when the processing unit is such to detect that a top view of at least part of the head and/or of the body of a person is positioned in proximity or within the perimeter of the dangerous area; conversely, the data processing unit 1-1 is configured to generate the alarm signal S_al having a second value (e.g., a low logical value) representative of the absence of people in proximity or within the dangerous area (i.e., people are far from the dangerous area).
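The comparison carried out by the data processing unit 1-1 can be sketched as follows, under the assumption of a circular dangerous area and of positions expressed in the same pixel coordinate system; the function name and the proximity margin are illustrative and do not appear in the patent:

```python
import math

def alarm_value(person_px, hook_px, radius_px, proximity_px=0):
    """Return 1 (presence of a person in proximity or within the
    dangerous area) or 0 (absence), mirroring the two values of the
    alarm signal S_al.

    person_px    -- (x, y) position of the detected head/body top view
    hook_px      -- (x, y) position of the hook (centre of the circle)
    radius_px    -- radius of the circular dangerous area, in pixels
    proximity_px -- extra margin counted as "in proximity" (assumed)
    """
    dx = person_px[0] - hook_px[0]
    dy = person_px[1] - hook_px[1]
    distance = math.hypot(dx, dy)
    return 1 if distance <= radius_px + proximity_px else 0
```

The same check is repeated for every person localized by the graphic processing unit in each analysed image.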
  • the image I1.1 comprises a top view of the head, shoulders and a part of the trunk of a person 12 (a worker) wearing a helmet and located in the warehouse where the bridge crane on which the system 10 is mounted is installed: it can be seen that the worker 12 is partially inside the first dangerous area 15-1.
  • the data processing unit 1-1 is such to generate the alarm signal S_al having a first value (for example, a high logical value) representative of the presence of the person 12 who is partially inside the first dangerous area 15-1.
  • the data processing unit 1-1 is such to generate the alarm signal S_al having a second value (for example, a low logical value) representative of the absence of the people 13-1, 13-2 within or in proximity of the dangerous areas 15-1 and 15-2.
  • the dangerous area is divided into two or more concentric areas, each associated with a different level of danger, in which the outermost area is associated with the lowest level of danger and the innermost dangerous area is associated with a higher level of danger: this has the purpose of increasing the safety of the person, increasing his awareness of positioning with respect to the danger, thus achieving a training aim regarding the prevention of workplace accidents.
  • the dangerous area is divided into two concentric dangerous areas (for example, two concentric circles), where the outermost dangerous area is associated with a low danger level and the innermost dangerous area is associated with a high danger level.
  • the data processing unit 1-1 is configured to generate the alarm signal S_al having two possible values, as a function of the low or high danger level detected: the alarm signal S_al has a first warning value indicative of a condition of imminent danger, when the data processing unit 1-1 is such to detect the presence of at least one person positioned within the perimeter of the outer dangerous area, but still outside the inner dangerous area (for example, at a distance of less than 1 metre from the perimeter of the latter); the alarm signal S_al has a second alarm value indicative of an actual condition of danger (alarm), when the data processing unit 1-1 is such to detect the presence of at least one person positioned within the perimeter of the inner dangerous area.
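The two-level classification over concentric dangerous areas can be sketched as follows; the numeric level values and the function name are illustrative:

```python
import math

# Illustrative danger levels for two concentric dangerous areas
SAFE, WARNING, ALARM = 0, 1, 2

def danger_level(person_px, hook_px, inner_radius_px, outer_radius_px):
    """Classify a detected person against two concentric circular areas:
    ALARM inside the inner area, WARNING inside the outer area only,
    SAFE otherwise."""
    d = math.hypot(person_px[0] - hook_px[0], person_px[1] - hook_px[1])
    if d <= inner_radius_px:
        return ALARM      # actual condition of danger
    if d <= outer_radius_px:
        return WARNING    # condition of imminent danger
    return SAFE
```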
  • the dimensions of the dangerous area are dynamically varied, i.e., they are increased or decreased as a function of the desired safety requirements, or according to the safety policies defined in a company in relation to the minimum distance required between the operators and the load of the crane.
  • the data processing unit 1-1 is, for example, an Intel Core i5-6500TE (Skylake) 2.3 GHz microprocessor.
  • the alarm signal S_al can be one or a combination of the following signals: an acoustic signal generated by a siren 4 (i.e., a speaker) connected to the processing device 1 (and therefore to the data processing unit 1-1); a light signal (e.g., flashing) generated by a light source 5; a graphic and/or textual indication on a screen connected to the processing device 1 (and thus to the data processing unit 1-1) by means of a wired connection or by means of a short-distance wireless signal (for example, of the Bluetooth or WiFi type); a graphic and/or textual indication on the screen of a mobile electronic device (for example, a smartphone, tablet or laptop) connected to the processing device 1 (and therefore to the data processing unit 1-1) by means of a short-distance wireless signal (for example, of the Bluetooth or WiFi type).
  • in the application in which the dangerous area is the one beneath the hook of a bridge crane, the siren 4 and/or the light source 5 are mounted on the carriage of the bridge crane, so that the light beam emitted by the light source 5 is visible to the people who are positioned in proximity or within the dangerous area and so that the sound wave generated by the siren 4 is heard by the same people.
  • the data processing unit 1-1 runs an appropriate software program which processes the positioning signal S_pos, detects the presence of one or more people positioned in proximity and/or within the dangerous area and then generates the alarm signal S_al to drive the siren 4 and/or the light source 5 and/or a display screen.
  • the memory 1-5 is non-volatile and it has the function of storing in real time a plurality of images acquired by means of the cameras 2, 3, when the presence of at least one person is detected in proximity and/or within the defined dangerous area.
  • the memory 1-5 is configured to store a sequence of images representative of a top view of the dangerous area comprising a person positioned in proximity or within the dangerous area, starting from the instant when the data processing unit 1-1 is such to detect the presence of a person positioned in proximity or within the dangerous area, until the instant when the data processing unit 1-1 is such to detect that the person has moved away from the dangerous area (or has left the dangerous area).
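The event-bounded storage behaviour described for the memory 1-5 can be sketched as follows; the class and the frame representation are hypothetical:

```python
class EventRecorder:
    """Store frames only while a person is detected in proximity of or
    within the dangerous area, from the first detection until the person
    has moved away (the behaviour described for the memory 1-5)."""

    def __init__(self):
        self.recording = False
        self.events = []      # completed sequences of stored frames
        self._current = []

    def on_frame(self, frame, person_present):
        if person_present:
            if not self.recording:
                # detection started: open a new sequence
                self.recording = True
                self._current = []
            self._current.append(frame)
        elif self.recording:
            # person has moved away: close and archive the sequence
            self.recording = False
            self.events.append(self._current)
            self._current = []
```

Each completed sequence can then be forwarded to an external device for statistical or forensic analysis, as described below.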
  • the electronic system 10 further comprises a wireless signal transceiver (for example, of the WiFi type) and thus the electronic system 10 is connected (by means of the wireless signal transceiver) with an external electronic device by means of a wireless connection.
  • the data processing unit 1-1 is configured to read from the memory 1-5 the plurality of stored images and forward them to the wireless signal transceiver, then the wireless signal transceiver is configured to transmit said plurality of stored images to the external electronic device: it is thereby possible to carry out (in the external electronic device or in another electronic device) a subsequent processing of the plurality of stored images in order to process statistical analyses or for forensic analyses, in order to identify a possible intervention to improve safety measures in the industrial environment considered.
  • the graphic processing unit 1-2 uses Artificial Intelligence techniques in order to detect the presence of a person in proximity or within the dangerous area, in particular using a deep neural network (Deep Learning) implemented in the graphic processing unit 1-2, even more in particular a convolutional neural network.
  • the deep neural network (possibly convolutional) is first trained using a training set appropriately created based on images which contain at least one person viewed from the top in different possible positions, such as the following top views in an industrial environment: images representative of an industrial environment which comprise a top view of a standing person wearing a protective helmet; images representative of an industrial environment which comprise a top view of a standing person not wearing a protective helmet; images representative of the industrial environment which comprise a top view of a person lying down; images representative of the industrial environment which comprise a top view of a crouching person wearing a protective helmet; images representative of an industrial environment which comprise a top view of a crouching person not wearing a protective helmet; images representative of the industrial environment which comprise a top view of a person riding a bicycle.
  • during the training step, the starting height of the hook of the bridge crane with respect to the track is determined, in order to calculate the height of the transported load with respect to the ground and/or with respect to a defined point of the crane (for example, the upper vertex thereof): it is thereby possible to improve the accuracy with which a dangerous situation is detected.
  • the perimeter of the dangerous area is dynamically changed according to the height and possibly according to the type of load, appropriately increasing or decreasing the perimeter of the dangerous area.
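The patent does not specify how the perimeter varies with the load height; purely as an illustration, a linear rule clamped to the programmable 3-7 m range mentioned earlier could look like this (the gain and the linear form are assumptions):

```python
def dynamic_radius(base_radius_m, load_height_m, gain=0.5,
                   min_radius_m=3.0, max_radius_m=7.0):
    """Hypothetical rule: widen the circular dangerous area as the load
    is raised, clamped to the programmable 3-7 m range mentioned in the
    description."""
    r = base_radius_m + gain * load_height_m
    return max(min_radius_m, min(max_radius_m, r))
```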
  • the parameters of the new deep neural network are then determined: a mathematical model of the deep neural network is generated which is capable of recognizing a top view of at least one portion of the head and/or of the body of a person, both in the top views comprised in the training set and in top views of new images acquired during the subsequent normal operating step.
  • the new deep neural network is capable of successfully recognizing the presence of a person in proximity or within the dangerous area by recognizing at least one portion of a head and/or of a body of a person, whether the person is wearing one or more pieces of personal protective equipment (e.g., a helmet), or if the person is not wearing any personal protective equipment.
  • said recognition of at least one portion of a head and/or of a body of a person occurs successfully in different possible situations of the state of the person (i.e., sitting, standing, and lying down) and in different possible situations of the health of the person (for example, even when the person is lying on the ground due to an ailment).
  • the new deep neural network implemented in the graphic processing unit 1-2 is configured, during the normal operation of the electronic system 10, to recognize the presence of a person who is within or in proximity of the dangerous area beneath the hook of a crane in an industrial environment by means of the analysis of images acquired by the cameras representative of a top view of the person in the following situations: standing person wearing a protective helmet; standing person not wearing a protective helmet; person lying down; crouching person wearing a protective helmet; crouching person not wearing a protective helmet; person riding a bicycle.
  • Said trained deep neural network (possibly convolutional) is thus implemented in the electronic circuits of the graphic processing unit 1-2.
  • the deep neural network is such to identify not only the presence or absence of a top view of at least one portion of the head and/or of the body of a person, but (if present) it is such to provide the position (i.e., the location) within the analysed image of the top view of the identified portion of the head and/or of the body of the person.
  • the deep neural network is such to automatically calculate the height of the transported load with respect to the ground and/or with respect to a defined point of the crane (for example, the upper vertex thereof).
  • the deep neural network is created using the YOLO library (You Only Look Once) which provides the necessary functions to perform the recognition of objects by means of the analysis of a single image, using a single neural network for the entire image, generating at the output information indicative not only of the presence of a person, but also where the person is positioned within the analysed image.
  • the YOLO neural network and the standard libraries thereof are not designed to recognize people from the top, but are used to recognize and localize objects viewed horizontally (i.e., from the front): therefore, the Applicant used the structure of a known neural network to create a new deep neural network model for the recognition and localization of people framed from the top.
  • the original YOLO model (i.e., Darknet) has been modified.
  • This approach makes it possible to obtain the maximum possible performance from YOLO, unlike other frameworks or libraries such as Keras, TensorFlow or PyTorch which, although they provide support during the editing and training steps of the neural network, weigh down the execution of the algorithm, causing a decrease in performance both in terms of precision and of time, which are both key factors of the system.
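As an illustration of the localization output described above, a YOLO-style detection (normalized centre, width, height and confidence) can be converted into pixel bounding boxes; the raw tuple format shown here is an assumption for illustration, not the actual Darknet output:

```python
def decode_detections(raw, img_w, img_h, conf_threshold=0.5):
    """Convert YOLO-style detections, given here as tuples of
    (cx, cy, w, h, confidence) with coordinates normalized to [0, 1],
    into pixel bounding boxes (x1, y1, x2, y2, confidence), keeping
    only sufficiently confident detections."""
    boxes = []
    for cx, cy, w, h, conf in raw:
        if conf < conf_threshold:
            continue
        # normalized centre/size -> pixel corner coordinates
        x1 = int((cx - w / 2) * img_w)
        y1 = int((cy - h / 2) * img_h)
        x2 = int((cx + w / 2) * img_w)
        y2 = int((cy + h / 2) * img_h)
        boxes.append((x1, y1, x2, y2, conf))
    return boxes
```

The centre of each retained box would then be used as the position carried by the positioning signal and compared against the perimeter of the dangerous area.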
  • the electronic system 10 is further provided with the possibility of remote access and it has a web interface through which it is possible to modify the area framed by the camera and the dangerous area, display the system status and the alarm events generated by the system in real time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Burglar Alarm Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Geophysics And Detection Of Objects (AREA)
  • Alarm Systems (AREA)
  • Image Processing (AREA)

Abstract

An electronic system (10) to detect the presence of a person is disclosed. The system comprises a processing device (1) and a camera (2) connected to the processing device. The processing device comprises a graphic processing unit (1-2) and a data processing unit (1-1) connected to each other, the graphic processing unit being further connected to the camera.

Description

ELECTRONIC SYSTEM TO DETECT THE PRESENCE OF A PERSON IN A LIMITED
AREA
DESCRIPTION
TECHNICAL FIELD OF THE INVENTION
The present invention generally relates to the electronics field.
More in particular, the present invention relates to an electronic system to detect the presence of a person positioned in proximity or within a limited area.
PRIOR ART
Vision systems are known which use cameras to detect the presence of a person in a certain environment.
The Applicant has noted that the known vision systems are not capable of detecting the presence of a person in an area of an industrial environment with sufficient reliability and with a sufficiently short reaction time to avoid injuries to people, in the case where the people to be detected are framed from the top and where it is the monitored area that moves, instead of the people.
SUMMARY OF THE INVENTION
The present invention relates to an electronic system to detect the presence of a person in proximity or within a limited dangerous area as defined in the appended claim 1 and the preferred embodiments thereof described in dependent claims 2 to 11.
The Applicant has perceived that the electronic system in accordance with the present invention can detect the presence of a person in proximity or within a limited dangerous area of an industrial environment in a reliable manner (i.e., minimizing false alarms) and it allows an alarm to be generated to warn the person of a dangerous condition with a reduced reaction time (typically less than one second, for example about 0.5 seconds), so as to significantly reduce the risk of injury to the person, in the case where the people to be detected are framed from the top and where it is mainly the dangerous area that moves, instead of the people.
Furthermore, the Applicant has perceived that the electronic system in accordance with the present invention can reliably and promptly detect the presence of a person in proximity or within the dangerous area by means of the recognition of at least one portion of the head and/or of the body of a person regardless of his posture (i.e., standing, sitting, lying down) and regardless of whether he is wearing personal protective equipment (typically a helmet), unlike the known solutions which are only capable of detecting the presence of a person in particular postures (typically only standing) or only if they are wearing personal protective equipment (helmet).
It is also an object of the present invention a crane, in particular a bridge crane, as defined in the appended claim 12 and by a preferred embodiment thereof described in dependent claim 13.
BRIEF DESCRIPTION OF THE DRAWINGS
Additional features and advantages of the invention will become more apparent from the description which follows of a preferred embodiment and the variants thereof, provided by way of example with reference to the appended drawings, in which:
Figure 1 shows a block diagram of an electronic system to detect the presence of a person in proximity or within a limited dangerous area according to an embodiment of the invention;
Figures 2A-2B schematically show a pair of images acquired by a pair of cameras which frame the area beneath the hook of a bridge crane on which the electronic system of the invention is mounted, in two different examples of system operation.
DETAILED DESCRIPTION OF THE INVENTION
It should be noted that in the description below, identical or similar blocks, components or modules, even if they appear in different embodiments of the invention, are indicated by the same numerical references in the figures.
With reference to Figure 1, it shows an electronic system 10 to detect the presence of a person in proximity or within a limited dangerous area.
The dangerous area is a limited portion of an industrial environment, such as the area beneath the hook of a crane or of a bridge crane.
The dangerous area is monitored in real time to check if an operator (assigned to carry out a certain task in the considered industrial environment) is positioned in proximity or within the dangerous area, in order to take appropriate measures such as generating an audible and/or visual alarm indicative of the presence of a dangerous condition or stopping the operation of a particular machine positioned in the considered industrial environment.
The electronic system 10 is typically mounted on a mobile structure, so in this case the dangerous area is mobile, while people may also be in a stationary position in the environment considered.
The dangerous area depends on the type of application in which the system 10 is used and can have, for example, a circular, rectangular or polygonal shape. For example, the electronic system 10 is used to monitor a dangerous area beneath the hook of a crane installed in a warehouse containing several particularly heavy steel coils, for example with a weight greater than 10 tons.
It is known that a bridge crane comprises a pair of parallel tracks located at the top above the sides of a building (for example, a warehouse), along which a mobile metal bridge (called a beam) runs, on which a carriage with a winch and a gripping member is mounted, such as a hook for lifting heavy objects.
The bridge crane is used, for example, to move semi-finished materials or finished products between one department and the other of a warehouse, or towards the loading or unloading area of the goods.
In this case, the electronic system 10 is mounted on the carriage of the bridge crane and the dangerous area has the shape of a circle centred on the weight lifting hook, with a variable radius (for example around 3-7 metres) and programmable according to the desired safety requirements, or according to the safety policies defined in a company in relation to the minimum distance required for operators with respect to the load.
More in general, the shape of the dangerous area depends, among other factors, on the shape of the load of the bridge crane or crane.
The electronic system 10 comprises a processing device 1 and a pair of cameras 2, 3 electrically connected to the processing device 1.
The pair of cameras 2, 3 is positioned so as to frame from the top at least one portion of the dangerous area to be monitored, thus even people who are in proximity or within the dangerous area are framed from the top.
The pair of cameras 2, 3 is then configured to each acquire a flow I1, I2 of real-time images representative of a respective portion of the dangerous area to be monitored, in which the two portions overlap at least in part and together cover the entire dangerous area; in particular, the images acquired by the pair of cameras contain a top view of the people who are in proximity or within the defined dangerous area.
The cameras 2, 3 are, for example, Dahua HAC-HDBW2220R-Z models, which have a resolution of 2.4 Megapixels and an acquisition frequency of 30 images per second.
The processing device 1 is made, for example, with an industrial PC of the Neousys Nuvo-5000 series, in particular the 5000E/P model.
The use of two (or more than two) cameras has the advantage of providing a stereoscopic view of the monitored dangerous area, also making it possible to detect the presence of a person seen from the top in proximity or within the dangerous area, even when the framed person is partially hidden by other objects present in the dangerous area itself.
Furthermore, in the bridge crane application there is a blind spot beneath the load itself, which could (at least partially) hide a person standing within it, with the risk of failing to correctly detect his presence: the use of two (or more) cameras improves the visibility in the area beneath the load, reducing the risk of a failure to detect the presence of a person there.
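The combination of detections from the two overlapping top views can be illustrated with a minimal sketch; the `to_ground` calibration mapping and the 0.5-metre merge distance are illustrative assumptions, not details disclosed by the application:

```python
import math

def fuse_detections(cam1_pts, cam2_pts, to_ground, merge_dist_m=0.5):
    """Merge person detections from two overlapping top views into one
    list of ground positions, removing duplicates seen by both cameras.
    `to_ground` maps (camera_id, pixel_xy) -> (x, y) in metres and is
    assumed to come from a prior camera calibration."""
    merged = [to_ground(1, p) for p in cam1_pts]
    for p in cam2_pts:
        g = to_ground(2, p)
        # keep the camera-2 detection only if no camera-1 detection
        # already lies within the merge distance (same person seen twice)
        if all(math.hypot(g[0] - m[0], g[1] - m[1]) > merge_dist_m
               for m in merged):
            merged.append(g)
    return merged
```

A person hidden from one camera by the load is still present in the merged list as long as the other camera frames him, which is the redundancy benefit described above.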
With reference to Figure 2A, in the application in which the dangerous area is that beneath the hook of a bridge crane, the two cameras 2, 3 are positioned on the carriage of the bridge crane on the two opposite sides, substantially equidistant with respect to the direction defined by the weight lifting hook, and the electronic processing device 1 is mounted on the carriage.
In this case, the first camera 2 acquires a first image I1.1 (of the first flow of images I1) representative of a top view of one side of the area beneath the bridge crane hook, in which a plurality of coils 16 are positioned, while the second camera 3 acquires a second image I2.1 (of the second flow of images I2) representative of a top view of the other side of the area beneath the hook, in which the same plurality of coils 16 and additional coils 18 are positioned.
The dangerous area has the shape of a circle centred on the hook and the cameras 2, 3 have their lenses oriented so as to frame the area beneath the hook of the bridge crane; in particular, in Figure 2A the first dangerous area 15-1 associated with the first camera 2 and having the shape of a circle is shown on the left (considering the reading orientation), and the second dangerous area 15-2 associated with the second camera 3 and also having the shape of a circle is shown on the right.
In this way a stereoscopic image of a top view of both sides of the hook of the bridge crane is generated.
It can be seen in Figure 2A that the second camera 3 acquires a top view of a portion of the area beneath the hook of the crane which partially overlaps the portion acquired by the first camera 2: thus a part which is outside the circle associated with the first camera 2 is instead inside the circle associated with the second camera 3 (see the coils 18, which are only present in the circle of the second image I2.1 associated with the second camera 3).
The above observations related to Figure 2A are similarly applicable to Figure 2B, with the difference that the camera 2 acquires another image I1.2 (of the first flow of images I1) representative of a top view of one side of the area beneath the hook of the bridge crane, while the camera 3 acquires another image I2.2 (of the second flow of images I2) representative of a top view of the other side of the area beneath the hook, in which a plurality of coils 19 are positioned; the second image I2.2 partially overlaps the first image I1.2, thus a part which is outside the circle associated with the first camera 2 is instead inside the circle associated with the second camera 3 (see the coils 19, which are only present in the circle of the second image I2.2 associated with the second camera 3).
It should be noted, however, that the presence of both cameras 2, 3 is not essential, i.e., applications are possible in which even a single camera is sufficient.
Furthermore, it is possible to use more than two cameras in order to increase the reliability of the system 10 to recognize the presence of people from the top.
The processing device 1 is an electronic device, which in turn comprises: a data processing unit 1-1; a graphic processing unit 1-2; a memory 1-5.
The graphic processing unit 1-2 is connected on one side to the two cameras 2, 3 and on the other side to the data processing unit 1-1.
The graphic processing unit 1-2 (commonly referred to as a GPU, Graphics Processing Unit) is an electronic circuit dedicated to processing images efficiently and quickly, by using structures which perform image processing in parallel and by using specialized processing circuits to perform particular types of processing, such as geometric calculations, polygon rendering, texture mapping, oversampling, interpolation, vector and matrix calculations, motion compensation, etc.
The graphic processing unit 1-2 is for example the model Nvidia GTX 1050 Ti.
The graphic processing unit 1-2 has the function of receiving in parallel the two flows of images I1, I2 acquired respectively by the cameras 2, 3, in which the acquired images of the two flows I1, I2 are representative of a top view of the dangerous area and of the possible presence, seen from the top, of one or more people in proximity or within the dangerous area, in particular a top view of at least part of the head and/or of the body of at least one person. The graphic processing unit 1-2 therefore has the function of processing the two acquired flows of images I1, I2 by means of a parallel-type processing architecture and of generating a positioning signal S_pos indicative of the position (within the analysed image) of at least one portion of the image representative of the top view of at least part of the head and/or of the body of at least one person.
In other words, the graphic processing unit 1-2 is capable of both identifying in an image a portion representative of the top view of at least part of the head and/or of the body of a person, and localizing said portion within the analysed image, thus providing the position (e.g., expressed in pixel coordinates) within the image of the identified portion of image representative of the top view of at least part of the head and/or of the body of a person.
Figures 2A-2B show with a square 20 the position of the head and/or of the body (seen from the top) which has been identified by means of the graphic processing unit 1-2.
The use of a graphic processing unit 1-2 (separate from the data processing unit 1-1) has the advantage of significantly reducing the processing time of the acquired images, by means of a parallel processing of distinct smaller portions of the same image: this allows the electronic system 10 to analyse 30 images per second for each camera and to promptly generate an alarm signal indicative of the presence of a dangerous condition with a reduced reaction time, typically less than one second, in particular about 0.5 seconds, thus helping to avoid a possible workplace accident.
The data processing unit 1-1 (for example a microprocessor or a programmable logic unit) has the function of comparing the position of the top views of one or more people (identified by means of the graphic processing unit 1-2) with the perimeter of the dangerous area, in order to determine if one or more people are in proximity or within the perimeter of the dangerous area.
The data processing unit 1-1 is configured to generate, as a function of the positioning signal S_pos, an alarm signal S_al indicative of the presence or absence of a dangerous condition, in particular indicative of the presence of a person positioned in proximity or within the perimeter of the dangerous area or indicative of the absence of the person in the dangerous area (i.e., the person is far from the dangerous area). In particular, the data processing unit 1-1 is configured to generate the alarm signal S_al having a first value (e.g., a high logical value) representative of the presence of at least one person in proximity or within the dangerous area, when the processing unit detects that a top view of at least part of the head and/or of the body of a person is positioned in proximity or within the perimeter of the dangerous area; conversely, the data processing unit 1-1 is configured to generate the alarm signal S_al having a second value (e.g., a low logical value) representative of the absence of people in proximity or within the dangerous area (i.e., people are far from the dangerous area), when no such top view is detected in proximity or within the perimeter of the dangerous area.
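The comparison performed by the data processing unit 1-1 can be sketched as follows, under the assumption of a circular dangerous area and of positions expressed in pixel coordinates; the function and parameter names are illustrative, not taken from the application:

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    """Pixel position of an identified top view of a head/body."""
    x: float
    y: float

def alarm_value(detections, centre, radius_px, proximity_px=0.0):
    """Return 1 (high logical value) if any detected person lies within
    the circular dangerous area or within a proximity band around its
    perimeter, else 0 (low logical value)."""
    cx, cy = centre
    for d in detections:
        if math.hypot(d.x - cx, d.y - cy) <= radius_px + proximity_px:
            return 1
    return 0
```

The `proximity_px` band models the "in proximity of the perimeter" condition; with `proximity_px=0` only people inside the circle trigger the alarm.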
Referring again to Figure 2A, the image I1.1 comprises a top view of the head, shoulders and part of the trunk of a person 12 (a worker) wearing a helmet and located in the warehouse where the bridge crane on which the system 10 is mounted is installed: it can be seen that the worker 12 is partially inside the first dangerous area 15-1 associated with the first camera 2, thus in this case the data processing unit 1-1 generates the alarm signal S_al having a first value (for example, a high logical value) representative of the presence of the person 12 who is partially inside the first dangerous area 15-1.
Furthermore, in Figure 2A it can be seen that in the second image I2.1 (acquired by the second camera 3) there is no top view of the head and/or of the body of the worker 12, nor of other people, since the second camera 3 acquires a top view of a portion of the area beneath the hook of the crane which only partially overlaps the portion acquired by the first camera 2: the worker 12, who is not framed by the second camera 3, is instead framed by the first camera 2, thus by means of the area framed by the first camera 2 it is possible to detect the proximity of the person 12 to the circular dangerous area 15-1 associated with the first camera 2.
The above considerations related to Figure 2A are similarly applicable to Figure 2B, with the difference that the same person 13 is present both in the image I1.2 acquired by the first camera 2 (in this case indicated with 13-1) and in the image I2.2 acquired by the second camera 3 (in this case indicated with 13-2): in this case the person 13-1, 13-2 is outside both the circular dangerous area 15-1 and the circular dangerous area 15-2, and is also sufficiently far from the perimeter of the dangerous areas 15-1 and 15-2, thus the data processing unit 1-1 generates the alarm signal S_al having a second value (for example, a low logical value) representative of the absence of the person 13-1, 13-2 within or in proximity of the dangerous areas 15-1 and 15-2.
Preferably, the dangerous area is divided into two or more concentric areas, each associated with a different level of danger, in which the outermost area is associated with the lowest level of danger and the innermost area is associated with a higher level of danger: this has the purpose of increasing the safety of the person, increasing his awareness of his position with respect to the danger, thus also achieving a training aim regarding the prevention of workplace accidents.
For example, the dangerous area is divided into two concentric dangerous areas (for example, two concentric circles), where the outermost dangerous area is associated with a low danger level and the innermost dangerous area is associated with a high danger level.
In this example, the data processing unit 1-1 is configured to generate the alarm signal S_al having two possible values, as a function of the low or high danger level detected: the alarm signal S_al has a first warning value indicative of a condition of imminent danger, when the data processing unit 1-1 is such to detect the presence of at least one person positioned within the perimeter of the outer dangerous area, but still outside the inner dangerous area (for example, at a distance of less than 1 metre from the perimeter of the latter); the alarm signal S_al has a second alarm value indicative of an actual condition of danger (alarm), when the data processing unit 1-1 is such to detect the presence of at least one person positioned within the perimeter of the inner dangerous area.
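The mapping from concentric areas to warning/alarm values can be sketched with a small helper; the numeric level encoding is an assumption for illustration:

```python
def danger_level(distance_to_hook_m, inner_radius_m, outer_radius_m):
    """Map a person's distance from the hook to a danger level:
    2 = alarm (inside the inner, high-danger area),
    1 = warning (inside the outer, low-danger area),
    0 = safe (outside both concentric areas)."""
    if distance_to_hook_m <= inner_radius_m:
        return 2
    if distance_to_hook_m <= outer_radius_m:
        return 1
    return 0
```

With, say, an inner radius of 3 m and an outer radius of 5 m, a person at 4 m from the hook receives a warning, while a person at 2 m triggers the actual alarm.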
Advantageously, the dimensions of the dangerous area are dynamically varied, i.e., they are increased or decreased as a function of the desired safety requirements, or according to the safety policies defined in a company in relation to the minimum distance required between the operators and the load of the crane.
The data processing unit 1-1 is, for example, an Intel Core i5-6500TE (Skylake) 2.3 GHz microprocessor.
The alarm signal S_al can be one or a combination of the following signals: an acoustic signal generated by a siren 4 (i.e., a speaker) connected to the processing device 1 (and therefore to the data processing unit 1-1); a light signal (e.g., flashing) generated by a light source 5; a graphic and/or textual indication on a screen connected to the processing device 1 (and thus to the data processing unit 1-1) by means of a wired connection or by means of a short-distance wireless signal (for example, of the Bluetooth or WiFi type); a graphic and/or textual indication on a screen of a mobile electronic device (for example, a smartphone, tablet or laptop) connected to the processing device 1 (and therefore to the data processing unit 1-1) by means of a short-distance wireless signal (for example, of the Bluetooth or WiFi type).
Considering the application in which the dangerous area is that beneath the hook of a bridge crane, the siren 4 and/or the light source 5 are mounted on the carriage of the bridge crane, so that the light beam emitted by the light source 5 is visible to the people who are positioned in proximity or within the dangerous area and so that the sound wave generated by the siren 4 is received by the same people.
The data processing unit 1-1 runs an appropriate software program which processes the positioning signal S_pos, detects the presence of one or more people positioned in proximity and/or within the dangerous area and then generates the alarm signal S_al to drive the siren 4 and/or the light source 5 and/or a display screen.
The memory 1-5 is non-volatile and it has the function of storing in real time a plurality of images acquired by means of the cameras 2, 3, when the presence of at least one person is detected in proximity and/or within the defined dangerous area.
In particular, the memory 1-5 is configured to store a sequence of images representative of a top view of the dangerous area comprising a person positioned in proximity or within the dangerous area, starting from the instant when the data processing unit 1-1 is such to detect the presence of a person positioned in proximity or within the dangerous area, until the instant when the data processing unit 1-1 is such to detect that the person has moved away from the dangerous area (or has left the dangerous area).
It should be noted that the real-time saving of the images which triggered the alarm event is possible by virtue of the separation between the data processing unit 1-1 and the graphic processing unit 1-2, allowing the two units to operate in parallel.
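The event-bounded storage described above can be sketched as a small state machine; this is a hypothetical model of the behaviour of the memory 1-5, not the actual implementation:

```python
class AlarmRecorder:
    """Store image frames only from the instant a person is detected
    in/near the dangerous area until the instant the person leaves,
    grouping each alarm event into its own frame sequence."""
    def __init__(self):
        self.recording = False
        self.events = []      # completed frame sequences, one per event
        self._current = []

    def on_frame(self, frame, person_detected):
        if person_detected:
            if not self.recording:
                # alarm event starts: open a new sequence
                self.recording = True
                self._current = []
            self._current.append(frame)
        elif self.recording:
            # person has moved away: close and archive the sequence
            self.recording = False
            self.events.append(self._current)
            self._current = []
```

Each entry in `events` is then available for the later wireless transfer and statistical or forensic analysis described below.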
According to a preferred embodiment, the electronic system 10 further comprises a wireless signal transceiver (for example, of the WiFi type) and thus the electronic system 10 is connected (by means of the wireless signal transceiver) to an external electronic device through a wireless connection. According to this preferred embodiment, the data processing unit 1-1 is configured to read from the memory 1-5 the plurality of stored images and forward them to the wireless signal transceiver, which is then configured to transmit said plurality of stored images to the external electronic device: it is thereby possible to carry out (in the external electronic device or in another electronic device) a subsequent processing of the plurality of stored images in order to perform statistical or forensic analyses, so as to identify a possible intervention to improve the safety measures in the industrial environment considered.
Advantageously, the graphic processing unit 1-2 uses Artificial Intelligence techniques in order to detect the presence of a person in proximity or within the dangerous area, in particular using a deep neural network (Deep Learning) implemented in the graphic processing unit 1-2, even more in particular a convolutional neural network.
In this case, the deep neural network (possibly convolutional) is first trained using a training set appropriately created based on images which contain at least one person viewed from the top in different possible positions, such as the following top views in an industrial environment: images representative of an industrial environment which comprise a top view of a standing person wearing a protective helmet; images representative of an industrial environment which comprise a top view of a standing person not wearing a protective helmet; images representative of the industrial environment which comprise a top view of a person lying down; images representative of the industrial environment which comprise a top view of a crouching person wearing a protective helmet; images representative of an industrial environment which comprise a top view of a crouching person not wearing a protective helmet; images representative of the industrial environment which comprise a top view of a person riding a bicycle.
Advantageously, in the bridge crane application, in the training step the starting height of the hook of the bridge crane from the track is determined, in order to calculate the height of the transported load with respect to the ground and/or with respect to a defined point of the crane (for example, the upper vertex thereof): it is thereby possible to improve the accuracy with which a dangerous situation is detected.
In particular, the perimeter of the dangerous area is dynamically changed according to the height and possibly according to the type of load, appropriately increasing or decreasing the perimeter of the dangerous area.
It is also possible to determine the relative speed of the person with respect to the load, calculate the time which would elapse before a possible collision and vary the perimeter of the dangerous area accordingly, anticipating the alarm signal.
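A hedged sketch of how the perimeter could be varied as a function of load height and of the person's approach speed; the linear model and its coefficients are illustrative assumptions, not values disclosed by the application:

```python
def dynamic_radius(base_radius_m, load_height_m, approach_speed_mps,
                   reaction_time_s=0.5, height_gain=0.3):
    """Enlarge the dangerous-area radius as the load is lifted higher
    (assumed linear gain per metre of height) and anticipate the alarm
    by the distance the person would cover during the system reaction
    time at the measured approach speed."""
    r = base_radius_m + height_gain * load_height_m
    r += max(approach_speed_mps, 0.0) * reaction_time_s
    return r
```

For example, with a 3 m base radius, a load lifted 2 m and a person approaching at 2 m/s, the radius grows to cover both the wider fall zone and the distance covered during the roughly 0.5 s reaction time mentioned above.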
At the end of the training step, the parameters of the new deep neural network (possibly convolutional) have been determined: a mathematical model of the deep neural network is then generated which is capable of recognizing a top view of at least one portion of the head and/or of the body of a person, both in the top views comprised in the training set and in new images acquired during the subsequent normal operating step.
It should be noted that the new deep neural network is capable of successfully recognizing the presence of a person in proximity or within the dangerous area by recognizing at least one portion of a head and/or of a body of a person, whether the person is wearing one or more pieces of personal protective equipment (e.g., a helmet), or if the person is not wearing any personal protective equipment.
Furthermore, said recognition of at least one portion of a head and/or of a body of a person occurs successfully in different possible situations of the state of the person (i.e., sitting, standing, and lying down) and in different possible situations of the health of the person (for example, even when the person is lying on the ground due to an ailment).
In particular, the new deep neural network implemented in the graphic processing unit 1-2 is configured, during the normal operation of the electronic system 10, to recognize the presence of a person who is within or in proximity of the dangerous area beneath the hook of a crane in an industrial environment by means of the analysis of images acquired by the cameras representative of a top view of the person in the following situations: standing person wearing a protective helmet; standing person not wearing a protective helmet; person lying down; crouching person wearing a protective helmet; crouching person not wearing a protective helmet; person riding a bicycle.
Therefore it can be observed that by using an appropriately trained deep neural network, it is possible to distinguish with high reliability and reduced reaction time a person wearing a protective helmet (or other personal protective equipment) from a person not wearing a protective helmet.
Said trained deep neural network (possibly convolutional) is thus implemented in the electronic circuits of the graphic processing unit 1-2.
During operation, the ability to recognize a top view of at least one portion of a head and/or of a body of a person is continuously improved, by continuously updating the parameters of the deep neural network (possibly convolutional).
It should be noted that the use of artificial intelligence techniques makes it possible to identify the presence of a person in proximity or within the dangerous area by means of the analysis of a single image, i.e., without requiring a comparison between two or more subsequent images close in time, as is instead carried out in the known vision techniques for the recognition of people framed horizontally (i.e., from the front).
Advantageously, the deep neural network not only identifies the presence or absence of a top view of at least one portion of the head and/or of the body of a person, but (if present) also provides the position (i.e., the location), within the analysed image, of the identified portion of the head and/or of the body of the person.
Furthermore, in the application of the bridge crane, the deep neural network is such to automatically calculate the height of the transported load with respect to the ground and/or with respect to a defined point of the crane (for example, the upper vertex thereof).
For example, the deep neural network is created using the YOLO library (You Only Look Once) which provides the necessary functions to perform the recognition of objects by means of the analysis of a single image, using a single neural network for the entire image, generating at the output information indicative not only of the presence of a person, but also where the person is positioned within the analysed image.
It is useful to note that the YOLO neural network and its standard libraries are not designed to recognize people from the top, but are used to recognize and localize objects viewed horizontally (i.e., from the front): therefore, the Applicant used the structure of a known neural network to create a new deep neural network model for the recognition and localization of people framed from the top. In order to obtain a high-performance model capable of signalling a dangerous situation in a very short time, avoiding a possible accident, the original YOLO implementation, i.e. Darknet, has been modified. This approach provides the maximum possible performance for YOLO, unlike other frameworks or libraries such as Keras, TensorFlow or PyTorch which, although they provide support during the editing and training steps of the neural network, weigh down the execution of the algorithm, causing a decrease in performance both in terms of precision and of time, which are both key factors of the system.
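The post-processing of YOLO-style detections into pixel positions of people seen from the top might look like the following sketch; the `person_top` class name and the `(class, confidence, box)` tuple format are assumptions for illustration, not the Darknet API:

```python
def filter_top_view_people(raw_detections, conf_threshold=0.5):
    """Keep detections of the custom top-view person class above a
    confidence threshold and return the centres of their bounding
    boxes in pixel coordinates (x, y), i.e. the positions carried by
    the positioning signal S_pos."""
    centres = []
    for cls, conf, (x, y, w, h) in raw_detections:
        if cls == "person_top" and conf >= conf_threshold:
            centres.append((x + w / 2, y + h / 2))
    return centres
```

Each returned centre can then be compared against the perimeter of the dangerous area by the data processing unit.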
The electronic system 10 is further provided with remote access and has a web interface through which it is possible to modify the area framed by the cameras and the dangerous area, and to display in real time the system status and the alarm events generated by the system.

Claims

1. Electronic system (10) to detect the presence of a person, the system comprising a processing device (1) and a camera (2) connected to the processing device, the processing device comprising a graphic processing unit (1-2) and a data processing unit (1-1) connected to each other, the graphic processing unit (1-2) being further connected to the camera, wherein the camera is configured to acquire a flow of images (I1) representative of a top view of a defined dangerous area (15-1), and wherein the graphic processing unit (1-2) is configured to: receive the flow of acquired images (I1); analyse successively a single image of the flow of received images and identify, for at least one of the analysed images, at least one portion comprising a top view of at least part of the head and/or of the body of at least one person; generate a positioning signal (S_pos) indicative of the position of said portion of image representative of a top view of at least part of the head and/or of the body within the analysed image; and wherein the data processing unit (1-1) is configured to: receive the positioning signal (S_pos) indicative of the position of said portion of image representative of the top view of at least part of the head and/or of the body; compare the position of said portion of image with respect to the perimeter of the dangerous area; generate an alarm signal representative of the presence of at least one person in proximity or within the dangerous area, as a function of the comparison between the position of said portion of image and the perimeter of the dangerous area.
2. Electronic system according to claim 1, comprising a further camera (3) connected to the graphic processing unit (1-2), wherein the further camera is configured to acquire a further flow of images (I2) representative of a further top view of the dangerous area, wherein the top view and the further top view are at least partly overlapped, wherein the graphic processing unit is further configured to: receive the flow of images (I1) and the further flow of images (I2) acquired; analyse successively a single image of the flow of received images and a single image of the further flow of received images and identify, for at least one of the analysed images of the flow of received images and/or of the further flow of received images, at least one portion comprising a top view of at least part of the head and/or of the body of at least one person; generate the positioning signal (S_pos) indicative of the position of said portion of image representative of the top view of at least part of the head and/or of the body in the analysed image and/or within the further analysed image.
3. Electronic system according to claim 1 or 2, wherein the data processing unit (1-1) is configured to: generate a warning signal indicative of an imminent dangerous condition, in case of detection that said portion of image of the top view of at least part of the head and/or body is positioned in proximity of the perimeter of the dangerous area and outside the dangerous area; generate an alarm signal indicative of an actual dangerous condition, in case of detection that said portion of image of the top view of at least part of the head and/or body is positioned within the perimeter of the dangerous area.
4. Electronic system according to any one of the preceding claims, wherein the processing unit is further configured to dynamically change the dimensions of the dangerous area, increasing or decreasing the dimensions as a function of defined safety requirements, in particular the minimum required distance between people and a load of a crane.
5. Electronic system according to any one of the preceding claims, wherein the graphic processing unit (1-2) is configured to carry out said analysis of each single image of the flow of received images and/or of the further flow of received images using a deep neural network.
6. Electronic system according to claim 5, wherein the deep neural network of the graphic processing unit is configured to:
- identify said portion of image comprising the top view of at least part of the head and/or of the body of at least one person;
- generate the positioning signal representative of the position, within the analysed image, of the at least one portion comprising said top view of at least part of the head and/or of the body of at least one person.
7. Electronic system according to claim 5 or 6, wherein the deep neural network is further configured to: calculate a value of the height of the load transported by a carriage of a bridge crane with respect to a defined point, in particular with respect to the ground; increase or decrease the perimeter of the dangerous area, as a function of the calculated height value.
8. Electronic system according to any one of claims 5 to 7, wherein the deep neural network is implemented in the graphic processing unit using the You Only Look Once - YOLO - library.
9. Electronic system according to any one of claims 5 to 8, wherein the deep neural network of the graphic processing unit is configured, during the normal operation of the electronic system, to identify the at least one portion of the top view of at least part of the head and/or of the body of a person in the following situations: standing person wearing a protective helmet; standing person not wearing a protective helmet; person lying down; crouching person wearing a protective helmet; crouching person not wearing a protective helmet; person riding a bicycle.
10. Electronic system according to any one of the preceding claims, further comprising a siren (4) and/or a light source (5) connected to the data processing unit (1-1), wherein the data processing unit is configured to generate the alarm signal to:
- drive the siren to generate a sound signal indicative of the imminent or present condition of danger, in case of detection that the identified portion of the top view of at least part of the head and/or body is positioned within the dangerous area, or in proximity of its perimeter while still outside the dangerous area; and/or
- drive the light source (5) to generate a light signal indicative of the imminent or current condition of danger, in case of detection that the identified portion of the top view of at least part of the head and/or body is positioned within the dangerous area, or in proximity of its perimeter while still outside the dangerous area.
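The alarm logic of claim 10 reduces to a distance test in pixel coordinates followed by two output commands. The sketch below assumes a circular dangerous area and a fixed proximity margin; the thresholds, names and siren/light mapping are illustrative assumptions, not taken from the patent.

```python
import math

def classify_position(cx: float, cy: float, area_cx: float, area_cy: float,
                      radius_px: float, margin_px: float) -> str:
    """Return 'inside', 'near' or 'clear' for a detection whose
    bounding-box centre is (cx, cy), all in pixel coordinates."""
    d = math.hypot(cx - area_cx, cy - area_cy)
    if d <= radius_px:
        return "inside"
    if d <= radius_px + margin_px:
        return "near"
    return "clear"

def alarm_outputs(state: str) -> dict:
    """One possible reading of claim 10: the siren signals a present
    danger, the light an imminent or present one."""
    return {"siren": state == "inside", "light": state in ("inside", "near")}
```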
11. Electronic system according to any one of the preceding claims, wherein the position of said portion of image representative of the top view of at least part of the head and/or body is expressed in coordinates of pixels.
12. Bridge crane comprising an electronic system to detect the presence of a person in proximity or within a limited dangerous area according to any one of the preceding claims, the bridge crane comprising a movable carriage on which the processing device and the at least one camera configured to frame the dangerous area from the top are mounted, wherein the dangerous area is the area beneath the hook of the bridge crane and has substantially the shape of a circle or of a polygon centred at the hook.
13. Bridge crane according to claim 12, comprising a first and a second camera mounted on opposite sides and substantially equally spaced with respect to the direction defined by the movement of the hook for lifting weights, wherein the first camera is configured to frame from the top a first side with respect to the hook and the second camera is configured to frame from the top a second side with respect to the hook, opposite the first side.
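With the two cameras of claim 13 each framing one side of the hook, the two detection streams only need to be OR-combined: a person reported by either camera suffices to raise the alarm. The detection tuples and the predicate below are illustrative assumptions.

```python
def any_person_in_danger(detections_cam1, detections_cam2, is_in_danger) -> bool:
    """True if either camera's top-view detections contain at least one
    person inside or near the dangerous area, per the given predicate."""
    return any(is_in_danger(d) for d in (*detections_cam1, *detections_cam2))

# Example: each detection carries a distance (in pixels) from the hook axis.
near_hook = any_person_in_danger([("person", 40.0)], [], lambda d: d[1] < 100)
```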
EP21746145.8A (priority date 2020-07-02, filing date 2021-06-30): Electronic system to detect the presence of a person in a limited area. Status: Pending. Published as EP4176379A1 (en).

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102020000016051A (published as IT202000016051A1) 2020-07-02 2020-07-02 ELECTRONIC SYSTEM FOR DETECTING THE PRESENCE OF A PERSON IN A LIMITED AREA
PCT/IB2021/055860 (published as WO2022003589A1) 2020-07-02 2021-06-30 Electronic system to detect the presence of a person in a limited area

Publications (1)

Publication Number Publication Date
EP4176379A1 (en) 2023-05-10

Family

ID=72644652

Family Applications (1)

Application Number Priority Date Filing Date Title
EP21746145.8A 2020-07-02 2021-06-30 Electronic system to detect the presence of a person in a limited area (published as EP4176379A1 (en), status: Pending)

Country Status (3)

Country Link
EP (1) EP4176379A1 (en)
IT (1) IT202000016051A1 (en)
WO (1) WO2022003589A1 (en)

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP2010241548A * 2009-04-03 2010-10-28 The Kansai Electric Power Co Inc Safety confirmation device of crane
WO2019151876A1 * 2018-02-02 2019-08-08 Digital Logistics As Cargo detection and tracking
CN110745704B * 2019-12-20 2020-04-10 Guangdong Bozhilin Robot Co Ltd Tower crane early warning method and device
* Cited by examiner, † Cited by third party

Also Published As

Publication number Publication date
WO2022003589A1 (en) 2022-01-06
IT202000016051A1 (en) 2022-01-02


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20221229

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)