WO2022003589A1 - Electronic system to detect the presence of a person in a limited area - Google Patents

Electronic system to detect the presence of a person in a limited area

Info

Publication number
WO2022003589A1
WO2022003589A1 (PCT/IB2021/055860)
Authority
WO
WIPO (PCT)
Prior art keywords
person
dangerous area
processing unit
top view
image
Prior art date
Application number
PCT/IB2021/055860
Other languages
English (en)
French (fr)
Inventor
Giovanni Andrea Farina
Stefano Della Valle
Original Assignee
Itway S.P.A.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Itway S.P.A. filed Critical Itway S.P.A.
Priority to EP21746145.8A priority Critical patent/EP4176379A1/en
Publication of WO2022003589A1 publication Critical patent/WO2022003589A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the present invention generally relates to the electronics field.
  • the present invention relates to an electronic system to detect the presence of a person positioned in proximity or within a limited area.
  • Vision systems are known which use cameras to detect the presence of a person in a certain environment.
  • the known vision systems, however, are not capable of detecting the presence of a person in an area of an industrial environment with sufficient reliability, and with a reaction time short enough to avoid injuries to people, when the people to be detected are framed from the top and when it is the monitored area that moves, instead of the people.
  • the present invention relates to an electronic system to detect the presence of a person in proximity or within a limited dangerous area as defined in the appended claim 1 and the preferred embodiments thereof described in dependent claims 2 to 11.
  • the electronic system in accordance with the present invention can detect the presence of a person in proximity or within a limited dangerous area of an industrial environment in a reliable manner (i.e., minimizing false alarms) and allows an alarm to be generated to warn the person of a dangerous condition with a reduced reaction time (typically less than one second, for example about 0.5 seconds), such as to significantly reduce the risk of injury, in the case where the people to be detected are framed from the top and where it is mainly the dangerous area that moves, instead of the people.
  • the Applicant has perceived that the electronic system in accordance with the present invention can reliably and promptly detect the presence of a person in proximity or within the dangerous area by means of the recognition of at least one portion of the head and/or of the body of a person regardless of his posture (i.e., standing, sitting, lying down) and regardless of whether he is wearing personal protective equipment (typically a helmet), unlike the known solutions which are only capable of detecting the presence of a person in particular postures (typically only standing) or only if they are wearing personal protective equipment (helmet).
  • Figure 1 shows a block diagram of an electronic system to detect the presence of a person in proximity or within a limited dangerous area according to an embodiment of the invention
  • Figures 2A-2B schematically show a pair of images acquired by a pair of cameras which frame the area beneath the hook of a bridge crane on which the electronic system of the invention is mounted, in two different examples of system operation.
  • FIG. 1 shows an electronic system 10 to detect the presence of a person in proximity or within a limited dangerous area.
  • the dangerous area is a limited portion of an industrial environment, such as the area beneath the hook of a crane or of a bridge crane.
  • the dangerous area is monitored in real time to check if an operator (assigned to carry out a certain task in the considered industrial environment) is positioned in proximity or within the dangerous area, in order to take appropriate measures such as generating an audible and/or visual alarm indicative of the presence of a dangerous condition or stopping the operation of a particular machine positioned in the considered industrial environment.
  • the electronic system 10 is typically mounted on a mobile structure, so in this case the dangerous area is mobile, while people may also be in a stationary position in the environment considered.
  • the dangerous area depends on the type of application in which the system 10 is used and can have, for example, a circular, rectangular or polygonal shape.
  • the electronic system 10 is used to monitor a dangerous area beneath the hook of a crane installed in a warehouse where there are several steel coils which are particularly heavy, for example having a weight greater than 10 tons.
  • a bridge crane comprises a pair of parallel tracks located at the top above the sides of a building (for example, a warehouse), through which a mobile metal bridge (called a beam) runs, on which a carriage with a winch and a gripping member is mounted, such as a hook for lifting heavy objects.
  • the bridge crane is used, for example, to move semi-finished materials or finished products between departments of a warehouse, or towards the goods loading or unloading area.
  • the electronic system 10 is mounted on the carriage of the bridge crane and the dangerous area has the shape of a circle centred on the weight lifting hook, with a variable radius (for example around 3-7 metres) and programmable according to the desired safety requirements, or according to the safety policies defined in a company in relation to the minimum distance required for operators with respect to the load.
  • the shape of the dangerous area depends on the form factor of the load of the bridge crane or crane.
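The circular danger zone centred on the hook with a programmable radius reduces, in code terms, to a point-in-circle test. A minimal sketch (the coordinate frame and all names are illustrative, not from the patent text):

```python
import math

def in_danger_zone(px, py, hook_x, hook_y, radius_m):
    # True when the ground point (px, py), in metres, lies inside the
    # circular danger zone of programmable radius centred on the hook.
    return math.hypot(px - hook_x, py - hook_y) <= radius_m

# Hypothetical check: zone radius 5 m, hook at the origin.
near = in_danger_zone(3.0, 3.0, 0.0, 0.0, 5.0)   # person ~4.24 m from hook
far = in_danger_zone(4.0, 4.0, 0.0, 0.0, 5.0)    # person ~5.66 m from hook
```

The radius argument is what the text calls the programmable radius (e.g., 3-7 metres) set by the company safety policy.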
  • the electronic system 10 comprises a processing device 1 and a pair of cameras 2, 3 electrically connected to the processing device 1.
  • the pair of cameras 2, 3 is positioned so as to frame from the top at least one portion of the dangerous area to be monitored, thus even people who are in proximity or within the dangerous area are framed from the top.
  • the pair of cameras 2, 3 is thus configured to each acquire a flow I1, I2 of real-time images representative of a respective portion of the dangerous area to be monitored, in which the two portions overlap at least in part and together cover the entire dangerous area; in particular, the images acquired by the pair of cameras contain a top view of the people who are in proximity or within the defined dangerous area.
  • the cameras 2, 3 are, for example, HWIN® Dahua HAC-HDBW2220R-Z, which have a resolution of 2.4 megapixels and an acquisition frequency of 30 images per second.
  • the processing device 1 is made, for example, with a PC of the industrial Neousys Nuvo-5000 series, in particular the 5000E/P model.
  • the use of two (or more than two) cameras has the advantage of providing a stereoscopic view of the monitored dangerous area, making it possible to detect the presence of a person seen from the top, positioned in proximity or within the dangerous area, even when the framed person is partially hidden by other objects present in the dangerous area itself.
  • the use of two (or more) cameras improves visibility in the area beneath the load, reducing the risk of failing to detect the presence of a person in that area.
  • FIG. 2A shows the application in which the dangerous area is that beneath the hook of a bridge crane; the two cameras 2, 3 are positioned on the carriage of the bridge crane, on the two opposite sides and substantially equidistant with respect to the direction defined by the weight lifting hook, and the electronic processing device 1 is mounted on the carriage.
  • the first camera 2 is such to acquire a first image I1.1 (of the first flow of images I1) representative of a top view of one side of the area beneath the bridge crane hook, in which a plurality of coils 16 are positioned.
  • the second camera 3 is such to acquire a second image I2.1 (of the second flow of images I2) representative of a top view of the other side of the area beneath the hook, in which the same plurality of coils 16 and additional coils 18 are positioned.
  • the dangerous area has the shape of a circle centred on the hook and the cameras 2, 3 have the lens oriented so as to frame the area beneath the hook of the bridge crane; in particular, in Figure 2A the first dangerous area 15-1 associated with the first camera 2 and having the shape of a circle is shown on the left (considering the reading orientation), and the second dangerous area 15-2 associated with the second camera 3 and also having the shape of a circle is shown on the right.
  • the second camera 3 acquires a top view of a portion of the area beneath the hook of the crane which partially overlaps the portion acquired by the first camera 2; thus a part which is outside the circle associated with the first camera 2 is instead inside the circle associated with the second camera 3 (see the coils 18, which are only present in the circle of the second image I2.1 associated with the second camera 3).
  • however, the use of both cameras 2, 3 is not essential, i.e., applications are possible in which even a single camera is sufficient.
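The dual-view decision just described can be sketched as a union of per-camera zone checks: a person hidden in one view can still be caught in the overlapping view. All names and the pixel-coordinate layout are illustrative assumptions, not the patent's actual implementation:

```python
def inside_circle(point, zone):
    # zone is (cx, cy, r): circle centre and radius in pixel coordinates.
    cx, cy, r = zone
    return (point[0] - cx) ** 2 + (point[1] - cy) ** 2 <= r * r

def person_in_monitored_area(dets_cam1, dets_cam2, zone1, zone2):
    # A person is flagged when any detection (an (x, y) pixel centre of a
    # head/body top view) from either camera falls inside that camera's
    # circular danger zone.
    return (any(inside_circle(p, zone1) for p in dets_cam1)
            or any(inside_circle(p, zone2) for p in dets_cam2))
```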
  • the processing device 1 is an electronic device, which in turn comprises: a data processing unit 1-1; a graphic processing unit 1-2; a memory 1-5.
  • the graphic processing unit 1-2 is connected on one side to the two cameras 2, 3 and on the other side to the data processing unit 1-1.
  • the graphic processing unit 1-2 is for example the model Nvidia GTX 1050 Ti.
  • the graphic processing unit 1-2 has the function of receiving in parallel the two flows of images I1, I2 acquired respectively by means of the cameras 2, 3, in which the acquired images of the two flows I1, I2 are representative of a top view of the dangerous area and of the possible presence of one or more people in proximity or within the dangerous area, in particular a top view of at least part of the head and/or of the body of at least one person.
  • the graphic processing unit 1-2 has the function of appropriately processing the two acquired flows of images I1, I2 by means of a parallel-type processing architecture, and of generating a positioning signal S_pos indicative of the position (within the analysed image) of at least one portion of the image representative of the top view of at least part of the head and/or of the body of at least one person.
  • the graphic processing unit 1-2 is capable of both identifying in an image a portion representative of the top view of at least part of the head and/or of the body of a person, and localizing said portion within the analysed image, thus providing the position (e.g., expressed in pixel coordinates) within the image of the identified portion of image representative of the top view of at least part of the head and/or of the body of a person.
  • Figures 2A-2B show with a square 20 the position of the head and/or of the body (seen from the top) which has been identified by means of the graphic processing unit 1-2.
  • a graphic processing unit 1-2 (separate from the data processing unit 1-1) has the advantage of significantly reducing the processing time of the acquired images, by means of a parallel processing of distinct smaller portions of the same image: this allows the electronic system 10 to analyse 30 images per second for each camera and to promptly generate an alarm signal indicative of the presence of a dangerous condition with a reduced reaction time, typically less than one second, in particular equal to about 0.5 seconds, thus helping to avoid a workplace accident.
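The parallel handling of the two image flows can be sketched with one worker per camera; this is a CPU-threading sketch only (the patent uses a GPU with per-image tile parallelism), and `detect` is a placeholder, not the actual inference code:

```python
from concurrent.futures import ThreadPoolExecutor

def detect(frame):
    # Placeholder for the per-image inference stage; a real system would
    # run the trained network here and return localized detections.
    return ("detections-for", frame)

def process_streams(frames_cam1, frames_cam2):
    # One worker per camera, mirroring the parallel processing of the
    # two image flows I1 and I2 described above.
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(lambda fs: [detect(f) for f in fs], frames_cam1)
        f2 = pool.submit(lambda fs: [detect(f) for f in fs], frames_cam2)
        return f1.result(), f2.result()
```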
  • the data processing unit 1-1 (for example, a microprocessor or a programmable logic unit) has the function of comparing the position of the top views of one or more people (identified by means of the graphic processing unit 1-2) with the perimeter of the dangerous area, in order to determine if one or more people are in proximity or within the perimeter of the dangerous area.
  • the data processing unit 1-1 is configured to generate, as a function of the positioning signal S_pos, an alarm signal S_al indicative of the presence or absence of a dangerous condition, in particular indicative of the presence of a person positioned in proximity or within the perimeter of the dangerous area or indicative of the absence of the person in the dangerous area (i.e., the person is far from the dangerous area).
  • the data processing unit 1-1 is configured to generate the alarm signal S_al having a first value (e.g., a high logical value) representative of the presence of at least one person in proximity or within the dangerous area, when the processing unit is such to detect that a top view of at least part of the head and/or of the body of a person is positioned in proximity or within the perimeter of the dangerous area; conversely, the data processing unit 1-1 is configured to generate the alarm signal S_al having a second value (e.g., a low logical value) representative of the absence of people in proximity or within the dangerous area (i.e., people are far from the dangerous area).
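The two-valued alarm logic above can be sketched as a pure function from the positioning signal to the alarm value. The pixel-coordinate representation and the idea of folding the "proximity" margin into the radius are illustrative assumptions:

```python
def alarm_signal(s_pos, zone):
    # s_pos: list of (x, y) pixel positions of detected head/body top
    # views, as localized by the graphic processing unit.
    # zone: (cx, cy, r), with r assumed already enlarged by a proximity
    # margin. Returns 1 (person in proximity or within the zone, high
    # logical value) or 0 (no person near the zone, low logical value).
    cx, cy, r = zone
    return 1 if any((x - cx) ** 2 + (y - cy) ** 2 <= r * r
                    for x, y in s_pos) else 0
```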
  • the image I1.1 comprises a top view of the head, shoulders and part of the trunk of a person 12 (a worker) wearing a helmet and located in the warehouse where the bridge crane on which the system 10 is mounted is installed: it can be seen that the worker 12 is partially inside the first dangerous area 15-1.
  • the data processing unit 1-1 is such to generate the alarm signal S_al having a first value (for example, a high logical value) representative of the presence of the person 12 who is partially inside the first dangerous area 15-1.
  • the data processing unit 1-1 is such to generate the alarm signal S_al having a second value (for example, a low logical value) representative of the absence of the persons 13-1, 13-2 within or in proximity of the dangerous areas 15-1 and 15-2.
  • the dangerous area is divided into two or more concentric areas, each associated with a different level of danger, in which the outermost area is associated with the lowest level of danger and the innermost dangerous area is associated with a higher level of danger: this has the purpose of increasing the safety of the person, increasing his awareness of positioning with respect to the danger, thus achieving a training aim regarding the prevention of workplace accidents.
  • the dangerous area is divided into two concentric dangerous areas (for example, two concentric circles), where the outermost dangerous area is associated with a low danger level and the innermost dangerous area is associated with a high danger level.
  • the data processing unit 1-1 is configured to generate the alarm signal S_al having two possible values, as a function of the low or high danger level detected: the alarm signal S_al has a first warning value indicative of a condition of imminent danger, when the data processing unit 1-1 is such to detect the presence of at least one person positioned within the perimeter of the outer dangerous area, but still outside the inner dangerous area (for example, at a distance of less than 1 metre from the perimeter of the latter); the alarm signal S_al has a second alarm value indicative of an actual condition of danger (alarm), when the data processing unit 1-1 is such to detect the presence of at least one person positioned within the perimeter of the inner dangerous area.
  • the dimensions of the dangerous area are dynamically varied, i.e., they are increased or decreased as a function of the desired safety requirements, or according to the safety policies defined in a company in relation to the minimum distance required between the operators and the load of the crane.
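The two concentric zones with run-time-adjustable radii reduce to a graded distance check. A minimal sketch (metric coordinates and names are illustrative):

```python
import math

def danger_level(pos, hook, r_inner, r_outer):
    # 0 = safe, 1 = warning (inside the outer circle only, imminent
    # danger), 2 = alarm (inside the inner circle, actual danger).
    # r_inner and r_outer can be varied dynamically to follow the
    # company safety policy, as described above.
    d = math.hypot(pos[0] - hook[0], pos[1] - hook[1])
    if d <= r_inner:
        return 2
    if d <= r_outer:
        return 1
    return 0
```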
  • the data processing unit 1-1 is, for example, an Intel Core i5-6500TE (Skylake) 2.3 GHz microprocessor.
  • the alarm signal S_al can be one or a combination of the following signals: an acoustic signal generated by a siren 4 (i.e., a speaker) connected to the processing device 1 (and therefore to the data processing unit 1-1); a light signal (e.g., flashing) generated by a light source 5; a graphic and/or textual indication on a screen connected to the processing device 1 (and thus to the data processing unit 1-1) by means of a wired connection or a short-distance wireless signal (for example, of the Bluetooth or WiFi type); a graphic and/or textual indication on the screen of a mobile electronic device (for example, a smartphone, tablet or laptop) connected to the processing device 1 (and therefore to the data processing unit 1-1) by means of a short-distance wireless signal (for example, of the Bluetooth or WiFi type).
  • when the dangerous area is that beneath the hook of a bridge crane, the siren 4 and/or the light source 5 are mounted on the carriage of the bridge crane, so that the light beam emitted by the light source 5 is visible to the people positioned in proximity or within the dangerous area and the sound wave generated by the siren 4 is heard by the same people.
  • the data processing unit 1-1 runs a software program which processes the positioning signal S_pos, detects the presence of one or more people positioned in proximity and/or within the dangerous area, and then generates the alarm signal S_al to drive the siren 4 and/or the light source 5 and/or a display screen.
  • the memory 1-5 is non-volatile and it has the function of storing in real time a plurality of images acquired by means of the cameras 2, 3, when the presence of at least one person is detected in proximity and/or within the defined dangerous area.
  • the memory 1-5 is configured to store a sequence of images representative of a top view of the dangerous area comprising a person positioned in proximity or within the dangerous area, starting from the instant when the data processing unit 1-1 is such to detect the presence of a person positioned in proximity or within the dangerous area, until the instant when the data processing unit 1-1 is such to detect that the person has moved away from the dangerous area (or has left the dangerous area).
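The storage policy just described (store frames from the first detection until the person has moved away) can be sketched as a small state machine over a chronological stream of per-frame detection flags; the data layout is an illustrative assumption:

```python
def record_event(frames_with_flags):
    # frames_with_flags: chronological (frame, person_present) pairs.
    # Returns the frames to store: from the instant the person is first
    # detected until the instant the person has moved away.
    stored, recording = [], False
    for frame, present in frames_with_flags:
        if present:
            recording = True
        elif recording:
            break  # the person has left the dangerous area: stop storing
        if recording:
            stored.append(frame)
    return stored
```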
  • the electronic system 10 further comprises a wireless signal transceiver (for example, of the WiFi type) and thus the electronic system 10 is connected (by means of the wireless signal transceiver) with an external electronic device by means of a wireless connection.
  • the data processing unit 1-1 is configured to read the plurality of stored images from the memory 1-5 and forward them to the wireless signal transceiver, which is configured to transmit said plurality of stored images to the external electronic device: it is thereby possible to carry out (in the external electronic device or in another electronic device) a subsequent processing of the stored images for statistical or forensic analyses, in order to identify possible interventions to improve the safety measures of the industrial environment considered.
  • the graphic processing unit 1-2 uses Artificial Intelligence techniques in order to detect the presence of a person in proximity or within the dangerous area, in particular using a deep neural network (Deep Learning) implemented in the graphic processing unit 1-2, even more in particular a convolutional neural network.
  • the deep neural network (possibly convolutional) is first trained using a training set appropriately created based on images which contain at least one person viewed from the top in different possible positions, such as the following top views in an industrial environment: images representative of an industrial environment which comprise a top view of a standing person wearing a protective helmet; images representative of an industrial environment which comprise a top view of a standing person not wearing a protective helmet; images representative of the industrial environment which comprise a top view of a person lying down; images representative of the industrial environment which comprise a top view of a crouching person wearing a protective helmet; images representative of an industrial environment which comprise a top view of a crouching person not wearing a protective helmet; images representative of the industrial environment which comprise a top view of a person riding a bicycle.
  • during the training step, the starting height of the hook of the bridge crane from the track is also determined, in order to calculate the height of the transported load with respect to the ground and/or with respect to a defined point of the crane (for example, its upper vertex): it is thereby possible to improve the accuracy with which a dangerous situation is detected.
  • the perimeter of the dangerous area is dynamically changed according to the height and possibly according to the type of load, appropriately increasing or decreasing the perimeter of the dangerous area.
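One way to picture the dynamic perimeter is a radius that grows with load height. The linear rule and coefficient below are purely hypothetical (the patent only states that the perimeter is increased or decreased according to height and load type):

```python
def zone_radius(base_radius_m, load_height_m, k=0.5):
    # Hypothetical rule, not specified in the patent text: enlarge the
    # circular danger zone as the load is lifted higher, since a load
    # dropped from greater height can land farther from the hook's
    # vertical. k is an illustrative coefficient set by safety policy.
    return base_radius_m + k * load_height_m
```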
  • the parameters of the new deep neural network are then determined: a mathematical model of the deep neural network is generated which is capable of recognizing a top view of at least one portion of the head and/or of the body of a person, both in the top views comprised in the training set and in the top views of new images acquired during the subsequent normal operating step.
  • the new deep neural network is capable of successfully recognizing the presence of a person in proximity or within the dangerous area by recognizing at least one portion of a head and/or of a body of a person, whether or not the person is wearing one or more pieces of personal protective equipment (e.g., a helmet).
  • said recognition of at least one portion of a head and/or of a body of a person occurs successfully in the different possible postures of the person (i.e., sitting, standing, lying down) and in different possible health conditions of the person (for example, even when the person is lying on the ground due to an ailment).
  • the new deep neural network implemented in the graphic processing unit 1-2 is configured, during the normal operation of the electronic system 10, to recognize the presence of a person who is within or in proximity of the dangerous area beneath the hook of a crane in an industrial environment by means of the analysis of images acquired by the cameras representative of a top view of the person in the following situations: standing person wearing a protective helmet; standing person not wearing a protective helmet; person lying down; crouching person wearing a protective helmet; crouching person not wearing a protective helmet; person riding a bicycle.
  • Said trained deep neural network (possibly convolutional) is thus implemented in the electronic circuits of the graphic processing unit 1-2.
  • the deep neural network is such to identify not only the presence or absence of a top view of at least one portion of the head and/or of the body of a person, but (if present) it is such to provide the position (i.e., the location) within the analysed image of the top view of the identified portion of the head and/or of the body of the person.
  • the deep neural network is such to automatically calculate the height of the transported load with respect to the ground and/or with respect to a defined point of the crane (for example, the upper vertex thereof).
  • the deep neural network is created using the YOLO library (You Only Look Once) which provides the necessary functions to perform the recognition of objects by means of the analysis of a single image, using a single neural network for the entire image, generating at the output information indicative not only of the presence of a person, but also where the person is positioned within the analysed image.
  • the YOLO neural network and its standard libraries are not designed to recognize people from the top, but to recognize and localize objects viewed horizontally (i.e., frontally): therefore, the Applicant used the structure of a known neural network to create a new deep neural network model for the recognition and localization of people framed from the top.
  • the original YOLO model (i.e., Darknet) has been modified.
  • This approach yields the maximum possible performance for YOLO, unlike other frameworks or libraries such as Keras, TensorFlow or PyTorch which, although they provide support during the editing and training of the neural network, weigh down the execution of the algorithm, causing a decrease in performance in terms of both precision and time, both key factors of the system.
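The single-pass output of a YOLO-style network (which reports both that a person is present and where) can be decoded into pixel boxes as sketched below. This is only the generic post-processing stage in pure Python; the row layout is the common YOLO convention, and the trained top-view person model itself is the Applicant's modified Darknet, not shown here:

```python
def decode_yolo_rows(rows, img_w, img_h, conf_thr=0.5):
    # Each row is [cx, cy, w, h, objectness, class-scores...], with all
    # coordinates normalized to [0, 1]. Returns (x, y, w, h, score)
    # pixel boxes for detections above the confidence threshold, where
    # (x, y) is the top-left corner: both presence and position.
    boxes = []
    for cx, cy, w, h, obj, *cls in rows:
        score = obj * max(cls) if cls else obj
        if score < conf_thr:
            continue
        x = round((cx - w / 2) * img_w)
        y = round((cy - h / 2) * img_h)
        boxes.append((x, y, round(w * img_w), round(h * img_h), score))
    return boxes
```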
  • the electronic system 10 is further provided with remote access and has a web interface through which it is possible to modify the area framed by the cameras and the dangerous area, and to display in real time the system status and the alarm events generated by the system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Burglar Alarm Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Geophysics And Detection Of Objects (AREA)
  • Alarm Systems (AREA)
  • Image Processing (AREA)
PCT/IB2021/055860 2020-07-02 2021-06-30 Electronic system to detect the presence of a person in a limited area WO2022003589A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21746145.8A EP4176379A1 (en) 2020-07-02 2021-06-30 Electronic system to detect the presence of a person in a limited area

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102020000016051A IT202000016051A1 (it) 2020-07-02 2020-07-02 Sistema elettronico per rilevare la presenza di una persona in un’area limitata
IT102020000016051 2020-07-02

Publications (1)

Publication Number Publication Date
WO2022003589A1 true WO2022003589A1 (en) 2022-01-06

Family

ID=72644652

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2021/055860 WO2022003589A1 (en) 2020-07-02 2021-06-30 Electronic system to detect the presence of a person in a limited area

Country Status (3)

Country Link
EP (1) EP4176379A1 (it)
IT (1) IT202000016051A1 (it)
WO (1) WO2022003589A1 (it)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010241548A (ja) * 2009-04-03 2010-10-28 Kansai Electric Power Co Inc:The クレーンの安全確認装置
WO2019151876A1 (en) * 2018-02-02 2019-08-08 Digital Logistics As Cargo detection and tracking
CN110745704A (zh) * 2019-12-20 2020-02-04 广东博智林机器人有限公司 一种塔吊预警方法及装置


Also Published As

Publication number Publication date
EP4176379A1 (en) 2023-05-10
IT202000016051A1 (it) 2022-01-02

Similar Documents

Publication Publication Date Title
US10636308B2 (en) Systems and methods for collision avoidance
CN109095356B (zh) 工程机械及其作业空间动态防碰撞方法、装置和系统
US20180141789A1 (en) Optical detection system for lift crane
JP6333741B2 (ja) 自動機械の危険な作業区域の安全を守るための方法および装置
US9809115B2 (en) Operator drowsiness detection in surface mines
CN111226178A (zh) 监视设备、工业系统、用于监视的方法及计算机程序
EP3826948B1 (en) Pedestrian-vehicle safety systems for loading docks
CA2845440A1 (en) Method and system for reducing the risk of a moving machine colliding with personnel or an object
US20200255267A1 (en) A safety system for a machine
KR20160130332A (ko) 산업용 및 건설용 중장비 접근감시 및 작동 감속·정지 제어시스템
EP4152272A1 (en) Systems and methods for low latency analytics and control of devices via edge nodes and next generation networks
CN112836563A (zh) 作业现场分类系统和方法
JP2021139283A (ja) 検出システム
CN110231819A (zh) 用于避免碰撞的系统和用于避免碰撞的方法
CA3126243C (en) Monitoring and alerting systems for detecting hazardous conditions at loading docks
JP2018036920A (ja) 視野外障害物検知システム
JPH05261692A (ja) ロボットの作業環境監視装置
CN102092640A (zh) 起重机的安全监控装置、方法以及包括该装置的起重机
WO2022003589A1 (en) Electronic system to detect the presence of a person in a limited area
KR20230094768A (ko) 안전 보호구 착용 상태 모니터링 방법 및 그 방법을 제공하는 서버
JP2015005152A (ja) クレーンの吊り荷と作業員の接近警告システム
JP2021163401A (ja) 人物検知システム、人物検知プログラム、学習済みモデル生成プログラム及び学習済みモデル
JP6960299B2 (ja) 吊荷警報システム
JP2012174058A (ja) 特定事象検出装置、システム、方法及びプログラム
EP3838476B1 (en) Machine tool for industrial processing operations comprising a system to monitor its operational safety, and method therefore

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21746145

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2021746145

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2021746145

Country of ref document: EP

Effective date: 20230202

NENP Non-entry into the national phase

Ref country code: DE