WO2023095938A1 - Safety accident prevention method and apparatus therefor - Google Patents

Safety accident prevention method and apparatus therefor

Info

Publication number
WO2023095938A1
Authority
WO
WIPO (PCT)
Prior art keywords
risk factor
space
safety accident
virtual
accident prevention
Prior art date
Application number
PCT/KR2021/017342
Other languages
English (en)
Korean (ko)
Inventor
심용수
심상우
Original Assignee
심용수
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 심용수 filed Critical 심용수
Priority to PCT/KR2021/017342
Publication of WO2023095938A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • Embodiments of the present invention relate to a method and device for preventing safety accidents at industrial sites, and more particularly, to a method and device for identifying and providing risk factors in advance to prevent safety accidents.
  • A technical problem to be achieved by an embodiment of the present invention is to provide a method and apparatus for identifying and providing risk factors in order to prevent safety accidents at industrial sites.
  • An example of a safety accident prevention method at an industrial site for achieving the above technical problem includes: creating a virtual space by photographing a real space; comparing a 3D model of the real space with the virtual space to determine a region where there is a difference between the 3D model and the virtual space; and presenting the region where the difference exists.
  • An example of a safety accident prevention device for achieving the above technical problem includes: a virtual space generator that creates a virtual space by photographing a real space; a comparison unit that compares a 3D model of the real space with the virtual space and determines a region in which there is a difference between the 3D model and the virtual space; and a risk factor presenting unit that presents the region in which the difference exists.
  • According to an embodiment of the present invention, safety accident risk factors at industrial sites can be identified and reported to the person in charge.
  • In addition, by using a depth camera and LiDAR together to generate a virtual space that matches the real space without error, a risk factor in the difference region between the 3D model and the virtual space can be determined accurately.
  • FIG. 1 is a diagram showing an example of a device capable of preventing safety accidents at industrial sites according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing an example of a safety accident prevention method according to an embodiment of the present invention.
  • FIG. 3 is a diagram showing an example of a risk factor identification method according to an embodiment of the present invention.
  • FIG. 4 is a diagram showing an example of a method of determining whether a risk factor exists by identifying a virtual object in a virtual space according to an embodiment of the present invention.
  • FIG. 5 is a diagram showing an example of a method for determining whether a risk factor exists using a risk factor processing result list according to an embodiment of the present invention.
  • FIG. 6 is a flowchart showing an example of a safety accident prevention method according to an embodiment of the present invention.
  • FIG. 7 is a view showing the configuration of an example of a safety accident prevention device according to an embodiment of the present invention.
  • FIG. 8 is a diagram showing an example of a schematic configuration of a system for creating objects in a virtual space according to an embodiment of the present invention.
  • FIG. 9 is a view showing an example of a photographing method of a photographing device according to an embodiment of the present invention.
  • FIG. 10 is a flowchart illustrating an example of a method of generating an object in a virtual space according to an embodiment of the present invention.
  • FIG. 11 is a diagram showing an example of a method for distinguishing a background and an object of a video frame and a measurement frame according to an embodiment of the present invention.
  • FIG. 12 is a diagram showing an example of a grid space according to an embodiment of the present invention.
  • FIG. 13 is a view showing an example of displaying objects classified in a video frame in a grid space according to an embodiment of the present invention.
  • FIG. 14 is a diagram showing an example of a method of correcting a depth value of a pixel of an object according to an embodiment of the present invention.
  • FIG. 15 is a diagram showing an example of a method for generating a 3D virtual object according to an embodiment of the present invention.
  • FIG. 16 is a diagram showing an example of extracting a point cloud for generating a 3D virtual object according to an embodiment of the present invention.
  • FIG. 17 is a diagram showing an example of generating a 3D virtual object according to an embodiment of the present invention.
  • FIG. 18 is a diagram showing the configuration of an example of a virtual object generating device according to an embodiment of the present invention.
  • FIG. 1 is a diagram showing an example of a device capable of preventing safety accidents at industrial sites according to an embodiment of the present invention.
  • The safety accident prevention device 110 determines risk factors based on industrial site photographing data collected from at least one photographing device 100 and provides them to the user terminal 120.
  • the photographing device 100 may be implemented as various devices according to an embodiment.
  • For example, the photographing apparatus 100 may include both a depth camera and LiDAR (Light Detection and Ranging), only a depth camera, only LiDAR, or a general camera together with LiDAR.
  • the photographing device 100 is a device that photographs a real space to create a three-dimensional virtual space (or virtual object).
  • If a 3D virtual space (or virtual object) is created using only a depth camera, it may be distorted due to the distortion of the camera lens, making it difficult to obtain an accurate 3D virtual space that matches real objects without error.
  • Therefore, in this embodiment, a depth camera and LiDAR may be used together to create an accurate virtual space (or virtual object), which will be reviewed again below in FIG. 8.
  • the photographing device 100 may be implemented in a form carried or worn by users located at industrial sites, such as workers or managers at industrial sites.
  • the photographing device 100 may be implemented as a part of a smart phone, augmented reality (AR) glasses, or a head mounted display (HMD).
  • the photographing device 100 may be a device installed at a designated place in an industrial site (for example, a form of filming each industrial site at a fixed location, such as CCTV).
  • both a fixed photographing device and a photographing device possessed or worn by a user may be applied to the present embodiment.
  • Industrial sites to which this embodiment is applied are various, such as construction sites, logistics sites, and factories, and are not limited to specific places.
  • this embodiment uses the term industrial site to help understanding, but the industrial site may correspond to any place where a safety accident may occur.
  • this embodiment can be used for the purpose of preventing safety accidents in general homes or workplaces, and in this case, industrial sites can be interpreted as general homes or workplaces.
  • Hereinafter, for convenience of description, it is assumed that the industrial site is a construction site under construction.
  • the user terminal 120 receives related information for safety accident prevention from the safety accident prevention device 110 .
  • The user terminal may be a terminal of a predetermined manager or the like.
  • the user terminal 120 may be various types of terminals capable of communication, such as a general computer, a tablet PC, and a smart phone.
  • FIG. 2 is a diagram showing an example of a safety accident prevention method according to an embodiment of the present invention.
  • The safety accident prevention device 110 creates a virtual space (or virtual object) 200 by photographing a certain space (e.g., the space of a specific building/unit in an apartment construction site).
  • The safety accident prevention device 110 may create the virtual space (or virtual object) 200 of the certain space using photographing data that includes image frames obtained by photographing with a depth camera and measurement frames obtained by measuring with LiDAR.
  • The safety accident prevention device 110 may create a virtual object for at least one object existing in the space by distinguishing the background and objects in the photographing data.
  • the safety accident prevention device may generate a virtual space including both a background and an object from photographing data.
  • Objects may be diverse, such as water and sewage pipes, walls, windows, various objects (construction equipment or various materials, etc.), animals and plants, people, and holes in floors and walls.
  • For example, if an animal exists in the photographed space, the safety accident prevention apparatus may create a virtual object for the animal.
  • The background and the object in the photographed space are relative concepts: walls in an indoor space can be identified as objects, and the floor in an outdoor space can also be identified as an object. Therefore, hereinafter, a virtual space is defined as a space in which one or more virtual objects exist, and it may or may not include walls, ceilings, floors, etc., which are generally recognized as background, as objects, depending on the embodiment.
  • The safety accident prevention device 110 compares the predefined 3D model 230 of the photographed space with the virtual space 200 created using the photographing data to identify regions where they differ.
  • the 3D model 230 for the real space may be predefined in various forms.
  • the 3D model 230 may be created based on a 3D design drawing (CAD drawing, etc.) of a building.
  • For example, a hole 210 and an object 220 that do not exist in the 3D model 230 may exist in the virtual space 200.
  • Conversely, the facility 240 existing in the 3D model may not exist in the virtual space 200 because it has not yet been installed at the current construction stage. Therefore, not every region in which the virtual space 200 and the 3D model 230 differ is necessarily a risk factor.
  • For example, the 3D model may be created based on a design drawing of an apartment, and the virtual space may be the space of a specific building/unit at the apartment construction stage.
  • The area of the 3D model 230 mapped to the virtual space 200 may be directly designated by a user or may be automatically mapped.
  • the safety accident prevention device 110 may directly receive a user's selection of a region mapped to the virtual space 200 in the 3D model.
  • As another example, the safety accident prevention device 110 may determine which area of the 3D model 230 the virtual space 200 corresponds to, based on GPS (Global Positioning System) information in the photographing data.
  • As another example, the photographing device may receive space identification information assigned to each space from the user, or may tag a tag located on one side of each space, and the photographing data may be stored together with the space identification information of the tag.
  • the safety accident prevention device 110 may identify a corresponding area of the 3D model 230 based on the space identification information stored together with the photographing data and compare it with the virtual space 200 .
  • The safety accident prevention device 110 may provide the regions 210 and 220 in which the 3D model 230 and the virtual space 200 differ to the user terminal so that the user can visually check whether there is a risk factor. However, since the number of spaces to be photographed at a construction site is very large, presenting all the differing regions to the user terminal and having the user classify risk factors individually may be inefficient.
  • the safety accident prevention device 110 may include a method for automatically determining whether or not there is a risk factor, and examples thereof are shown in FIGS. 3 to 5.
  • FIG. 3 is a diagram illustrating an example of a method for identifying risk factors according to an embodiment of the present invention.
  • the safety accident prevention device 110 may predefine risk factors. For example, if the industrial site is a factory that handles combustible materials, a heating device such as a stove or heater may be defined in advance as a risk factor.
  • The safety accident prevention device 110 compares a virtual space created by photographing an industrial site with a predefined 3D model of the industrial site, identifies a difference region, and identifies the types of the virtual objects 300 and 310 existing in the difference region to determine whether each is a predefined risk factor object. If a virtual object is recognized as a predefined risk factor, the safety accident prevention device 110 may inform the user terminal of this in the form of a message or the like.
  • Various conventional artificial intelligence models can be used to identify the types of the virtual objects 300 and 310 in the different areas. Since the artificial intelligence model itself for identifying object types in a 3D image is a well-known technology, a detailed description thereof will be omitted.
  • However, a virtual object 300 or 310 existing in a region in which the 3D model and the virtual space differ may not be a risk factor.
  • a virtual object such as a cat or a person may exist in the virtual space.
  • As another embodiment, the safety accident prevention device 110 may define object types that are not risk factors together with risk factor object types. If a virtual object existing in a region where the 3D model and the virtual space differ appears in the list of objects that are not predefined risk factors (for example, animals such as cats and dogs), the safety accident prevention device 110 may determine that the corresponding virtual object 300 is not a risk factor.
  • FIG. 4 is a diagram illustrating an example of a method of identifying risk factors by identifying virtual objects in a virtual space according to an embodiment of the present invention.
  • The safety accident prevention device 110 inputs a 3D image (i.e., 3D data) of the region 400 in which the virtual space and the 3D model differ into an artificial intelligence model 410, so that the type of object in the corresponding region may be recognized (420).
  • the artificial intelligence model 410 is a model trained to identify a predefined object type.
  • the artificial intelligence model 410 may be implemented with various models such as CNN (Convolutional Neural Network), and various conventional artificial intelligence models that recognize the type of object may be applied to this embodiment. According to an embodiment, when there are many types of objects to be identified, a plurality of artificial intelligence models may be used.
  • the safety accident prevention device 110 determines whether the recognition object 420 is a risk factor using the risk factor determination algorithm 430 (440).
  • the risk factor determination algorithm 430 may include a predefined list of risk factor objects and/or a list of objects that are not risk factors, and determine whether the type of identified object exists in each list.
  • the risk factor determination algorithm 430 may define risk factor determination criteria for each object type. For example, as shown in FIG. 2 , if the recognized object is a worker, the risk factor determination algorithm 430 may be an algorithm that determines whether the worker wears a helmet. Various algorithms for determining the presence or absence of a helmet in a virtual object of a worker displayed as a 3D image may be applied to this embodiment. In addition to this, the risk factor determination algorithm may set various criteria according to embodiments, and is not limited to any one example.
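  • As a rough illustration only (not code from the patent), the list-based checks and per-type criteria described above could be organized as in the following Python sketch; the object types, list contents, and the helmet check are hypothetical placeholders.

```python
# Hypothetical sketch of the risk factor determination algorithm (430):
# list membership checks plus per-object-type criteria. All names and
# rules here are illustrative assumptions, not the patent's actual code.

RISK_FACTOR_TYPES = {"hole", "stove", "heater"}  # predefined risk factor objects
NON_RISK_TYPES = {"cat", "dog"}                  # predefined non-risk objects

def worker_rule(obj):
    # Example per-type criterion: a worker is a risk factor
    # only when no helmet is detected (the detector is assumed).
    return not obj.get("has_helmet", False)

PER_TYPE_RULES = {"worker": worker_rule}

def is_risk_factor(obj):
    """Decide whether a recognized object in a difference region is a risk factor."""
    obj_type = obj["type"]
    if obj_type in NON_RISK_TYPES:
        return False
    if obj_type in RISK_FACTOR_TYPES:
        return True
    rule = PER_TYPE_RULES.get(obj_type)
    if rule is not None:
        return rule(obj)
    return True  # unknown types can be escalated to the user for manual review

print(is_risk_factor({"type": "worker", "has_helmet": False}))  # True
print(is_risk_factor({"type": "cat"}))                          # False
```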
  • FIG. 5 is a diagram illustrating an example of a method for determining whether a risk factor exists using a risk factor processing result list according to an embodiment of the present invention.
  • the safety accident prevention device 110 may include a risk factor processing result list 500 .
  • the risk factor processing result list 500 stored in the database includes previously identified risk factor processing methods and the like.
  • the safety accident prevention device 110 may compare the 3D model and the virtual space and determine both regions 210 and 220 having differences as risk factors.
  • the safety accident prevention device 110 may receive an input of a method for handling risk factors from a user.
  • For example, the processing method for the first risk factor, the hole 210, may be set to hole removal (i.e., hole filling), and the processing method for the second risk factor, the object 220 present on the floor, may be set to moving it to a corner within the same space.
  • Thereafter, the safety accident prevention device 110 may again create a virtual space based on photographing data of the same space and compare it with the 3D model to identify differing regions.
  • The safety accident prevention device 110 determines whether an object in the difference region 510 exists in the risk factor processing result list 500. If the difference region contains the hole 210 and the object 220 as before, the safety accident prevention device 110 presents an object as a risk factor if it has not been handled according to the processing method stored in the risk factor processing result list 500. For example, if the hole 210 previously identified as the first risk factor still exists and has not been handled by the method defined in the risk factor processing result list 500, the safety accident prevention device 110 may inform the user terminal that the first risk factor still exists.
  • On the other hand, if an object has been handled according to the set method, the safety accident prevention device 110 does not consider the object a risk factor. That is, although both the hole and the object exist in the region 510 in which the virtual space and the 3D model differ, since the object has been handled according to its risk factor processing method, the safety accident prevention device 110 may identify and present only the hole as a risk factor.
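  • The check against the risk factor processing result list might be sketched as follows; the list structure, entry fields, and the processed-state check are assumptions made for illustration.

```python
# Hypothetical sketch of the check against the risk factor processing
# result list (500): a difference-region object is reported only if it
# was not handled according to the stored processing method.

processing_result_list = {
    "risk-1": {"object": "hole",   "method": "fill hole"},
    "risk-2": {"object": "object", "method": "move to corner of same space"},
}

def unresolved_risks(difference_objects, results, is_processed):
    """Return the difference-region objects not handled per the stored method."""
    risks = []
    for obj in difference_objects:
        entry = next((r for r in results.values() if r["object"] == obj["type"]), None)
        if entry is None or not is_processed(obj, entry["method"]):
            risks.append(obj)  # still present and unhandled: present as risk factor
    return risks

# Example: the hole was not filled, but the floor object was moved as instructed.
objs = [{"type": "hole"}, {"type": "object", "moved": True}]
handled = lambda obj, method: obj.get("moved", False)
print(unresolved_risks(objs, processing_result_list, handled))  # only the hole
```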
  • FIG. 6 is a flowchart illustrating an example of a safety accident prevention method according to an embodiment of the present invention.
  • The safety accident prevention device 110 creates a virtual space including at least one object based on photographing data obtained using a photographing device (S600).
  • the safety accident prevention device 110 compares a predefined 3D model and a virtual space with respect to the space photographed by the photographing device to identify a different region (S610).
  • The comparison in this embodiment is not a simple comparison of 2D images, but a comparison between a 3D model and a virtual space, both of which are 3D data.
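  • As a minimal sketch of such a 3D comparison, assuming both the 3D model and the virtual space have been voxelized into a common occupancy grid (an implementation choice not specified here), the difference regions can be found per voxel:

```python
import numpy as np

# Minimal sketch: voxelize the 3D model and the captured virtual space
# into the same occupancy grid, then take the symmetric difference.
# The grid resolution and the toy occupancy data are assumptions.

model = np.zeros((64, 64, 64), dtype=bool)    # occupancy of the 3D design model
virtual = np.zeros((64, 64, 64), dtype=bool)  # occupancy of the photographed virtual space

virtual[10:14, 10:14, 0:4] = True  # e.g. an object (220) left on the floor
model[30:34, 30:34, 0:8] = True    # e.g. a facility (240) not yet installed

diff = model ^ virtual        # voxels present in exactly one of the two
regions = np.argwhere(diff)   # candidate difference regions to present
print(len(regions), "differing voxels")
```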
  • the safety accident prevention device 110 may provide the difference area to the user terminal as it is or determine whether the difference area is a risk factor and provide it to the user terminal only when the difference area is a risk factor (S620).
  • FIG. 7 is a diagram showing the configuration of an example of a safety accident prevention device according to an embodiment of the present invention.
  • The safety accident prevention device 110 includes a virtual space creation unit 700, a comparison unit 710, and a risk factor presenting unit 720.
  • the safety accident prevention device 110 may be implemented as a computing device including a memory, a processor, and an input/output device.
  • each component of the present embodiment may be implemented as software, loaded into a memory, and then executed by a processor.
  • the virtual space creation unit 700 creates a virtual space by photographing a real space.
  • the virtual space generator 700 may create a virtual object by using a depth camera and LIDAR together so that a real space can be accurately created as a virtual space without errors, which will be reviewed again below in FIG. 8 .
  • the comparator 710 compares the 3D model of the real space with the virtual space to identify a region where there is a difference between the 3D model and the virtual space.
  • An example of comparison between the virtual space and the 3D model is shown in FIG. 2 .
  • the risk factor presenting unit 720 presents an area where there is a difference between the 3D model and the virtual space.
  • the risk factor presenting unit 720 may determine whether an area in which a difference exists is a risk factor by using a predefined risk factor determination algorithm.
  • the risk factor presenting unit 720 may recognize the type of object appearing in the area where the difference exists using an artificial intelligence model, and determine whether the type of the recognized object is a risk factor.
  • Referring to a database that stores a risk factor processing result list, the risk factor presenting unit 720 determines whether an object in a region in which a difference exists appears in the previously identified risk factor processing result list, and may present the object as a risk factor if it has not been handled according to the processing method stored in the database.
  • FIG. 8 is a diagram showing an example of a schematic configuration of a system for creating objects in a virtual space according to an embodiment of the present invention.
  • a photographing device 800 includes a depth camera 802 and a LIDAR 804 .
  • the depth camera 802 is a camera that captures a certain space in which the object 830 exists and provides a depth value of each pixel together.
  • the object 830 means an object to be created in a virtual space.
  • The object 830 may be diverse, such as various structures of a building (e.g., sewer pipes, columns, walls, etc.), various objects (wardrobe, sink, chair, shoes, etc.), or animals and plants, and is not limited to a specific type.
  • the depth camera 802 itself is a well-known technology, and various types of conventional depth cameras 802 may be used in this embodiment.
  • The depth camera 802 may capture still images or moving images. However, hereinafter, for convenience of description, the case in which a video is captured by the depth camera will mainly be described. Photographing data including a depth value for each pixel, obtained by photographing with the depth camera 802, is called an image frame. That is, a video is composed of a certain number of image frames per second.
  • the LIDAR 804 emits a laser into a certain space, measures a signal returned from each point in the space (ie, a reflection point), and outputs distance values for a plurality of points in the certain space.
  • Data composed of distance values for a plurality of points measured by the LIDAR 804 in a certain space at a certain point in time is called a measurement frame.
  • the resolution of a plurality of points measured by the lidar 804, that is, the measurement frame, may be different depending on the lidar 804.
  • LiDAR 804 itself is already widely known technology, and various conventional lidars 804 may be used in this embodiment.
  • the photographing device 800 simultaneously drives the depth camera 802 and the LIDAR 804 to photograph and measure an object 830 in a certain space.
  • the expression 'photographing' of the photographing device 800 or 'measurement' of the photographing device may be interpreted as photographing by the depth camera 802 and measuring by the LIDAR 804 at the same time.
  • the number of video frames generated per second by the depth camera 802 and the number of measurement frames generated per second by the LIDAR 804 may be the same or different depending on embodiments.
  • the resolution of the video frame and the resolution of the measurement frame may be the same or different depending on the embodiment.
  • Since the depth camera 802 and the LiDAR 804 are simultaneously driven to generate image frames and measurement frames for a certain space, the frames can be mapped to the same time axis and synchronized.
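  • One simple way to realize this synchronization, assuming per-frame timestamps on a shared clock (the frame rates below are illustrative), is to pair each image frame with the measurement frame nearest in time:

```python
# Sketch of pairing each depth-camera image frame with the LiDAR
# measurement frame closest on the shared time axis.

def pair_frames(image_ts, lidar_ts):
    """For each image timestamp, return the index of the closest LiDAR
    measurement frame; both timestamp lists are assumed sorted."""
    pairs, j = [], 0
    for t in image_ts:
        while j + 1 < len(lidar_ts) and abs(lidar_ts[j + 1] - t) <= abs(lidar_ts[j] - t):
            j += 1
        pairs.append((t, j))
    return pairs

image_ts = [0.000, 0.033, 0.066, 0.100]  # e.g. a 30 fps depth camera
lidar_ts = [0.000, 0.050, 0.100]         # e.g. a 20 Hz LiDAR
print(pair_frames(image_ts, lidar_ts))   # [(0.0, 0), (0.033, 1), (0.066, 1), (0.1, 2)]
```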
  • The virtual object generating device 810 uses the image frames obtained by photographing with the depth camera 802 and the measurement frames obtained by measuring with the LiDAR 804 together to create an object in virtual space (that is, a digital twin) for the object 830 in the real world. FIGS. 1 to 7 have been described on the assumption that the virtual object generator 810 is implemented as a part of the safety accident prevention device 110, but the virtual object generator 810 may also be implemented as a device separate from the safety accident prevention device 110.
  • the virtual object generating device 810 may be connected to the photographing device 800 through a wired or wireless communication network (eg, WebRTC, etc.) to receive image frames and measurement frames generated by the photographing device 800 in real time.
  • As another example, after the photographing device 800 has photographed and measured for a certain period of time, the virtual object generating device 810 may receive the stored image frames and measurement frames through a storage medium (e.g., Universal Serial Bus (USB) memory) or by connecting to the photographing device 800 through a wired or wireless communication network (e.g., a local area network) after the photographing is completed. The generated virtual object may then be provided to the user terminal 820 so that it can be checked.
  • the photographing device 800 and the virtual object generating device 810 may be implemented as one device.
  • As another example, the photographing device 800 and the virtual object generating device 810 may be implemented as part of various devices that display augmented reality or virtual reality, such as AR (augmented reality) glasses, an HMD (head mounted display), or a wearable device.
  • When the photographing device 800 is implemented as a part of AR glasses, an HMD, or a wearable device, the photographing device 800 may transmit the image frames and measurement frames photographed and measured in real time to the virtual object generating device 810 through a wired or wireless communication network.
  • The AR glasses, HMD, or other such photographing device may then receive the virtual object from the virtual object generating device 810 and display the virtual object in augmented reality or virtual reality.
  • the user can immediately check the virtual object created in real time through augmented reality or virtual reality.
  • a detailed method of generating an object in the virtual space will be reviewed again below in FIG. 9 .
  • FIG. 9 is a diagram illustrating an example of a photographing method of a photographing device according to an embodiment of the present invention.
  • the photographing device 800 may continuously photograph one object 900 or continuously photograph a plurality of objects 900 and 910 .
  • The photographing device 800 may continuously photograph objects of the same type or different types located in various spaces. That is, at least one object 900 or 910 desired by the user may exist at various points along the time axis in the image frames and measurement frames captured by the photographing device 800.
  • This embodiment shows an example of an image frame for convenience of description.
  • the user may photograph the sewer pipe in space A of the building using the photographing device 800 and move to space B to photograph the sewer pipe.
  • the photographing device may continue to be maintained in a photographing state or may be turned off during movement.
  • In this case, the same type of object, a sewer pipe, exists in the video frames and measurement frames captured in space A and space B.
  • a plurality of objects 900 and 910 may be captured together in an image frame and a measurement frame captured in each space.
  • Object a and object b may exist in the video frame and measurement frame captured in space A
  • object a and object c may exist in the video frame and measurement frame captured in space B.
  • the virtual object generating device 810 may classify the image frame and the measurement frame in units of objects.
  • the virtual object generating apparatus 810 may assign the same identification information (or index) to the video frame and the measurement frame in which the same object exists in the video frame and the measurement frame.
  • For example, first identification information (or a first index; hereinafter referred to as identification information) may be assigned to all of the image frames 920, 922, and 924 in which the first object 900 exists, and second identification information may be assigned to the image frames 930, 932, and 934 in which the second object 910 exists. No identification information may be assigned to the image frames 940 and 942 in which no object exists, or third identification information may be assigned to them.
  • the image frames arranged along the time axis can be divided into three groups: A (950), B (960), and C (970).
  • When a plurality of objects exist in a video frame and a measurement frame, identification information corresponding to each object may be assigned. That is, a plurality of pieces of identification information may be assigned to one image frame and one measurement frame.
  • Since the photographing device 800 simultaneously drives the depth camera 802 and the LiDAR 804, the video frames generated by the depth camera 802 and the measurement frames measured by the LiDAR can be synchronized in time. Therefore, the virtual object generating device 810 may determine whether the same object exists using only the video frames, identify the time period of the video frames in which the same object exists, assign the same identification information to the video frames in that time period, and assign the same identification information to the measurement frames generated during the identified time period by regarding them as frames in which the same object exists.
  • the virtual object generating device 810 may use various conventional image recognition algorithms to determine whether objects existing in image frames are the same.
  • the virtual object generating device 810 may use an artificial intelligence model as an example of an image recognition algorithm. Since the method itself for determining whether objects in an image are the same is a well-known technique, a detailed description thereof will be omitted.
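  • The assignment of identification information might be sketched as follows; detect() stands in for the well-known image recognition step, and all frame names and labels are hypothetical.

```python
# Sketch: frames in which the same object is recognized receive the same
# identification information (index), across spaces and interruptions.

def assign_identifiers(frames, detect):
    """frames: image frames ordered on the time axis.
    detect(frame) -> object label or None. Returns one identifier per
    frame (None where no object is present)."""
    ids, seen, next_id = [], {}, 1
    for frame in frames:
        label = detect(frame)
        if label is None:
            ids.append(None)
            continue
        if label not in seen:  # first time this object is recognized
            seen[label] = next_id
            next_id += 1
        ids.append(seen[label])
    return ids

frames = ["f1", "f2", "f3", "f4", "f5"]
labels = {"f1": "pipe", "f2": "pipe", "f3": None, "f4": "pipe", "f5": "chair"}
print(assign_identifiers(frames, labels.get))  # [1, 1, None, 1, 2]
```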
  • FIG. 10 is a flowchart illustrating an example of a method of generating a virtual space object according to an embodiment of the present invention.
  • the virtual object generating apparatus 810 divides a first background area and a first object area from an image frame obtained by photographing a certain space with a depth camera (S1000).
  • The virtual object generating device 810 may distinguish a background and an object from a plurality of image frames in which the same object is photographed. For example, as shown in FIG. 9, the background and the object may be distinguished in each of the image frames in which the same object exists (e.g., group A of 920, 922, and 924).
  • the virtual object generating apparatus 810 may distinguish the background and the plurality of objects, respectively. An example of a method of distinguishing a background and an object in an image frame will be reviewed again in FIG. 11 .
  • the virtual object generating device 810 distinguishes a second background area and a second object area from a measurement frame obtained by measuring a certain space with lidar (S1010).
  • The virtual object generating device 810 may distinguish a background and an object for a plurality of measurement frames in which the same object is measured, for example, for a plurality of measurement frames to which the same identification information is assigned as described with reference to FIG. 9. As another embodiment, when a plurality of objects exist in the measurement frame, the virtual object generator may distinguish the background and each of the plurality of objects.
  • An example of a method for distinguishing a background and an object in a measurement frame is shown in FIG. 11 .
  • the depth camera 802 and the LIDAR 804 are spaced apart from each other by a predetermined distance within the photographing device 800, and accordingly, the photographing angles of the image frame and the measurement frame are different from each other.
  • Accordingly, the positions of the pixels of the video frame and the points of the measurement frame may not be mapped on a one-to-one (1:1) basis.
  • To address this, this embodiment uses a grid space.
  • The virtual object generator 810 arranges the pixels of the first object area, divided from the image frame, in a first grid space composed of grids of a predefined size according to their depth values (S1020), and likewise arranges the points of the second object area, divided from the measurement frame, in a second grid space composed of grids of the same size according to their distance values (S1020). Since the image frame and the measurement frame are data obtained by photographing the same space, the objects existing in the first object area and the second object area are the same object.
  • the first lattice space and the second lattice space are spaces having grids of the same size in the virtual space. An example of a grid space is shown in FIG. 12 .
  • the virtual object generator 810 corrects the depth value of the pixel in the first grid space based on the distance value of the point in the second grid space (S1030). Since the pixels of the first object area and the points of the second object area exist in the grid space of the same size, if the position, direction, size, etc. of the first grid space and the second grid space match each other, each point of the second grid space and each pixel of the first lattice space may be mapped.
  • a detailed method of correcting a depth value of a pixel in a grid unit of a grid space using a distance value of a point will be reviewed again in FIG. 14 .
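  • As a rough sketch of the arrangement step (S1020), assuming points already expressed as 3D coordinates and a 1 mm unit grid as in one example of FIG. 12, each pixel and each LiDAR point can be binned into a grid-cell index:

```python
import numpy as np

# Sketch: place 3D points (pixels back-projected by their depth values,
# or LiDAR points) into a grid space with a predefined cell size. The
# 1 mm cell and the toy coordinates are illustrative assumptions.

CELL = 0.001  # 1 mm unit grid (d1 = d2 = d3), one example from FIG. 12

def to_grid(points, cell=CELL):
    """Map each 3D point (meters) to the integer index of its grid cell."""
    return np.floor(np.asarray(points) / cell).astype(int)

pixel_pts = np.array([[0.0103, 0.0202, 0.0311]])  # from the image frame
lidar_pts = np.array([[0.0101, 0.0204, 0.0312]])  # from the measurement frame
print(to_grid(pixel_pts))  # both land in grid cell (10, 20, 31), so the
print(to_grid(lidar_pts))  # pixel can be corrected by the LiDAR point
```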
  • the virtual object generating device 810 creates an object (ie, a virtual object) in a virtual space in which surface information exists based on the pixel whose depth value is corrected (S1040).
  • The virtual object generating device 810 corrects the pixel depth values of an object existing in a plurality of image frames to which the same identification information is assigned, using the point distance values of the object present in a plurality of measurement frames to which the same identification information is assigned, and then creates a virtual object using the corrected pixels of the plurality of image frames. That is, a virtual object may be created by correcting the pixel depth values of image frames of an object photographed at various angles and positions.
  • the virtual object generating device 810 may generate a 3D virtual object using various types of 3D modeling algorithms.
  • As an example, the virtual object generating apparatus 810 may generate a 3D object having surface information from pixels having depth values using an artificial intelligence model.
  • As another example, the virtual object generator 810 may extract a point cloud representing corners and vertices from among the pixels constituting an object, and create a virtual object by inputting the point cloud to a 3D modeling algorithm. An example of generating a virtual object using a point cloud will be reviewed again in FIG. 15.
  • A virtual object could also be created based on the distance values of the points of the measurement frame, but in general the resolution of the measurement frame is lower than the resolution of the video frame.
  • Therefore, in this embodiment, a virtual object is created using the corrected depth values of the pixels of the image frame, which has a relatively high resolution.
  • FIG. 11 is a diagram illustrating an example of a method of distinguishing a background and an object of a video frame and a measurement frame according to an embodiment of the present invention.
  • The virtual object generating device 810 may use a first artificial intelligence model 1100 that distinguishes the background of an image frame from an object, and a second artificial intelligence model 1110 that distinguishes the background of a measurement frame from an object.
  • Each of the artificial intelligence models 1100 and 1110 is a trained model using pre-constructed learning data and may be implemented as a Convolutional Neural Network (CNN) or the like. Since the process of learning and generating an artificial intelligence model itself is already a well-known technology, a description thereof will be omitted.
  • The first artificial intelligence model 1100 is a model generated through machine learning to distinguish a background and an object in an image frame when an image frame is input. For example, if the first artificial intelligence model 1100 is trained to recognize a chair, it can distinguish the region where a chair exists in an image frame (i.e., the pixels corresponding to the chair within the image frame).
  • Likewise, the second artificial intelligence model 1110 is a model generated through machine learning to distinguish a background and an object in a measurement frame when a measurement frame is input. For example, if it is trained to recognize a chair, the second artificial intelligence model 1110 can distinguish the region where a chair exists in the measurement frame (i.e., the points corresponding to the chair within the measurement frame).
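  • A minimal sketch of the kind of CNN such models could be built on follows, assuming PyTorch; the architecture, channel counts, and binary object mask are illustrative assumptions, not the trained models 1100 and 1110 themselves.

```python
import torch
import torch.nn as nn

# Toy CNN that separates object pixels from background in an RGB-D
# image frame; a per-point variant would play the role of model 1110.

class TinySegmenter(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel object-vs-background logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # 1 = object, 0 = background

rgbd_frame = torch.rand(1, 4, 64, 64)      # RGB + depth image frame
mask = TinySegmenter(4)(rgbd_frame) > 0.5  # boolean object mask
print(mask.shape)                          # torch.Size([1, 1, 64, 64])
```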
  • FIG. 12 is a diagram showing an example of a lattice space according to an embodiment of the present invention.
  • the grid space 1200 is a space in which a region in the virtual space is divided into unit grids 1210 of a predetermined size.
  • the grid space 1200 may be a space composed of unit cells 1210 having width, length, and height of d1, d2, and d3, respectively.
  • d1, d2, and d3 may all have the same size (eg, 1 mm) or different sizes.
  • FIG. 13 is a diagram illustrating an example of displaying objects classified in a video frame in a grid space according to an embodiment of the present invention.
  • the virtual object generating apparatus 810 may display pixels of an object area divided in an image frame in the grid space 1300 using the depth values.
  • pixels 1310 are schematically illustrated for ease of understanding.
  • the virtual object generating apparatus 810 may determine the 3D coordinate value (or vector value of the pixel, etc.) of each pixel in the grid space by mapping the pixels of the object in the image frame to the grid space 1300 . That is, a 3D coordinate value of each pixel of the object may be generated using a predefined point in the lattice space as a reference point (0,0,0 or X,Y,Z).
  • the virtual object generating apparatus 810 may map points representing objects in the measurement frame to the lattice space. If the point of the object in the measurement frame is displayed in the grid space, it can also be displayed in a shape similar to that of FIG. 13.
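  • The conversion from an image-frame pixel with a depth value to a 3D coordinate in the grid space can be sketched with standard pinhole back-projection; the camera intrinsics below are hypothetical values, not parameters given in this description.

```python
import numpy as np

# Sketch: back-project pixel (u, v) with its depth value into a 3D
# coordinate relative to the grid-space reference point.

fx = fy = 525.0        # focal lengths in pixels (assumed intrinsics)
cx, cy = 320.0, 240.0  # principal point (assumed intrinsics)

def pixel_to_3d(u, v, depth):
    """Standard pinhole back-projection of pixel (u, v) at the given depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

print(pixel_to_3d(400, 300, 2.0))  # 3D coordinate of one object pixel
```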
  • FIG. 14 is a diagram illustrating an example of a method of correcting a depth value of a pixel of an object according to an embodiment of the present invention.
  • FIG. 14 shows one first grid 1400 in the first grid space to which the object of the image frame is mapped, and the second grid 1410 at the corresponding position in the second grid space to which the object of the measurement frame is mapped.
  • the first lattice space in which the pixels of the object in the image frame are mapped and the second lattice space in which the points of the object in the measurement frame are mapped are first matched in position, direction, size, etc.
  • For convenience of explanation, this embodiment shows an example in which one pixel 1402 and one point 1412 are present in the first grid 1400 and the second grid 1410, respectively, but one grid 1400 or 1410 may contain a plurality of pixels or a plurality of points. Also, the numbers of pixels and points present in the first grid 1400 and the second grid 1410 may be the same or different.
  • The virtual object generator 810 corrects the depth value of the pixel 1402 of the first grid 1400 based on the distance value of the point 1412 of the second grid 1410. Since the distance value of a point measured by LiDAR is more accurate, the virtual object generator 810 corrects the depth value of the pixel based on the distance value of the point. For example, if the coordinate value of the pixel 1402 of the first grid 1400 in the grid space and the coordinate value of the point of the second grid 1410 differ from each other, the virtual object generator 810 corrects (1420) the pixel 1402 of the first grid 1400 according to the coordinate value of the point of the second grid 1410.
  • The positions indicated by the pixels of the first grid 1400 and the points of the second grid 1410 may not map one-to-one. Therefore, using a plurality of points that exist in the second grid 1410, or points that exist in the surrounding grids above, below, to the left, and to the right of the second grid 1410, the values between the points can be interpolated to determine the distance value at the coordinate corresponding to the pixel. The coordinate value of the pixel of the first grid 1400 may then be corrected using the distance value generated through interpolation.
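  • One conventional choice for this interpolation is inverse-distance weighting over the LiDAR points in the same and surrounding grids, sketched below; the text only requires interpolation in general, so the weighting scheme is an assumption.

```python
import numpy as np

# Sketch: when no LiDAR point maps one-to-one to a pixel, interpolate
# the distance value at the pixel's coordinate from nearby points by
# inverse-distance weighting (IDW).

def idw_distance(pixel_xy, lidar_points, power=2, eps=1e-9):
    """lidar_points: (N, 3) rows of (x, y, distance) from the same and
    surrounding grids. Returns the interpolated distance at pixel_xy,
    which then replaces the pixel's depth value."""
    pts = np.asarray(lidar_points, dtype=float)
    d = np.linalg.norm(pts[:, :2] - np.asarray(pixel_xy, dtype=float), axis=1)
    w = 1.0 / (d ** power + eps)
    return float(np.sum(w * pts[:, 2]) / np.sum(w))

neighbors = [(0.9, 1.0, 2.02), (1.1, 1.0, 2.04), (1.0, 1.2, 2.00)]
print(idw_distance((1.0, 1.05), neighbors))  # ~2.02, replacing the raw depth
```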
  • FIG. 15 is a diagram illustrating an example of a method of generating a 3D virtual object according to an embodiment of the present invention.
  • the virtual object generating apparatus 810 may generate a 3D virtual object 1520 including surface information by inputting a point cloud 1510 to a 3D modeling algorithm 1500 .
  • a point cloud can be composed of points representing key points that can define an object, such as vertices and edges of an object.
  • Various conventional methods of extracting a point cloud from an image frame captured by a depth camera may be applied to this embodiment. Since the method of extracting a point cloud itself is a well-known technique, an additional description thereof will be omitted.
  • The virtual object generating apparatus 810 maps the object extracted from the image frame to the grid space and corrects the distance value (or coordinate value) of each pixel mapped to the grid space in the manner shown in FIG. 14, and then extracts the point cloud to be used for generating the 3D virtual object from the corrected distance values (or coordinate values) of the pixels. An example of extracting a point cloud for the object of FIG. 13 is shown in FIG. 16.
  • The virtual object generating device 810 may use an artificial intelligence model based on machine learning as the 3D modeling algorithm 1500.
  • Various conventional algorithms for generating a 3D object based on a point cloud may be applied to this embodiment. Since the method of generating a 3D object using a point cloud itself is a well-known technology, a detailed description thereof will be omitted.
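  • As one hedged example of such a conventional algorithm, assuming the Open3D library is available, Poisson surface reconstruction turns a point cloud into a surface mesh:

```python
import numpy as np
import open3d as o3d  # one conventional 3D modeling library (an assumed choice)

# Sketch: build a surface mesh (the 3D virtual object) from a corrected
# point cloud via Poisson reconstruction; parameters are illustrative.

points = np.random.rand(2000, 3)  # stand-in for the extracted point cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.estimate_normals()            # Poisson reconstruction requires normals

mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
print(len(mesh.vertices), "vertices in the reconstructed virtual object")
```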
  • FIG. 16 is a diagram illustrating an example of extracting a point cloud for generating a 3D virtual object according to an embodiment of the present invention.
  • an example of extracting a point cloud 1610 for a chair is shown.
  • Various conventional methods of extracting a point cloud may be applied to this embodiment.
  • As another embodiment, a 3D virtual object may be generated by extracting a point cloud after correcting the depth values of the pixels in a plurality of image frames to which the same identification information is assigned, that is, a plurality of image frames in which the same object is photographed.
  • As shown in FIG. 17, the virtual object generator may create a virtual object 1700 including surface information using the point cloud.
  • FIG. 18 is a diagram showing the configuration of an example of a virtual object generating device according to an embodiment of the present invention.
  • the virtual object generating device 810 includes a first object extraction unit 1800, a second object extraction unit 1810, a first grid arrangement unit 1820, a second grid arrangement unit 1830, A correction unit 1840 and an object creation unit 1850 are included.
  • The virtual object generator 810 may be implemented as a computing device including a memory, a processor, and an input/output device, or its functions may be performed by a server, a cloud system, or the like.
  • the first object extraction unit 1800 distinguishes a first background area and a first object area from an image frame obtained by photographing a certain space with a depth camera.
  • the second object extraction unit 1810 distinguishes a second background area and a second object area from a measurement frame obtained by measuring the predetermined space with LIDAR. Classification of the background and the object can be performed using an artificial intelligence model, and an example thereof is shown in FIG. 11 .
  • the first grid arrangement unit 1820 arranges pixels of the first object area according to depth values in a first grid space including a grid having a predefined size.
  • the second grid arranging unit 1830 arranges points of the second object area according to distance values in a second grid space including a grid having a predefined size.
  • An example of a grid space is shown in FIG. 12, and an example of mapping pixels of an object extracted from an image frame to the grid space is shown in FIG. 13.
  • the correction unit 1840 corrects the depth value of the pixel in the first grid space based on the distance value of the point in the second grid space.
  • An example of a method of correcting through grid space comparison is shown in FIG. 14 .
  • the object generator 1850 creates a virtual object having surface information based on pixels whose depth values are corrected.
  • The object generator 1850 may create an object in the 3D virtual space using all of the pixels. However, since this increases the amount of computation, the object generator 1850 may instead create a virtual object by generating a point cloud, examples of which are shown in FIGS. 15 to 17.
  • Each embodiment of the present invention can also be implemented as computer readable codes on a computer readable recording medium.
  • a computer-readable recording medium includes all types of recording devices in which data that can be read by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, SSD, and optical data storage devices.
  • the computer-readable recording medium may be distributed to computer systems connected through a network to store and execute computer-readable codes in a distributed manner.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a safety accident prevention method and an apparatus therefor. The safety accident prevention method creates a virtual space by photographing an image of a real space, compares the virtual space with a 3D model of the real space to determine the difference region between them, and outputs the difference region as a risk factor.
PCT/KR2021/017342 2021-11-24 2021-11-24 Safety accident prevention method and apparatus therefor WO2023095938A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2021/017342 WO2023095938A1 (fr) 2021-11-24 2021-11-24 Safety accident prevention method and apparatus therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2021/017342 WO2023095938A1 (fr) 2021-11-24 2021-11-24 Safety accident prevention method and apparatus therefor

Publications (1)

Publication Number Publication Date
WO2023095938A1 (fr)

Family

ID=86539803

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/017342 WO2023095938A1 (fr) 2021-11-24 2021-11-24 Safety accident prevention method and apparatus therefor

Country Status (1)

Country Link
WO (1) WO2023095938A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180012125A1 (en) * 2016-07-09 2018-01-11 Doxel, Inc. Monitoring construction of a structure
KR101988356B1 (ko) * 2018-03-30 2019-09-30 (주)대우건설 공사현장의 3차원 디지털화 및 가상 건설영상 분석을 통한 인공지능 스마트 현장관리 시스템
KR20200109948A (ko) * 2019-03-15 2020-09-23 농업법인회사 (유) 로하스 드론을 이용한 건설현장 공정관리 시스템 및 그를 이용한 건설현장 공정관리 방법
KR20210115121A (ko) * 2020-03-12 2021-09-27 주식회사 비전21테크 딥러닝기반 재난 안전 건축물 사용자 맞춤형 3차원 모델링 데이터셋 구축방법
KR102244978B1 (ko) * 2020-12-23 2021-04-28 주식회사 케이씨씨건설 작업 현장의 위험성을 판단하는 인공지능 모델의 학습 방법, 장치 및 컴퓨터프로그램

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21965726

Country of ref document: EP

Kind code of ref document: A1