WO2023095939A1 - Safety enhancement method for industrial equipment, and device thereof - Google Patents

Safety enhancement method for industrial equipment, and device thereof Download PDF

Info

Publication number
WO2023095939A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
area
virtual
industrial equipment
lidar
Prior art date
Application number
PCT/KR2021/017343
Other languages
French (fr)
Korean (ko)
Inventor
심용수
심상우
Original Assignee
심용수
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 심용수 filed Critical 심용수
Priority to PCT/KR2021/017343 priority Critical patent/WO2023095939A1/en
Publication of WO2023095939A1 publication Critical patent/WO2023095939A1/en

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • An embodiment of the present invention relates to a method and apparatus for enhancing the safety of various industrial equipment such as trucks, heavy equipment, and construction equipment, and more particularly, to a safety enhancement method and device for preventing safety accidents that may occur in the blind spots of industrial equipment.
  • The technical problem to be solved by the embodiment of the present invention is to provide a safety enhancement method for industrial equipment, and a device thereof, that create an object in a blind spot as a virtual object and present it in virtual reality or augmented reality, so that workers can easily identify objects located around the industrial equipment and safety is enhanced.
  • An example of a method for enhancing the safety of industrial equipment according to an embodiment of the present invention for achieving the above technical problem includes: creating, using at least one photographing device installed at a blind spot of the industrial equipment, a virtual object for at least one object present in the blind spot; and displaying the virtual object in augmented reality or virtual reality.
  • An example of a safety enhancement device according to an embodiment of the present invention for achieving the above technical problem includes: a virtual object creation unit that creates, using at least one photographing device installed at a blind spot of industrial equipment, a virtual object for at least one object present in the blind spot; and a display unit that displays the virtual object in augmented reality or virtual reality.
  • According to an embodiment of the present invention, a worker can accurately and easily identify unknown objects located around industrial equipment through virtualized objects, thereby preventing safety accidents.
  • As another example, an object can be monitored from the moment it enters the danger zone of the industrial equipment until the moment it leaves for the safe zone, which helps prevent safety accidents.
  • FIG. 1 is a diagram showing an example in which a photographing device for generating a virtual object is installed in industrial equipment according to an embodiment of the present invention
  • FIG. 2 is a diagram showing an example of a safety accident prevention method for industrial equipment according to an embodiment of the present invention
  • FIG. 3 is a view showing an example in which objects around industrial equipment are created as virtual objects and displayed in a virtual space according to an embodiment of the present invention
  • FIG. 4 is a diagram showing an example of an overall system for enhancing the safety of industrial equipment according to an embodiment of the present invention
  • FIG. 5 is a diagram showing an example of a safety reinforcement method using object recognition according to an embodiment of the present invention.
  • FIG. 6 is a diagram showing another example of a safety reinforcement method using object recognition according to an embodiment of the present invention.
  • FIG. 7 is a flowchart showing an example of a method for enhancing safety of industrial equipment according to an embodiment of the present invention.
  • FIG. 8 is a view showing the configuration of an example of a safety reinforcement device according to an embodiment of the present invention.
  • FIG. 9 is a diagram showing an example of a schematic configuration of a system for creating objects in a virtual space according to an embodiment of the present invention.
  • FIG. 10 is a view showing an example of a photographing method of a photographing device according to an embodiment of the present invention.
  • FIG. 11 is a flowchart illustrating an example of a method for generating an object in a virtual space according to an embodiment of the present invention
  • FIG. 12 is a diagram showing an example of a method for distinguishing a background and an object of a video frame and a measurement frame according to an embodiment of the present invention
  • FIG. 13 is a diagram showing an example of a lattice space according to an embodiment of the present invention.
  • FIG. 14 is a diagram showing an example of displaying objects classified in a video frame in a grid space according to an embodiment of the present invention.
  • FIG. 15 is a diagram showing an example of a method of correcting a depth value of a pixel of an object according to an embodiment of the present invention.
  • FIG. 16 is a diagram showing an example of a method for generating a 3D virtual object according to an embodiment of the present invention.
  • FIG. 17 is a diagram showing an example of extracting a point cloud for generating a 3D virtual object according to an embodiment of the present invention.
  • FIG. 18 is a diagram showing an example of creating a 3D virtual object according to an embodiment of the present invention.
  • FIG. 19 is a diagram showing the configuration of an example of a virtual object generating device according to an embodiment of the present invention.
  • FIG. 1 is a diagram showing an example in which a photographing device for generating a virtual object is installed in industrial equipment according to an embodiment of the present invention.
  • the truck 100 includes at least one photographing device 110 , 112 , and 114 for photographing nearby objects.
  • the photographing devices 110 , 112 , and 114 may be present in the rear and front side portions of the truck, which are blind spots that the driver cannot directly see with the naked eye.
  • the photographing devices 110 , 112 , and 114 may be arranged to photograph the entire periphery of industrial equipment as well as blind spots.
  • Although the present embodiment presents the truck 100 as an example of industrial equipment for better understanding, this is only one example, and the industrial equipment may vary, such as heavy equipment or construction equipment.
  • For example, various machines used in construction, transportation, factories, and the like, such as forklifts, cranes, and excavators, may correspond to the industrial equipment of the present embodiment.
  • Hereinafter, for convenience of description, the truck 100 will be mainly described as an example of industrial equipment.
  • the photographing devices 110 , 112 , and 114 include a depth camera for obtaining photographic data for generating a virtual object, not a general camera attached to a vehicle or the like.
  • a virtual object can be created with data photographed using a depth camera, but in this case, there is a problem in that the shape of the virtual object is also distorted due to lens distortion of the camera.
  • the photographing devices 110 , 112 , and 114 may include a LiDAR (Light Detection and Ranging) together with a depth camera.
  • Although the present embodiment does not exclude photographing devices 110, 112, and 114 composed of only a depth camera, for convenience of description, the case in which the photographing devices 110, 112, and 114 include a lidar together with a depth camera is described below.
  • By using a depth camera with a wide angle of view, a small number of photographing devices 110, 112, and 114 can seamlessly capture the surrounding area, including the blind spots, of the industrial equipment.
  • However, when a depth camera with a wide angle of view is used, more lens distortion occurs than with a depth camera with a narrow angle of view.
  • When a lidar is included together, the distortion of the depth camera is corrected with the lidar data before the virtual object is created, so even a depth camera with severe lens distortion can be used.
  • A detailed method of generating an accurate virtual object (for example, a digital twin) of an object by using the depth camera and the lidar together is described below with reference to FIG. 9 and subsequent figures.
  • FIG. 2 is a diagram showing an example of a method for preventing safety accidents of industrial equipment according to an embodiment of the present invention.
  • an object 210 may exist around the truck 200 .
  • The object 210 may be any of various things, such as objects, facilities, or animals, that the truck may collide with as it moves.
  • Not all cases in which the object 210 exists around the truck 200 are safety accident risk situations. For example, when the truck 200 goes straight on the road, vehicles in other lanes or people on the sidewalk are not at risk of a safety accident. Simply notifying the driver that an object 210 exists around the truck 200, treating it as a safety accident risk situation, may rather interfere with driving.
  • the safety enhancement device determines whether a surrounding object 210 is in a dangerous situation according to the moving direction of the truck 200 . For example, when the truck 200 turns right, it may be determined that an object 210 located within a certain area 220 on the right side of the truck 200 is a safety accident risk factor.
  • The safety enhancement device may predefine, according to the type of industrial equipment, the risk areas that apply while the equipment moves. For example, if the industrial equipment is a truck, the danger area when the truck goes straight is a certain area ahead in the truck's direction of travel, and when the truck turns right or left, the danger areas can be predefined as certain areas on the right and left sides of the truck. In the case of an excavator, everything within a certain radius around the excavator can be defined as a risk area. Since the movement characteristics of industrial equipment all differ, the safety enhancement device can register and store the per-movement risk areas of each piece of industrial equipment in advance.
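  • A minimal sketch of how such movement-dependent risk areas might be registered and looked up follows; the equipment types, movement states, and area geometries are illustrative assumptions, not values taken from this disclosure.

```python
# Hypothetical registry of movement-dependent risk areas, keyed by
# equipment type and movement state. Areas here are axis-aligned
# rectangles (x_min, y_min, x_max, y_max) in the equipment's local
# frame; a real system could store arbitrary polygons instead.
RISK_AREAS = {
    ("truck", "straight"):   (-1.5, 0.0, 1.5, 20.0),   # ahead of the cab
    ("truck", "right_turn"): (0.0, -2.0, 4.0, 10.0),   # right side
    ("truck", "left_turn"):  (-4.0, -2.0, 0.0, 10.0),  # left side
}

# An excavator is dangerous within a fixed radius regardless of movement.
RISK_RADII = {"excavator": 8.0}

def in_risk_area(equipment: str, movement: str, obj_x: float, obj_y: float) -> bool:
    """Return True if an object at (obj_x, obj_y), in the equipment's
    local coordinate frame, falls inside the predefined risk area."""
    if equipment in RISK_RADII:
        return (obj_x ** 2 + obj_y ** 2) ** 0.5 <= RISK_RADII[equipment]
    area = RISK_AREAS.get((equipment, movement))
    if area is None:
        return False
    x_min, y_min, x_max, y_max = area
    return x_min <= obj_x <= x_max and y_min <= obj_y <= y_max
```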
  • The safety enhancement device receives movement information of the industrial equipment (for example, angle information of a truck steering device) directly from each piece of industrial equipment, or can grasp the moving direction of the industrial equipment through a GPS or direction sensor installed in the industrial equipment, the photographing device, or the safety enhancement device.
  • the safety reinforcement device may analyze the image frame captured by the photographing device to determine the moving direction or speed of the industrial equipment.
  • various conventional methods for determining the moving direction or speed of industrial equipment may be applied to this embodiment, and are not limited to any one example.
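  • As one conventional example of estimating ego-motion from images, dense optical flow between successive frames gives a coarse direction estimate; the sketch below uses OpenCV's Farneback flow and is an illustrative assumption, not the specific method of this embodiment.

```python
import cv2
import numpy as np

def estimate_ego_motion(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Estimate the dominant image motion between two grayscale frames.
    The median flow vector approximates the apparent background motion,
    which is opposite to the equipment's own movement."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    dx = float(np.median(flow[..., 0]))
    dy = float(np.median(flow[..., 1]))
    # The camera (and thus the equipment) moves against the background flow.
    return -dx, -dy
```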
  • FIG. 3 is a diagram illustrating an example in which objects around industrial equipment are created as virtual objects and displayed in a virtual space according to an embodiment of the present invention.
  • Referring to FIG. 3, the safety enhancement device determines that an object 210 exists in the predefined danger area 220 when the truck 200 turns right, judges this to be a safety accident risk situation, and creates the object in the real space as a virtual object.
  • At this time, the safety enhancement device does not display the objects around the truck as a two-dimensional image, but displays them in real time as a three-dimensional image, as shown in FIG. 3, so that the driver can check the situation around the truck.
  • The safety enhancement device can display the truck image and the virtual objects at various angles desired by the driver on various display devices, such as a head-up display (HUD) or augmented reality (AR) glasses.
  • FIG. 4 is a diagram showing an example of an overall system for enhancing the safety of industrial equipment according to an embodiment of the present invention.
  • the system for enhancing safety of industrial equipment includes a photographing device 400 , a safety enhancement device 410 and a display device 420 .
  • the photographing device 400 is a device 110 , 112 , and 114 for photographing the surroundings including blind spots of industrial equipment, and may include a depth camera and lidar. Depending on the type of industrial equipment, the photographing device may be located in various places of the industrial equipment.
  • the safety enhancement device 410 uses the photographing data of the photographing device 400 to create virtual objects for objects located around industrial equipment. And, the display device 420 displays the virtual object in augmented reality or virtual reality.
  • the display device 420 may be various according to embodiments such as a HUD, AR glasses, or a general display device.
  • FIG. 5 is a diagram showing an example of a method for enhancing safety using object recognition according to an embodiment of the present invention.
  • The safety enhancement device 410 generates a visual or auditory alarm so that the user knows when an object exists in the risk area of the industrial equipment, and creates the surrounding object as a virtual object and displays it in augmented reality or virtual reality.
  • If an object exists in the dangerous area of industrial equipment, there is a risk of a safety accident. However, if the object is, for example, a tree branch blown by the wind or a piece of garbage, it has nothing to do with safety accidents. If the safety enhancement device 410 generated an alarm for the user whenever such an object completely unrelated to safety accidents was located around the industrial equipment, this would rather interfere with the work.
  • the safety enhancement device 410 may recognize the type of object located in the danger area to determine whether there is an actual risk of safety accident.
  • An artificial intelligence model 500 may be used to determine the type of object.
  • The safety enhancement device 410 can recognize the object type (520) by inputting a virtual object 510 (that is, a 3D image) generated from an object located in the danger area into the artificial intelligence model 500.
  • the artificial intelligence model 500 is a model learned to recognize object types based on 3D images. Since the artificial intelligence model itself for object recognition is already a well-known technology, a detailed description thereof will be omitted.
  • Using a risk-factor object list or a non-risk-factor object list, the safety enhancement device 410 can determine whether the object type identified through the artificial intelligence model 500 corresponds to a risk factor. For example, if leaves and garbage are defined in the non-risk-factor object list and the type of the virtual object identified through the artificial intelligence model 500 is a leaf, the safety enhancement device 410 determines that the object is not a safety accident factor and may not generate an alarm for the user.
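  • A sketch of this alarm-gating logic follows; `classify_virtual_object` stands in for the trained 3D recognition model 500, and the list contents are illustrative assumptions.

```python
# Object types assumed NOT to be safety-accident factors (illustrative).
NON_RISK_TYPES = {"leaf", "branch", "litter"}

def should_alarm(virtual_object, classify_virtual_object) -> bool:
    """Run the (assumed) trained 3D recognition model on the generated
    virtual object and suppress the alarm for known non-risk types."""
    object_type = classify_virtual_object(virtual_object)  # e.g. "person"
    return object_type not in NON_RISK_TYPES
```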
  • FIG. 6 is a diagram showing another example of a method for enhancing safety using object recognition according to an embodiment of the present invention.
  • The safety enhancement device 410 may recognize an object suddenly approaching the industrial equipment and notify the user. For example, while a truck is being driven, a person on a kickboard may suddenly cut in front of the truck, or a person may suddenly enter the working radius of a forklift. Since it may be difficult for operators of industrial equipment to respond to such suddenly appearing objects, it is necessary to identify and announce them in advance.
  • To this end, the safety enhancement device 410 may set the measurement range of the lidar of the photographing device and the capture range of the depth camera to be different from each other. That is, the radius of the first area 620 measured by the lidar around the industrial equipment 600 may be larger than that of the second area 610 captured by the depth camera. In this embodiment, circular areas 610 and 620 are shown around the industrial equipment 600 for better understanding; however, if the industrial equipment is the truck 100 as shown in FIG. 1, the first area 620 and the second area 610 may be triangular areas.
  • In other words, the shapes of the first area 620 and the second area 610 may vary depending on the shape of the industrial equipment and the installation locations of the photographing devices, but the size of the first area 620 is made larger than the size of the second area 610.
  • the safety enhancement device 410 notifies the user in a visual or auditory manner when the object 630 enters the first region 620 predefined based on the measurement data measured by the lidar.
  • the safety enhancement device 410 may display a color or image indicating attention on a display device such as a HUD or output a predefined sound.
  • When the object 630 enters the second area 610, the safety enhancement device 410 creates a virtual object based on the photographing data captured and measured by the depth camera and the lidar, and can display it in augmented reality or virtual reality for the user. Accordingly, the user can already be aware that an object is approaching the industrial equipment before the virtual object is displayed.
  • If the object leaves the first area 620 without entering the second area 610, the safety enhancement device 410 can release the caution state without creating a virtual object. That is, the safety enhancement device 410 can notify the user of a safety accident risk situation while tracking the object from the time it enters the danger zone of the first area 620 until it leaves it.
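  • Read together, the two areas form a small state machine: entering the lidar-only first area raises a caution, entering the depth-camera second area triggers virtual-object creation, and leaving the first area releases the caution. A minimal sketch under those assumptions (the radii are illustrative):

```python
from math import hypot

LIDAR_RADIUS = 20.0   # first area 620, measured by lidar (assumed value)
CAMERA_RADIUS = 10.0  # second area 610, covered by the depth camera

class ZoneMonitor:
    """Track one object through the caution -> virtual-object life cycle."""
    def __init__(self):
        self.state = "clear"   # clear -> caution -> modeled

    def update(self, x: float, y: float) -> str:
        d = hypot(x, y)  # distance from the equipment, local frame
        if d > LIDAR_RADIUS:
            self.state = "clear"            # left the danger zone: release
        elif d > CAMERA_RADIUS:
            if self.state == "clear":
                self.state = "caution"      # lidar-only detection: alert user
        else:
            self.state = "modeled"          # in camera range: build virtual object
        return self.state
```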
  • FIG. 7 is a flowchart illustrating an example of a method for enhancing safety of industrial equipment according to an embodiment of the present invention.
  • the safety enhancement device 410 captures the surroundings of industrial equipment with a photographing device to create virtual objects for objects located in the surroundings (S700).
  • Then, the safety enhancement device 410 displays the virtual object in augmented reality or virtual reality (S710).
  • the safety enhancement device 410 may generate and display a virtual object for the object in real time when the object is located in a predefined risk area of industrial equipment.
  • As another embodiment, the safety enhancement device 410 recognizes the type of the virtual object located around the industrial equipment through an artificial intelligence model as shown in FIG. 5, determines whether the object type corresponds to a predefined safety accident factor, and provides a visual or auditory alarm to the user only when it does.
  • FIG. 8 is a diagram showing the configuration of an example of a safety reinforcement device according to an embodiment of the present invention.
  • the safety enhancement device 410 includes a virtual object generator 800 and a display unit 810 .
  • the safety enhancement device 410 may be implemented as a computing device including a memory, a processor, and an input/output device.
  • each configuration may be implemented as software, loaded into a memory, and then executed by a processor.
  • the virtual object creation unit 800 creates a virtual object for at least one or more objects existing in the blind spot of industrial equipment using at least one or more photographing devices installed in the blind spot.
  • the photographing device may include a depth camera and lidar.
  • The virtual object generator 800 corrects the depth values of the pixels of the area corresponding to the object in the image frame obtained with the depth camera, using the distance values of the points of the measurement frame obtained with the lidar, and can create a virtual object based on the corrected pixel depth values.
  • A method of generating a virtual object using the depth camera and the lidar is described below with reference to FIG. 9 and subsequent figures.
  • the display unit 810 displays virtual objects in augmented reality or virtual reality.
  • the display unit 810 may generate an alarm by recognizing the type of the virtual object using the previously learned artificial intelligence model and determining whether the type of the virtual object is a safety accident factor.
  • As another embodiment, when the display unit 810 identifies, through the measurement frames measured by the lidar, that a moving object exists in the first area, which is the measurement area of the lidar, it can indicate the existence of the moving object visually or by sound, and when the moving object enters the second area captured by the depth camera, it may create and display a virtual object or output an alarm such as a warning.
  • FIG. 9 is a diagram showing an example of a schematic configuration of a system for creating objects in a virtual space according to an embodiment of the present invention.
  • a photographing device 900 includes a depth camera 902 and a LIDAR 904 .
  • the depth camera 902 is a camera that captures a certain space in which the object 930 exists and provides a depth value of each pixel together.
  • the object 930 means an object to be created in a virtual space.
  • the depth camera 902 itself is a well-known technology, and various types of conventional depth cameras 902 may be used in this embodiment.
  • the depth camera 902 may capture a still image or a video.
  • Photographic data including a depth value for each pixel, obtained by the depth camera 902, is called an image frame. That is, a video is composed of a certain number of image frames per second.
  • the LIDAR 904 emits a laser into a certain space, measures signals returned from each point in the space (ie, a reflection point), and outputs distance values for a plurality of points in the certain space.
  • Data composed of distance values for a plurality of points measured by the LIDAR 904 in a certain space at a certain point in time is called a measurement frame.
  • the resolution of a plurality of points measured by the lidar 904, that is, the measurement frame, may be different depending on the lidar 904.
  • LiDAR 904 itself is already widely known technology, and various conventional lidars 904 may be used in this embodiment.
  • the photographing device 900 simultaneously drives the depth camera 902 and the LIDAR 904 to photograph and measure an object 930 in a certain space.
  • Hereinafter, the expression 'photographing' or 'measurement' of the photographing device 900 may be interpreted as the depth camera 902 photographing and the LIDAR 904 measuring simultaneously.
  • the number of video frames generated per second by the depth camera 902 and the number of measurement frames generated per second by the LIDAR 904 may be the same or different depending on embodiments.
  • the resolution of the video frame and the resolution of the measurement frame may be the same or different depending on the embodiment.
  • the depth camera 902 and the lidar 904 are simultaneously driven to generate an image frame and a measurement frame for a certain space, they can be mapped to the same time axis and synchronized.
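  • Because both sensors run at the same time, frames can be paired on a shared clock even when their rates or resolutions differ. A sketch that pairs each image frame with the nearest-in-time measurement frame, assuming each frame carries a timestamp (the tolerance value is an assumption):

```python
def pair_frames(image_frames, measurement_frames, tolerance=0.05):
    """Pair each (timestamp, data) image frame with the nearest lidar
    measurement frame on the shared time axis; both lists must be
    sorted by timestamp. Frames with no partner within `tolerance`
    seconds are dropped."""
    if not measurement_frames:
        return []
    pairs = []
    j = 0
    for t_img, img in image_frames:
        # Advance while the next measurement frame is at least as close.
        while (j + 1 < len(measurement_frames)
               and abs(measurement_frames[j + 1][0] - t_img)
                   <= abs(measurement_frames[j][0] - t_img)):
            j += 1
        t_meas, meas = measurement_frames[j]
        if abs(t_meas - t_img) <= tolerance:
            pairs.append((img, meas))
    return pairs
```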
  • The virtual object generating device 910 uses the image frame obtained with the depth camera 902 and the measurement frame obtained with the LIDAR 904 together to create an object in the virtual space (that is, a digital twin) for the object 930 in the real world. FIGS. 1 to 8 have been described on the assumption that the virtual object generating device 910 is implemented as a part of the safety enhancement device 410, but the virtual object generating device 910 can also be implemented as a device separate from the safety enhancement device 410.
  • the virtual object generating device 910 may be connected to the photographing device 900 through a wired or wireless communication network (eg, WebRTC, etc.) and receive real-time video frames and measurement frames generated by the photographing device 900 .
  • the virtual object generating device 910 may provide the result of object creation in the virtual space to the user terminal 920 so that the user can check it.
  • the photographing device 900 and the virtual object generating device 910 may be implemented as one device.
  • As another embodiment, the photographing device 900 and the virtual object generating device 910 may be implemented as part of various devices that display augmented reality or virtual reality, such as AR (augmented reality) glasses, an HMD (Head Mounted Display), or a wearable device.
  • As another embodiment, the photographing device 900 may be implemented as a part of AR glasses, an HMD, or a wearable device; in that case, the photographing device 900 transmits the image frames and measurement frames photographed and measured in real time to the virtual object generating device 910 through a wired or wireless communication network, and the AR glasses, HMD, or the like may receive the virtual object from the virtual object generating device 910 and display it in augmented reality or virtual reality.
  • the user can immediately check the virtual object created in real time through augmented reality or virtual reality.
  • a detailed method of generating an object in a virtual space will be reviewed again below in FIG. 10 .
  • FIG. 10 is a diagram illustrating an example of a photographing method of a photographing device according to an embodiment of the present invention.
  • At least one photographing device 900 may continuously photograph at least one object 1000 existing in a blind spot. With respect to the time axis, at least one object 1000 or 1010 may exist at various points in time in the image frames and measurement frames captured by the photographing device 900.
  • This embodiment shows an example of an image frame for convenience of description.
  • the virtual object generating device 910 may classify the image frame and the measurement frame in units of objects.
  • The virtual object generating apparatus 910 may assign the same identification information (or index) to the image frames and measurement frames in which the same object exists. For example, first identification information (or a first index; hereinafter referred to as identification information) may be assigned to the plurality of image frames 1020, 1022, and 1024 in which a first object 1000 is present, and second identification information may be assigned to the plurality of image frames 1030, 1032, and 1034 in which a second object 1010 is present. No identification information may be assigned to the image frames 1040 and 1042 in which no object exists, or third identification information may be assigned to them.
  • the image frames arranged along the time axis can be divided into three groups, that is, A (1050), B (1060), and C (1070).
  • identification information corresponding to each object may be assigned to one video frame and the measurement frame. That is, a plurality of pieces of identification information may be assigned to one image frame and one measurement frame.
  • As described above, since the photographing device 900 drives the depth camera 902 and the lidar 904 simultaneously, the image frames generated by the depth camera 902 and the measurement frames measured by the lidar can be synchronized in time. Therefore, the virtual object generating device 910 may determine whether the same object exists only in the image frames, identify the time period of the image frames in which the same object exists, and assign the same identification information to the image frames of that period; the measurement frames generated during the identified period are regarded as a period in which the same object exists and may be assigned the same identification information.
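  • A sketch of this grouping follows; `same_object` stands in for the image-recognition comparison described below, and the frame representation is an assumption.

```python
def assign_identifiers(image_frames, same_object):
    """Group consecutive image frames showing the same object and give
    each group one identification index. `image_frames` is a list of
    detected-object descriptors (None when no object is present);
    `same_object(a, b)` is the (assumed) recognition comparison."""
    ids = []
    next_id = 0
    prev = None
    for obj in image_frames:
        if obj is None:
            ids.append(None)                 # no identification information
            prev = None
        elif prev is not None and same_object(prev, obj):
            ids.append(ids[-1])              # same object: reuse the index
            prev = obj
        else:
            ids.append(next_id)              # new object: new index
            next_id += 1
            prev = obj
    # Measurement frames are time-synchronized with the image frames,
    # so the same index list applies to them one-to-one.
    return ids
```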
  • the virtual object generating device 910 may determine whether objects existing in an image frame are the same using various conventional image recognition algorithms.
  • the virtual object generating device 910 may use an artificial intelligence model as an example of an image recognition algorithm. Since the method itself for determining whether objects in an image are the same is a well-known technique, a detailed description thereof will be omitted.
  • FIG. 11 is a flowchart illustrating an example of a method of generating an object in a virtual space according to an embodiment of the present invention.
  • the virtual object generating apparatus 910 divides a first background area and a first object area from an image frame obtained by photographing a certain space with a depth camera (S1100).
  • the virtual object generating device 910 may distinguish a background and an object from a plurality of image frames in which the same object is photographed.
  • a background and an object may be distinguished for at least one image frame (eg, group A of 1020, 1022, and 1024) in which the same object exists.
  • the virtual object generating apparatus 910 may distinguish the background and the plurality of objects, respectively.
  • An example of a method for distinguishing a background from an object in an image frame will be reviewed again in FIG. 12 .
  • the virtual object generating device 910 distinguishes a second background area and a second object area from a measurement frame obtained by measuring a certain space with lidar (S1110).
  • the virtual object generator 910 may distinguish a background and an object for a plurality of measurement frames in which the same object is measured. For example, as shown in FIG. 11, a background and an object may be distinguished for a plurality of measurement frames to which the same identification information is assigned. As another embodiment, when a plurality of objects exist in the measurement frame, the virtual object generator may distinguish the background and the plurality of objects, respectively.
  • An example of a method for distinguishing a background and an object in a measurement frame is shown in FIG. 12 .
  • the depth camera 902 and the LIDAR 904 are spaced apart from each other by a predetermined distance within the photographing device 900, and therefore, the photographing angles of the video frame and the measurement frame are different from each other.
  • Accordingly, the positions of the pixels of the image frame and the points of the measurement frame may not be mapped on a one-to-one (1:1) basis.
  • To solve this, this embodiment uses a grid space.
  • The virtual object generator 910 arranges the pixels of the first object area, divided from the image frame, in a first grid space composed of a grid of a predefined size according to their depth values (S1120), and likewise arranges the points of the second object area, divided from the measurement frame, in a second grid space composed of a grid of the same size according to their distance values (S1120). Since the image frame and the measurement frame are data obtained by photographing the same space, the objects existing in the first object area and the second object area are the same object.
  • the first lattice space and the second lattice space are spaces having grids of the same size in the virtual space. An example of a grid space is shown in FIG. 13 .
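  • A sketch of placing 3D points (pixels back-projected with their depth values, or lidar points) into such a grid space follows; the 1 mm cell size echoes the example given with FIG. 13, and the data layout is an assumption.

```python
from collections import defaultdict

CELL = (0.001, 0.001, 0.001)  # d1, d2, d3 in metres (1 mm, per the example)

def to_grid(points, cell=CELL):
    """Bucket 3D points into unit cells of the grid space. Returns a dict
    mapping integer cell indices (i, j, k) to the points they contain."""
    grid = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell[0]), int(y // cell[1]), int(z // cell[2]))
        grid[key].append((x, y, z))
    return grid
```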
  • the virtual object generator 910 corrects the depth value of the pixel in the first grid space based on the distance value of the point in the second grid space (S1130). Since the pixels of the first object area and the points of the second object area exist in the grid space of the same size, if the position, direction, size, etc. of the first grid space and the second grid space match each other, each point of the second grid space and each pixel of the first lattice space may be mapped.
  • a detailed method of correcting a depth value of a pixel in a grid unit of a grid space using a distance value of a point will be reviewed again in FIG. 15 .
  • the virtual object generating apparatus 910 creates an object (ie, a virtual object) in a virtual space having surface information based on the pixel whose depth value is corrected (S1140).
  • the virtual object generating apparatus 910 corrects the pixel depth value of an object existing in a plurality of image frames to which the same identification information is assigned to the point distance value of an object existing in a plurality of measurement frames to which the same identification information is assigned, and then generates a plurality of A virtual object can be created using the corrected pixels of the image frame of . That is, a virtual object may be created by correcting a pixel depth value of an image frame of an object photographed at various angles and positions.
  • the virtual object generating device 910 may generate a 3D virtual object using various types of 3D modeling algorithms.
  • As another embodiment, the virtual object generating apparatus 910 may use an artificial intelligence model to generate a 3D object having surface information from the pixels having depth values.
  • As another embodiment, the virtual object generator 910 may extract a point cloud, consisting of the pixels that represent corners and vertices among the pixels constituting the object, and input the point cloud to a 3D modeling algorithm to create a virtual object. An example of generating a virtual object using a point cloud is reviewed again in FIG. 16.
  • As another embodiment, a virtual object can also be created based on the distance values of the points of the measurement frame, but the resolution of the measurement frame is generally lower than that of the image frame, so when a virtual object is created from the measurement frame alone, the edges of the object are crushed. Therefore, in this embodiment, a virtual object is created using the corrected depth values of the pixels of the image frame, which has a relatively high resolution.
  • FIG. 12 is a diagram illustrating an example of a method of distinguishing a background and an object of a video frame and a measurement frame according to an embodiment of the present invention.
  • Referring to FIG. 12, there are a first artificial intelligence model 1200 that distinguishes the background and the object of an image frame, and a second artificial intelligence model 1210 that distinguishes the background and the object of a measurement frame.
  • Each of the artificial intelligence models 1200 and 1210 is a trained model using pre-constructed learning data and may be implemented as a Convolutional Neural Network (CNN) or the like. Since the process of learning and generating an artificial intelligence model itself is already a well-known technology, a description thereof will be omitted.
  • The first artificial intelligence model 1200 is a model generated through machine learning to distinguish the background and the object in an image frame when an image frame is input. For example, if the first artificial intelligence model 1200 is trained to recognize a chair, it can distinguish the region in which a chair exists in the image frame (that is, the pixels corresponding to the chair within the image frame).
  • The second artificial intelligence model 1210 is a model generated through machine learning to distinguish the background and the object in a measurement frame when a measurement frame is input. For example, if the second artificial intelligence model 1210 is trained to recognize a chair, it can distinguish the region in which a chair exists in the measurement frame (that is, the points corresponding to the chair within the measurement frame).
  • FIG. 13 is a diagram showing an example of a lattice space according to an embodiment of the present invention.
  • the grid space 1300 is a space in which a region in the virtual space is divided into unit grids 1310 of a predetermined size.
  • For example, the grid space 1300 may be a space composed of unit cells 1310 whose width, length, and height are d1, d2, and d3, respectively.
  • d1, d2, and d3 may all have the same size (eg, 1 mm) or different sizes.
  • FIG. 14 is a diagram illustrating an example of displaying objects classified in a video frame in a grid space according to an embodiment of the present invention.
  • the virtual object generating apparatus 910 may display pixels of an object area divided in an image frame in the grid space 1400 using the depth values.
  • pixels 1410 are schematically illustrated for ease of understanding.
  • the virtual object generating apparatus 910 may determine the 3D coordinate value (or vector value of the pixel, etc.) of each pixel in the grid space by mapping the pixels of the object in the image frame to the grid space 1400 . That is, a 3D coordinate value of each pixel of the object may be generated using a predefined point in the lattice space as a reference point (0,0,0 or X,Y,Z).
  • the virtual object generating apparatus 910 may map points representing objects in the measurement frame to the grid space. If the point of the object in the measurement frame is displayed in the grid space, it can also be displayed in a shape similar to that of FIG. 14.
  • FIG. 15 is a diagram illustrating an example of a method of correcting a depth value of a pixel of an object according to an embodiment of the present invention.
  • FIG. 15 shows a first grid 1500 in the first grid space, to which the object of the image frame is mapped, and a second grid 1510 at the corresponding position in the second grid space, to which the object of the measurement frame is mapped.
  • the first lattice space in which the pixels of the object in the image frame are mapped and the second lattice space in which the points of the object in the measurement frame are mapped are first matched in position, direction, size, etc.
  • For convenience of explanation, this embodiment shows an example in which one pixel 1502 and one point 1512 are present in the first grid 1500 and the second grid 1510, respectively, but a single grid 1500 or 1510 may contain a plurality of pixels or a plurality of points. Also, the numbers of pixels and points present in the first grid 1500 and the second grid 1510 may be the same or different.
  • The virtual object generator 910 corrects the depth value of the pixel 1502 of the first grid 1500 based on the distance value of the point 1512 of the second grid 1510. Since the distance values of the points measured by the lidar are more accurate, the virtual object generator 910 corrects the depth values of the pixels based on the distance values of the points. For example, if the coordinate value of the pixel 1502 of the first grid 1500 in the grid space differs from the coordinate value of the point of the second grid 1510, the virtual object generator 910 corrects (1520) the pixel 1502 of the first grid 1500 according to the coordinate value of the point of the second grid 1510.
  • As another embodiment, the positions indicated by the pixels of the first grid 1500 and the points of the second grid 1510 may not map one-to-one. In that case, using a plurality of points that exist in the second grid 1510, or a plurality of points in the neighboring grids above, below, to the left, and to the right of the second grid 1510, the values lying between the points can be interpolated to determine the distance value at the point coordinate corresponding to the coordinate value of a pixel, and the coordinate values of the pixels of the first grid 1500 may be corrected using the distance values generated through the interpolation.
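  • A sketch of this per-cell correction follows: for each pixel, the lidar distance at the pixel's position is estimated by inverse-distance interpolation over the lidar points gathered from the matching cell and its neighbors, and the pixel's depth is replaced by the estimate. The weighting scheme is an assumption; the text above only calls for interpolation in general terms.

```python
def corrected_depth(pixel_xy, lidar_points, eps=1e-6):
    """Inverse-distance-weighted interpolation of lidar distance values
    at a pixel's (x, y) position. `lidar_points` holds (x, y, distance)
    tuples gathered from the matching cell and its neighboring cells."""
    px, py = pixel_xy
    num = den = 0.0
    for lx, ly, dist in lidar_points:
        w = 1.0 / (((px - lx) ** 2 + (py - ly) ** 2) ** 0.5 + eps)
        num += w * dist
        den += w
    return num / den if den else None  # None: no lidar point nearby
```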
  • FIG. 16 is a diagram illustrating an example of a method of generating a 3D virtual object according to an embodiment of the present invention.
  • the virtual object generating apparatus 910 may generate a 3D virtual object 1620 including surface information by inputting a point cloud 1610 to a 3D modeling algorithm 1600 .
  • a point cloud can be composed of points representing key points that can define an object, such as vertices and edges of an object.
  • Various conventional methods of extracting a point cloud from an image frame captured by a depth camera may be applied to this embodiment. Since the method of extracting a point cloud itself is a well-known technique, an additional description thereof will be omitted.
  • the virtual object generating apparatus 910 maps the object extracted from the image frame to the lattice space, and corrects the distance value (or coordinate value) of each pixel mapped to the lattice space in the same manner as shown in FIG. 15 . Then, a point cloud to be used for generating a 3D virtual object is extracted from the corrected distance value (or coordinate value) of each pixel. An example of extracting a point cloud for the object of FIG. 14 is shown in FIG. 17 .
  • the virtual object generating device 910 may use an artificial intelligence model such as machine learning as a 3D modeling algorithm 1600 .
  • Various conventional algorithms for generating a 3D object based on a point cloud may be applied to this embodiment. Since the method of generating a 3D object using a point cloud itself is a well-known technology, a detailed description thereof will be omitted.
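  • As one such conventional choice, Poisson surface reconstruction over the corrected point cloud yields a mesh with surface information. The sketch below uses the Open3D library as an illustrative stand-in for the 3D modeling algorithm 1600, not as the specific algorithm of this embodiment.

```python
import numpy as np
import open3d as o3d

def point_cloud_to_mesh(points: np.ndarray) -> o3d.geometry.TriangleMesh:
    """Reconstruct a surface mesh from an (N, 3) array of corrected
    3D points, e.g. the chair point cloud 1710."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.estimate_normals()  # Poisson reconstruction needs normals
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    return mesh
```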
  • FIG. 17 is a diagram illustrating an example of extracting a point cloud for generating a 3D virtual object according to an embodiment of the present invention.
  • an example of extracting a point cloud 1710 for a chair is shown.
  • Various conventional methods of extracting a point cloud may be applied to this embodiment.
  • As another embodiment, a 3D virtual object may be generated by correcting the depth values of the pixels in a plurality of image frames to which the same identification information is assigned, that is, a plurality of image frames in which the same object is photographed, and then extracting a point cloud.
  • Referring to FIG. 18, the virtual object generator may create a virtual object 1800 including surface information using the point cloud.
  • FIG. 19 is a diagram showing the configuration of an example of a virtual object generating device according to an embodiment of the present invention.
  • the virtual object generating device 910 includes a first object extraction unit 1900, a second object extraction unit 1910, a first grid arrangement unit 1920, a second grid arrangement unit 1930, It includes a correction unit 1940 and an object creation unit 1950.
  • The virtual object generator 910 may be implemented as a computing device including a memory, a processor, and an input/output device, or as a server, a cloud system, or the like; each component may be implemented in software, loaded into the memory, and then executed by the processor.
  • the first object extraction unit 1900 distinguishes a first background area and a first object area from an image frame obtained by photographing a certain space with a depth camera.
  • the second object extraction unit 1910 distinguishes a second background area and a second object area from a measurement frame obtained by measuring the predetermined space with LIDAR. Classification of the background and the object can be performed using an artificial intelligence model, and an example thereof is shown in FIG. 12 .
  • the first grid arrangement unit 1920 arranges pixels of the first object area according to depth values in a first grid space including a grid having a predefined size.
  • the second grid arranging unit 1930 arranges points of the second object area according to distance values in a second grid space including a grid having a predefined size.
  • An example of the grid space is shown in FIG. 13, and an example of mapping the pixels of an object extracted from an image frame to the grid space is shown in FIG. 14.
  • the correction unit 1940 corrects the depth value of the pixel in the first grid space based on the distance value of the point in the second grid space.
  • An example of a method of correcting through grid space comparison is shown in FIG. 15 .
  • the object generator 1950 creates a virtual object having surface information based on pixels whose depth values are corrected.
  • the object generator 1950 may create an object in a 3D virtual space using all pixels. However, in this case, since the amount of computation increases, the object generator 1950 may create a virtual object by creating a point cloud, examples of which are shown in FIGS. 16 to 18 .
  • Each embodiment of the present invention can also be implemented as computer readable codes on a computer readable recording medium.
  • a computer-readable recording medium includes all types of recording devices in which data that can be read by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, SSD, and optical data storage devices.
  • the computer-readable recording medium may be distributed to computer systems connected through a network to store and execute computer-readable codes in a distributed manner.

Abstract

Disclosed are a safety enhancement method for industrial equipment, and a device thereof. The safety enhancement device uses one or more imaging devices installed at a blind spot of a piece of industrial equipment to generate a virtual object for one or more objects present in the blind spot, and displays the virtual object in augmented reality or virtual reality.

Description

Safety Enhancement Method for Industrial Equipment, and Device Thereof
An embodiment of the present invention relates to a method and apparatus for enhancing the safety of various industrial equipment such as trucks, heavy equipment, and construction equipment, and more particularly, to a safety enhancement method and device for preventing safety accidents that may occur in the blind spots of industrial equipment.
Industrial equipment such as trucks, heavy equipment, and construction equipment has blind spots that the operator cannot see directly with the naked eye. Safety accidents can occur when workers operate industrial equipment without recognizing people or objects located in blind spots. To prevent this, cameras or proximity sensors are installed at the blind spots of trucks, heavy equipment, and the like. However, if the blind spot is wide, it is difficult to cover all of it with one camera, and if the angle of view of the camera is widened, the captured image suffers lens distortion, making it difficult for the operator to judge exact distances. In the case of a sensor, it is only possible to know that a nearby object exists; it is difficult to know exactly where the object is or what kind of object it is.
The technical problem to be solved by the embodiment of the present invention is to provide a safety enhancement method for industrial equipment, and a device thereof, that create an object in a blind spot as a virtual object and present it in virtual reality or augmented reality, so that workers can easily identify objects located around the industrial equipment and safety is enhanced.
An example of a method for enhancing the safety of industrial equipment according to an embodiment of the present invention for achieving the above technical problem includes: creating, using at least one photographing device installed at a blind spot of the industrial equipment, a virtual object for at least one object present in the blind spot; and displaying the virtual object in augmented reality or virtual reality.
An example of a safety enhancement device according to an embodiment of the present invention for achieving the above technical problem includes: a virtual object creation unit that creates, using at least one photographing device installed at a blind spot of industrial equipment, a virtual object for at least one object present in the blind spot; and a display unit that displays the virtual object in augmented reality or virtual reality.
According to an embodiment of the present invention, a worker can accurately and easily identify unknown objects located around industrial equipment through virtualized objects, thereby preventing safety accidents. As another example, an object can be monitored from the moment it enters the danger zone of the industrial equipment until the moment it leaves for the safe zone, which helps prevent safety accidents.
FIG. 1 is a diagram showing an example in which a photographing device for generating a virtual object is installed in industrial equipment according to an embodiment of the present invention;
FIG. 2 is a diagram showing an example of a safety accident prevention method for industrial equipment according to an embodiment of the present invention;
FIG. 3 is a diagram showing an example in which objects around industrial equipment are created as virtual objects and displayed in a virtual space according to an embodiment of the present invention;
FIG. 4 is a diagram showing an example of an overall system for enhancing the safety of industrial equipment according to an embodiment of the present invention;
FIG. 5 is a diagram showing an example of a safety enhancement method using object recognition according to an embodiment of the present invention;
FIG. 6 is a diagram showing another example of a safety enhancement method using object recognition according to an embodiment of the present invention;
FIG. 7 is a flowchart showing an example of a method for enhancing the safety of industrial equipment according to an embodiment of the present invention;
FIG. 8 is a diagram showing the configuration of an example of a safety enhancement device according to an embodiment of the present invention;
FIG. 9 is a diagram showing an example of a schematic configuration of a system for creating objects in a virtual space according to an embodiment of the present invention;
FIG. 10 is a diagram showing an example of a photographing method of a photographing device according to an embodiment of the present invention;
FIG. 11 is a flowchart illustrating an example of a method for generating an object in a virtual space according to an embodiment of the present invention;
FIG. 12 is a diagram showing an example of a method for distinguishing the background and the object of an image frame and a measurement frame according to an embodiment of the present invention;
FIG. 13 is a diagram showing an example of a grid space according to an embodiment of the present invention;
FIG. 14 is a diagram showing an example of displaying objects classified in an image frame in a grid space according to an embodiment of the present invention;
FIG. 15 is a diagram showing an example of a method of correcting a depth value of a pixel of an object according to an embodiment of the present invention;
FIG. 16 is a diagram showing an example of a method for generating a 3D virtual object according to an embodiment of the present invention;
FIG. 17 is a diagram showing an example of extracting a point cloud for generating a 3D virtual object according to an embodiment of the present invention;
FIG. 18 is a diagram showing an example of creating a 3D virtual object according to an embodiment of the present invention; and
FIG. 19 is a diagram showing the configuration of an example of a virtual object generating device according to an embodiment of the present invention.
Hereinafter, a method and device for enhancing the safety of industrial equipment according to embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a diagram showing an example in which photographing devices for generating virtual objects are installed on industrial equipment according to an embodiment of the present invention.
Referring to FIG. 1, a truck 100 carries at least one photographing device 110, 112, 114 that photographs objects located nearby. For example, the photographing devices 110, 112, 114 may be installed at the rear and at the front sides of the truck, which are blind spots that the driver cannot see directly with the naked eye. In one embodiment, the photographing devices 110, 112, 114 may be arranged to photograph not only the blind spots but the entire surroundings of the industrial equipment.
Although the present embodiment presents the truck 100 as an example of industrial equipment for ease of understanding, this is merely one example; the industrial equipment may be any of various machines such as heavy equipment or construction equipment. For example, forklifts, cranes, excavators, and other machines used in construction, transportation, factories, and the like may correspond to the industrial equipment of the present embodiment. Hereinafter, however, the truck 100 is described as the main example of industrial equipment for convenience of description.
The photographing devices 110, 112, 114 are not ordinary cameras attached to a vehicle but include a depth camera for acquiring the photographing data used to generate a virtual object. In one embodiment, a virtual object may be generated from data captured with the depth camera alone, but in that case the shape of the virtual object is distorted by the lens distortion of the camera. To address this, the photographing devices 110, 112, 114 may include LiDAR (Light Detection and Ranging) together with the depth camera. The present embodiment does not exclude photographing devices 110, 112, 114 composed only of a depth camera, but for convenience of description the following assumes that the photographing devices 110, 112, 114 include both a depth camera and a LiDAR.
Using a depth camera with a wide angle of view, a small number of photographing devices 110, 112, 114 can cover the entire surrounding area of the industrial equipment, including its blind spots, without gaps. However, a wide-angle depth camera suffers more lens distortion than a narrow-angle one. When a LiDAR is included, the distortion of the depth camera is corrected with the LiDAR before the virtual object is generated, so even a depth camera with severe lens distortion can be used. A concrete method of generating an accurate virtual object (for example, a digital twin) of an object by using the depth camera and the LiDAR together is described with reference to FIG. 9 and subsequent figures.
FIG. 2 is a diagram showing an example of a method of preventing safety accidents of industrial equipment according to an embodiment of the present invention.
Referring to FIG. 2A, an object 210 may exist around a truck 200. Although the present embodiment shows a person as an example of the object 210, the object 210 may be any of various things or animals that the truck could strike as it moves, such as goods or facilities. Not every case in which an object 210 exists around the truck 200 is a safety-accident risk situation. For example, when the truck 200 drives straight on a road, a vehicle in another lane or a person on the sidewalk poses no risk of a safety accident. Notifying the driver of a risk situation merely because an object 210 is near the truck 200 would rather interfere with driving.
Referring to FIG. 2B, the safety enhancement device determines whether a nearby object 210 constitutes a risk situation according to, for example, the moving direction of the truck 200. For example, when the truck 200 turns right, an object 210 located within a certain area 220 on the right side of the truck 200 may be judged a safety-accident risk factor. To this end, the safety enhancement device may predefine, for each type of industrial equipment, the danger area that applies while the equipment moves. For example, if the industrial equipment is a truck, the danger area for straight driving may be predefined as a certain area ahead of the truck in its direction of travel, and the danger areas for right and left turns as certain areas on the right and left sides of the truck, respectively. For an excavator, the entire area within a certain radius around the excavator may be defined as the danger area. Since each kind of industrial equipment moves differently, the safety enhancement device may register and store in advance the danger areas for the movements of each kind of equipment.
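A minimal sketch of such a pre-registered danger-area table is shown below. The polygon coordinates, class names, and helper function are illustrative assumptions, not part of the original disclosure, which only specifies that danger areas are registered per equipment type and movement.

from dataclasses import dataclass

@dataclass
class DangerArea:
    name: str
    polygon: list  # (x, y) vertices in the equipment's local frame, meters

# Danger areas keyed by (equipment type, motion), as described above.
DANGER_AREAS = {
    ("truck", "straight"):   [DangerArea("front", [(-1.5, 0), (1.5, 0), (1.5, 10), (-1.5, 10)])],
    ("truck", "right_turn"): [DangerArea("right_side", [(1.5, -2), (4.0, -2), (4.0, 8), (1.5, 8)])],
    ("truck", "left_turn"):  [DangerArea("left_side", [(-4.0, -2), (-1.5, -2), (-1.5, 8), (-4.0, 8)])],
    # For an excavator, everything within a fixed radius is dangerous;
    # this entry would be handled by a radius check rather than a polygon.
    ("excavator", "any"):    [DangerArea("work_radius", [])],
}

def lookup_danger_areas(equipment: str, motion: str):
    """Return the danger areas registered for this equipment/motion pair."""
    return DANGER_AREAS.get((equipment, motion), [])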
The safety enhancement device may receive movement information of the industrial equipment (for example, the steering angle of a truck) directly from the equipment, or may determine the moving direction and the like of the equipment through a GPS receiver or direction sensor installed on the industrial equipment, the photographing device, or the safety enhancement device itself. As another example, the safety enhancement device may analyze the image frames captured by the photographing device to determine the moving direction or speed of the industrial equipment. Besides these, various conventional methods of determining the moving direction or speed of industrial equipment may be applied to the present embodiment, which is not limited to any one of them.
FIG. 3 is a diagram showing an example in which an object around industrial equipment is generated as a virtual object and displayed in a virtual space according to an embodiment of the present invention.
Referring to FIGS. 2 and 3 together, when an object 210 exists in the predefined danger area 220 while the truck 200 turns right, the safety enhancement device judges this to be a safety-accident risk situation, generates a virtual object for the real-space object 210, and displays it in augmented reality or virtual reality to notify the driver. Because the safety enhancement device displays the objects around the truck not as a two-dimensional image but as a three-dimensional image in real time, as in FIG. 3, the driver can examine the truck 200 and the object 210 from various angles in augmented reality or virtual reality. For example, the safety enhancement device can display the truck image and the virtual object, from any angle the driver wants, on a head-up display (HUD) of the truck, AR (Augmented Reality) glasses, or various other display devices.
FIG. 4 is a diagram showing an example of the overall system for enhancing the safety of industrial equipment according to an embodiment of the present invention.
Referring to FIG. 4, the system for enhancing the safety of industrial equipment includes a photographing device 400, a safety enhancement device 410, and a display device 420. The photographing device 400 corresponds to the devices 110, 112, 114 described in FIG. 1 that photograph the surroundings of the industrial equipment including its blind spots, and may include a depth camera and a LiDAR. Depending on the type of industrial equipment, photographing devices may be located at several places on the equipment.
The safety enhancement device 410 generates virtual objects for the objects located around the industrial equipment by using the photographing data of the photographing device 400. The display device 420 then displays the virtual objects in augmented reality or virtual reality. The display device 420 may vary with the embodiment, for example, a HUD, AR glasses, or an ordinary display device.
FIG. 5 is a diagram showing an example of a safety enhancement method using object recognition according to an embodiment of the present invention.
Referring to FIG. 5, when an object exists in the danger area of the industrial equipment, the safety enhancement device 410 raises a visual or audible alarm so that the user is aware of it, and generates a virtual object for the nearby object and displays it in augmented reality or virtual reality.
When an object exists in the danger area of industrial equipment, a risk of a safety accident exists. However, if the object is, say, a branch or litter blown by the wind, it is irrelevant to safety accidents. If the safety enhancement device 410 alarmed the user every time such an object entirely unrelated to safety accidents appeared near the industrial equipment, it would rather interfere with the work.
In one embodiment, the safety enhancement device 410 may recognize the type of the object located in the danger area to determine whether a real risk of a safety accident exists. An artificial intelligence model 500 may be used to identify the object type. For example, the safety enhancement device 410 may input a virtual object 510 (that is, a three-dimensional image) generated from the object located in the danger area into the artificial intelligence model 500 to recognize the object type (520). The artificial intelligence model 500 is a model trained to recognize object types from three-dimensional images. Since artificial intelligence models for object recognition are themselves already widely known, a detailed description thereof is omitted.
Using a list of risk-factor objects or a list of non-risk objects, the safety enhancement device 410 may determine whether the object type identified through the artificial intelligence model 500 corresponds to a risk factor or not. For example, if leaves and litter are defined in the non-risk object list and the type of the virtual object identified through the artificial intelligence model 500 is a leaf, the safety enhancement device 410 may judge that the object is not a safety-accident factor and refrain from raising an alarm to the user.
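The sketch below illustrates this filtering step under stated assumptions: the classifier interface and the contents of the two lists are hypothetical, as the disclosure only specifies that a trained model returns an object type which is then checked against risk and non-risk lists.

NON_RISK_TYPES = {"leaf", "litter", "plastic_bag"}
RISK_TYPES = {"person", "bicycle", "kickboard", "vehicle"}

def should_alarm(virtual_object_3d, classifier) -> bool:
    """Classify the 3D virtual object and decide whether to raise an alarm."""
    object_type = classifier.predict(virtual_object_3d)  # e.g. "person"
    if object_type in NON_RISK_TYPES:
        return False                  # wind-blown debris etc.: suppress the alarm
    return object_type in RISK_TYPES  # alarm only on known risk factors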
FIG. 6 is a diagram showing another example of a safety enhancement method using object recognition according to an embodiment of the present invention.
Referring to FIG. 6, the safety enhancement device 410 may recognize an object that suddenly approaches the industrial equipment and notify the user. For example, a kickboard rider may suddenly cut in front of a truck while it is being driven, or a person may suddenly enter the working radius of a forklift. Because it may be difficult for the operator of the industrial equipment to react to such a sudden appearance, it needs to be detected and announced in advance.
To this end, the safety enhancement device 410 may set the measurement range of the LiDAR and the measurement range of the depth camera of the photographing device differently. That is, the radius of a first area 620 that the LiDAR measures around the industrial equipment 600 may be larger than that of a second area 610 that the depth camera photographs. Although the present embodiment depicts circular areas 610, 620 around the industrial equipment 600 for ease of understanding, if the industrial equipment is the truck 100 of FIG. 1, the first area 620 and the second area 610 may be triangular. That is, the shapes of the first area 620 and the second area 610 may vary with the shape of the industrial equipment and the installation positions of the photographing devices; in any case, the first area 620 can be made larger than the second area 610.
Based on the measurement data of the LiDAR, when an object 630 enters the predefined first area 620, the safety enhancement device 410 notifies the user visually or audibly. For example, in the case of the truck 100 of FIG. 1, the safety enhancement device 410 may display a caution color or image on a display device such as the HUD, or output a predefined sound. Thereafter, when the object 630 enters the second area 610, the photographing range of the depth camera, the safety enhancement device 410 generates a virtual object from the photographing data captured and measured by the depth camera and the LiDAR, and displays it to the user in augmented reality or virtual reality. The user is therefore already aware that the object is approaching the industrial equipment before the virtual object is displayed.
If the object 630 enters the first area 620 (the measurement area) and then leaves it again toward the outside (that is, the safe zone) without entering the second area 610 (the photographing area), the safety enhancement device 410 may release the caution level without generating a virtual object. In other words, the safety enhancement device 410 may track the object from the moment it enters the danger zone of the first area 620 until it leaves, informing the user of the safety-accident risk situation throughout.
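A minimal sketch of this two-tier alerting follows: the LiDAR-only outer area 620 triggers a caution, and the inner depth-camera area 610 triggers virtual-object generation. The radii and function name are illustrative assumptions.

LIDAR_RADIUS_M = 15.0   # first area 620, covered by LiDAR only
CAMERA_RADIUS_M = 7.0   # second area 610, covered by the depth camera

def update_alert_state(distance_m: float, state: str) -> str:
    """Advance the per-object alert state from the latest LiDAR distance."""
    if distance_m <= CAMERA_RADIUS_M:
        return "generate_virtual_object"  # inside area 610: build and display the twin
    if distance_m <= LIDAR_RADIUS_M:
        return "caution"                  # inside area 620 only: visual/audible caution
    if state in ("caution", "generate_virtual_object"):
        return "released"                 # left area 620 without entering area 610
    return "idle"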
FIG. 7 is a flowchart illustrating an example of a method for enhancing the safety of industrial equipment according to an embodiment of the present invention.
Referring to FIG. 7, the safety enhancement device 410 photographs the surroundings of the industrial equipment with the photographing device and generates virtual objects for the objects located nearby (S700). An example of a method of generating a virtual object using a photographing device composed of a depth camera and a LiDAR is shown in FIG. 9 and subsequent figures.
The safety enhancement device 410 displays the virtual object in augmented reality or virtual reality (S710). In one embodiment, the safety enhancement device 410 may generate and display a virtual object in real time when the object is located in the predefined danger area of the industrial equipment. In another embodiment, the safety enhancement device 410 may recognize the type of the virtual object located around the industrial equipment through the artificial intelligence model as in FIG. 5, determine whether that object type corresponds to a predefined safety-accident factor, and provide a visual or audible alarm to the user only when it does.
FIG. 8 is a diagram showing the configuration of an example of the safety enhancement device according to an embodiment of the present invention.
Referring to FIG. 8, the safety enhancement device 410 includes a virtual object generation unit 800 and a display unit 810. In one embodiment, the safety enhancement device 410 may be implemented as a computing device including a memory, a processor, and an input/output device. In this case, each component may be implemented in software, loaded into the memory, and then executed by the processor.
The virtual object generation unit 800 generates a virtual object for at least one object existing in the blind spot of the industrial equipment using at least one photographing device installed for the blind spot. The photographing device may include a depth camera and a LiDAR. In this case, the virtual object generation unit 800 may correct the depth values of the pixels in the region corresponding to the object in the image frame obtained with the depth camera, using the distance values of the points in the measurement frame obtained with the LiDAR, and then generate the virtual object based on the corrected pixel depth values. A method of generating a virtual object using a depth camera and a LiDAR is shown in FIG. 9 and subsequent figures.
The display unit 810 displays the virtual object in augmented reality or virtual reality. In one embodiment, the display unit 810 may recognize the type of the virtual object using a pre-trained artificial intelligence model, determine whether that type is a safety-accident factor, and raise an alarm. In another embodiment, when the measurement frames of the LiDAR show that a moving object exists in the first area, the measurement area of the LiDAR, the display unit 810 may indicate the presence of the moving object visually or audibly; when the moving object enters the second area of the depth camera, whose photographing area is narrower than the first area, the display unit 810 may generate and display a virtual object or output an alarm such as a warning.
FIG. 9 is a diagram showing an example of the schematic configuration of a system for generating an object in a virtual space according to an embodiment of the present invention.
Referring to FIG. 9, a photographing device 900 includes a depth camera 902 and a LiDAR 904. The depth camera 902 is a camera that photographs a certain space in which an object 930 exists and provides the depth value of each pixel together with the image. In the present embodiment, the object 930 means the target to be generated in the virtual space. The depth camera 902 itself is a widely known technology, and various kinds of conventional depth cameras 902 may be used in the present embodiment. In the present embodiment, the depth camera 902 may capture still images or video; however, for convenience of description, the following mainly describes the case where the depth camera captures video. The picture data containing the depth value of each pixel obtained with the depth camera 902 is called an image frame. That is, a video consists of a certain number of image frames per second.
The LiDAR 904 fires a laser into the space, measures the signal returning from each point (that is, each reflection point) in the space, and outputs distance values for a plurality of points in the space. The data consisting of the distance values of the plurality of points measured by the LiDAR 904 for the space at a certain moment is called a measurement frame. The number of points the LiDAR 904 measures, that is, the resolution of the measurement frame, may differ from LiDAR to LiDAR. The LiDAR 904 itself is a widely known technology, and various conventional LiDARs 904 may be used in the present embodiment.
The photographing device 900 drives the depth camera 902 and the LiDAR 904 simultaneously to photograph and measure the object 930 in the space. In various embodiments, the expressions 'photographing' by the photographing device 900 or 'measuring' by the photographing device may be interpreted as the photographing of the depth camera 902 and the measuring of the LiDAR 904 occurring simultaneously. The number of image frames the depth camera 902 generates per second and the number of measurement frames the LiDAR 904 generates per second may be equal or different depending on the embodiment. Likewise, the resolution of the image frames and that of the measurement frames may be equal or different. However, since the depth camera 902 and the LiDAR 904 are driven simultaneously to generate image frames and measurement frames for the same space, the two streams can be mapped to the same time axis and synchronized.
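One possible realization of this time-axis synchronization is sketched below: each measurement frame is paired with the image frame whose timestamp is nearest. The frame-rate handling and data layout are assumptions for illustration only.

import bisect

def synchronize(image_frames, measurement_frames):
    """Pair each LiDAR measurement frame with the nearest-in-time image frame.

    Both inputs are non-empty lists of (timestamp_seconds, frame_data)
    sorted by timestamp.
    """
    image_times = [t for t, _ in image_frames]
    pairs = []
    for t_meas, meas in measurement_frames:
        i = bisect.bisect_left(image_times, t_meas)
        # Choose the closer of the two neighboring image frames.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(image_frames)]
        j = min(candidates, key=lambda j: abs(image_times[j] - t_meas))
        pairs.append((image_frames[j], (t_meas, meas)))
    return pairs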
The virtual object generating device 910 generates an object in the virtual space (that is, a digital twin) for the real-world object 930 by using the image frames obtained with the depth camera 902 together with the measurement frames obtained with the LiDAR 904. FIGS. 1 to 8 were described on the assumption that the virtual object generating device 910 is implemented as part of the safety enhancement device 410, but the virtual object generating device 910 may also be implemented as a device separate from the safety enhancement device 410.
The virtual object generating device 910 may be connected to the photographing device 900 through a wired or wireless communication network (for example, WebRTC) and receive the image frames and measurement frames generated by the photographing device 900 in real time. The virtual object generating device 910 may provide the result of generating the object in the virtual space to a user terminal 920 so that the user can check it.
In another embodiment, the photographing device 900 and the virtual object generating device 910 may be implemented as a single device. For example, they may be implemented as part of various devices that display augmented reality or virtual reality, such as AR (augmented reality) glasses, an HMD (Head Mounted Display), or a wearable device. In yet another embodiment, the photographing device 900 is implemented as part of AR glasses, an HMD, or a wearable device; the photographing device 900 transmits the image frames and measurement frames it captures and measures in real time to the virtual object generating device 910 connected through a wired or wireless communication network, and the AR glasses, HMD, or the like in which the photographing device is implemented receive the virtual object from the virtual object generating device 910 and display it in augmented reality or virtual reality. The user can immediately check the virtual object, generated in real time, through augmented reality or virtual reality. A concrete method of generating an object in the virtual space is described with reference to FIG. 10 and subsequent figures.
FIG. 10 is a diagram showing an example of a photographing method of the photographing device according to an embodiment of the present invention.
Referring to FIGS. 9 and 10, at least one photographing device 900 may continuously photograph at least one object 1000 existing in the blind spot. In the image frames and measurement frames obtained by the photographing device 900, at least one object 1000, 1010 may appear at several moments along the time axis. The present embodiment shows image frames as an example for convenience of description.
The virtual object generating device 910 may classify the image frames and measurement frames in units of objects. The virtual object generating device 910 may assign the same identification information (or index) to the image frames and measurement frames in which the same object appears. For example, it may assign first identification information (or a first index; hereinafter, identification information) to all of the plurality of image frames 1020, 1022, 1024 in which a first object 1000 appears, and second identification information to the plurality of image frames 1030, 1032, 1034 in which a second object 1010 appears. The image frames 1040, 1042 in which no object appears may be given no identification information or may be given third identification information. In the present embodiment, the image frames arranged along the time axis can thus be divided into three groups A (1050), B (1060), and C (1070). In another embodiment, when a plurality of objects appear in an image frame and a measurement frame, identification information corresponding to each object may be assigned to that single frame; that is, a plurality of pieces of identification information may be assigned to one image frame and one measurement frame.
In another embodiment, since the photographing device 900 drives the depth camera 902 and the LiDAR 904 simultaneously, the image frames generated by the depth camera 902 and the measurement frames measured by the LiDAR are synchronized in time. Therefore, the virtual object generating device 910 may determine, from the image frames alone, the time interval during which the same object appears, assign the same identification information to the image frames of that interval, and also regard the measurement frames generated during that interval as containing the same object and assign them the same identification information.
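A simplified sketch of this grouping is shown below, assuming each frame contains at most one object and that a recognition routine can test whether two image frames show the same object; the interval-inheritance step for measurement frames follows the description above. All names are hypothetical.

def label_frames(image_frames, measurement_frames, same_object):
    """Assign an object index to each frame.

    image_frames / measurement_frames: lists of (timestamp, frame) sorted by time.
    same_object(a, b) -> bool: image-recognition test that two image frames
    show the same object (e.g. backed by a trained recognition model).
    """
    labels, next_id = [], 0
    for t, frame in image_frames:
        if labels and same_object(image_frames[len(labels) - 1][1], frame):
            labels.append(labels[-1])   # the same object interval continues
        else:
            next_id += 1
            labels.append(next_id)      # a new object interval starts
    # Measurement frames inherit the label of the image-frame interval
    # they fall into, since the two streams share the same time axis.
    intervals = list(zip((t for t, _ in image_frames), labels))
    meas_labels = []
    for t_meas, _ in measurement_frames:
        nearest = min(intervals, key=lambda iv: abs(iv[0] - t_meas))
        meas_labels.append(nearest[1])
    return labels, meas_labels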
The virtual object generating device 910 may determine whether the objects appearing in the image frames are identical by using various conventional image recognition algorithms. For example, the virtual object generating device 910 may use an artificial intelligence model as one kind of image recognition algorithm. Since methods of determining whether objects in images are identical are themselves already widely known, a detailed description thereof is omitted.
FIG. 11 is a flowchart illustrating an example of a method of generating an object in a virtual space according to an embodiment of the present invention.
Referring to FIG. 11, the virtual object generating device 910 distinguishes a first background region and a first object region in the image frame obtained by photographing a certain space with the depth camera (S1100). The virtual object generating device 910 may separate the background and the object in each of a plurality of image frames in which the same object was photographed. For example, as in FIG. 10, the background and the object may be separated in each of the at least one image frame in which the same object appears (for example, group A of frames 1020, 1022, 1024). In another embodiment, when a plurality of objects exist in an image frame, the virtual object generating device 910 may separate the background and each of the plurality of objects. An example of a method for separating the background and the object in an image frame is described with reference to FIG. 12.
The virtual object generating device 910 distinguishes a second background region and a second object region in the measurement frame obtained by measuring the space with the LiDAR (S1110). The virtual object generating device 910 may separate the background and the object in each of a plurality of measurement frames in which the same object was measured. For example, as in FIG. 10, the background and the object may be separated in each of a plurality of measurement frames to which the same identification information has been assigned. In another embodiment, when a plurality of objects exist in a measurement frame, the virtual object generating device may separate the background and each of the plurality of objects. An example of a method for separating the background and the object in a measurement frame is shown in FIG. 12.
As shown in FIG. 9, the depth camera 902 and the LiDAR 904 are located a certain distance apart within the photographing device 900, so the viewing angles of the image frame and the measurement frame differ. Also, since the resolutions of the image frame and the measurement frame may differ, the positions of the pixels of the image frame and the points of the measurement frame may not map one-to-one (1:1 in scale). To map image frames and measurement frames that differ in these ways, the present embodiment uses a grid space.
Specifically, the virtual object generating device 910 places the pixels of the first object region separated from the image frame into a first grid space consisting of cells of a predefined size according to their depth values (S1120), and likewise places the points of the second object region separated from the measurement frame into a second grid space consisting of cells of a predefined size according to their distance values (S1120). Since the image frame and the measurement frame are data obtained by capturing the same space, the objects in the first object region and the second object region are the same object. The first grid space and the second grid space are spaces in the virtual space having cells of the same size. An example of the grid space is shown in FIG. 13.
The virtual object generating device 910 corrects the depth values of the pixels in the first grid space based on the distance values of the points in the second grid space (S1130). Since the pixels of the first object region and the points of the second object region lie in grid spaces of the same size, aligning the position, orientation, and scale of the first and second grid spaces allows each point of the second grid space to be mapped to the pixels of the first grid space. A concrete method of correcting the pixel depth values cell by cell using the point distance values is described with reference to FIG. 15.
The virtual object generating device 910 generates an object in the virtual space (that is, a virtual object) having surface information based on the depth-corrected pixels (S1140). The virtual object generating device 910 may correct the pixel depth values of the object in the plurality of image frames bearing the same identification information with the point distance values of the object in the plurality of measurement frames bearing the same identification information, and then generate the virtual object from the corrected pixels of the plurality of image frames. That is, the virtual object can be generated by correcting the pixel depth values of image frames of the object captured at various angles and positions.
The virtual object generating device 910 may generate the three-dimensional virtual object using various kinds of three-dimensional modeling algorithms. For example, the virtual object generating device 910 may generate a three-dimensional object having surface information from the depth-valued pixels with an artificial intelligence model. In another embodiment, since performing three-dimensional modeling with all the pixels has the drawback of a heavy computational load, the virtual object generating device 910 may extract a point cloud representing the edges, vertices, and the like among the pixels constituting the object, and input the point cloud into the three-dimensional modeling algorithm to generate the virtual object. An example of generating a virtual object using a point cloud is described with reference to FIG. 16. A virtual object could also be generated from the distance values of the points of the measurement frame; however, since the resolution of the measurement frame is generally lower than that of the image frame, the edges and the like of an object generated from the measurement frame may appear smeared. The present embodiment therefore generates the virtual object using the depth values of the pixels of the relatively high-resolution image frame.
FIG. 12 is a diagram showing an example of a method of distinguishing the background and the object in an image frame and a measurement frame according to an embodiment of the present invention.
Referring to FIG. 12, there are a first artificial intelligence model 1200 that separates the background and the object in an image frame, and a second artificial intelligence model 1210 that separates the background and the object in a measurement frame. Each artificial intelligence model 1200, 1210 is a model trained with previously constructed training data and may be implemented as a CNN (Convolutional Neural Network) or the like. Since the training and construction of artificial intelligence models are themselves already widely known, a description thereof is omitted.
The first artificial intelligence model 1200 is a model generated through machine learning so that, given an image frame, it separates the background and the object in the image frame. For example, if the first artificial intelligence model 1200 has been trained to recognize chairs, it can delineate the region of the image frame in which a chair exists (that is, the pixels of the chair region in the image frame).
The second artificial intelligence model 1210 is a model generated through machine learning so that, given a measurement frame, it separates the background and the object in the measurement frame. For example, if the second artificial intelligence model 1210 has been trained to recognize chairs, it can delineate the region of the measurement frame in which a chair exists (that is, the points corresponding to the chair in the measurement frame).
FIG. 13 is a diagram showing an example of the grid space according to an embodiment of the present invention.
Referring to FIG. 13, the grid space 1300 is a region of the virtual space divided into unit cells 1310 of a fixed size. For example, the grid space 1300 may be a space composed of unit cells 1310 whose width, depth, and height are d1, d2, and d3, respectively. Depending on the embodiment, d1, d2, and d3 may all be the same size (for example, 1 mm) or different sizes.
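A minimal sketch of such a grid space follows: a point in the virtual space is assigned to the unit cell that contains it. The cell sizes and origin chosen here are illustrative assumptions (the embodiment allows d1, d2, d3 to be equal or different).

D1, D2, D3 = 0.001, 0.001, 0.001  # unit-cell size in meters (1 mm each)

def cell_index(x: float, y: float, z: float) -> tuple:
    """Return the (i, j, k) index of the unit cell containing point (x, y, z)."""
    return (int(x // D1), int(y // D2), int(z // D3))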
FIG. 14 is a diagram showing an example in which an object separated from an image frame is displayed in the grid space according to an embodiment of the present invention.
Referring to FIG. 14, the virtual object generating device 910 may place the pixels of the object region separated from the image frame in the grid space 1400 using their depth values. The present embodiment depicts the pixels 1410 schematically for ease of understanding.
By mapping the pixels of the object in the image frame into the grid space 1400, the virtual object generating device 910 can determine the three-dimensional coordinate value of each pixel in the grid space (or the pixel's vector value, etc.). That is, taking a predefined point of the grid space as the reference point (0,0,0 or X,Y,Z), it can generate the three-dimensional coordinate value of each pixel of the object.
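One way this mapping could be realized is sketched below. A pinhole camera model with intrinsics (fx, fy, cx, cy) is assumed for illustration; the embodiment itself only states that each pixel's depth value yields a three-dimensional coordinate relative to a reference point of the grid space.

def pixel_to_grid_point(u: int, v: int, depth_m: float,
                        fx: float, fy: float, cx: float, cy: float) -> tuple:
    """Back-project pixel (u, v) with its depth into 3D camera-frame coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    z = depth_m
    return (x, y, z)  # quantize with cell_index() to locate its unit cell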
In the same way, the virtual object generating device 910 may map the points representing the object in the measurement frame into a grid space. If the points of the object in the measurement frame are displayed in the grid space, they also appear in a shape similar to FIG. 14.
FIG. 15 is a diagram showing an example of a method of correcting the depth values of object pixels according to an embodiment of the present invention.
Referring to FIG. 15, there are shown a first cell 1500 of the first grid space to which the object of the image frame is mapped, and a second cell 1510 at the corresponding position in the second grid space to which the object of the measurement frame is mapped. In the present embodiment, it is assumed that the first grid space, to which the pixels of the object of the image frame are mapped, and the second grid space, to which the points of the object of the measurement frame are mapped, have already been registered in position, orientation, and scale. Also, for convenience of description the present embodiment shows one pixel 1502 and one point 1512 in the first cell 1500 and the second cell 1510, respectively, but a single cell 1500, 1510 may contain a plurality of pixels or points, and the numbers of pixels and points in the first cell 1500 and the second cell 1510 may be equal or different.
The virtual object generating device 910 corrects the depth value of the pixel 1502 of the first cell 1500 based on the distance value of the point 1512 of the second cell 1510. Since the distance values measured by the LiDAR are more accurate, the virtual object generating device 910 corrects the pixel depth values with the point distance values as the reference. For example, if the coordinate value of the pixel 1502 of the first cell 1500 in the grid space and the coordinate value of the point of the second cell 1510 differ, the virtual object generating device 910 corrects (1520) the pixel 1502 of the first cell 1500 according to the coordinate value of the point of the second cell 1510.
Since the resolutions of the image frame and the measurement frame may differ, the positions indicated by the pixels of the first cell 1500 and the points of the second cell 1510 may not map one-to-one. Therefore, using the plurality of points in the second cell 1510, or in the neighboring cells above, below, and beside the second cell 1510, the values lying between the points can be obtained through interpolation or the like, so that a distance value is available at the coordinate corresponding to each pixel. The coordinate values of the pixels of the first cell 1500 can then be corrected using the distance values of the points generated through interpolation.
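A minimal sketch of this interpolation-based correction follows. Inverse-distance weighting over the k nearest LiDAR points is used here as one possible interpolation; the embodiment only requires that LiDAR distances be interpolated to each pixel's coordinate and used to correct the pixel.

import numpy as np

def corrected_depth(pixel_xy, lidar_points, lidar_depths, k=4, eps=1e-6):
    """Interpolate a LiDAR distance value at a pixel's position.

    pixel_xy: (2,) array, the pixel's position in the registered grid plane.
    lidar_points: (N, 2) array of LiDAR point positions in the same plane.
    lidar_depths: (N,) array of LiDAR-measured distance values.
    """
    d = np.linalg.norm(lidar_points - pixel_xy, axis=1)
    nearest = np.argsort(d)[:k]                  # k nearest LiDAR points
    w = 1.0 / (d[nearest] + eps)                 # inverse-distance weights
    return float(np.sum(w * lidar_depths[nearest]) / np.sum(w))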
FIG. 16 is a diagram showing an example of a method of generating a three-dimensional virtual object according to an embodiment of the present invention.
Referring to FIG. 16, the virtual object generating device 910 may input a point cloud 1610 into a three-dimensional modeling algorithm 1600 to generate a three-dimensional virtual object 1620 including surface information. The point cloud may consist of points representing the key locations that define the object, such as its vertices and edges. Various conventional methods of extracting a point cloud from an image frame captured by a depth camera may be applied to the present embodiment. Since point-cloud extraction methods are themselves already widely known, further description thereof is omitted.
The virtual object generating device 910 maps the object extracted from the image frame into the grid space and corrects the distance value (or coordinate value) of each pixel mapped into the grid space by the method of FIG. 15. It then extracts, from the corrected distance values (or coordinate values) of the pixels, the point cloud to be used for generating the three-dimensional virtual object. An example of a point cloud extracted for the object of FIG. 14 is shown in FIG. 17. The virtual object generating device 910 may use an artificial intelligence model based on machine learning or the like as the three-dimensional modeling algorithm 1600. Various conventional algorithms for generating a three-dimensional object from a point cloud may be applied to the present embodiment. Since methods of generating three-dimensional objects from point clouds are themselves already widely known, a detailed description thereof is omitted.
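As one example of such a conventional algorithm, the sketch below reconstructs a surfaced mesh from the corrected points using Open3D's Poisson surface reconstruction; the embodiment does not name a specific library or method, so this choice is an assumption for illustration.

import numpy as np
import open3d as o3d

def mesh_from_point_cloud(points_xyz: np.ndarray):
    """Build a surfaced mesh (the virtual object) from corrected 3D points."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    pcd.estimate_normals()  # surface normals are required for Poisson reconstruction
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)       # 'depth' controls the octree resolution / detail level
    return mesh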
도 17은 본 발명의 실시 예에 따른 3차원 가상 객체 생성을 위한 포인트 클라우드를 추출한 일 예를 도시한 도면이다. 도 17을 참조하면, 의자에 대한 포인트 클라우드(1710)의 추출 예가 도시되어 있다. 포인트 클라우드를 추출하는 종래의 다양한 방법이 본 실시 예에 적용될 수 있다. 다른 실시 예로, 동일한 식별번호가 부여된 복수의 영상프레임, 즉 동일 객체를 촬영한 복수의 영상프레임에서 픽셀의 깊이값을 보정한 후 각각 포인트 클라우드를 추출하여 3차원 가상객체를 생성할 수 있다.17 is a diagram illustrating an example of extracting a point cloud for generating a 3D virtual object according to an embodiment of the present invention. Referring to FIG. 17 , an example of extracting a point cloud 1710 for a chair is shown. Various conventional methods of extracting a point cloud may be applied to this embodiment. As another embodiment, a 3D virtual object may be generated by extracting a point cloud after correcting a depth value of a pixel in a plurality of image frames to which the same identification number is assigned, that is, a plurality of image frames in which the same object is photographed.
도 18은 본 발명의 실시 예에 따른 3차원 가상 객체의 생성 예를 도시한 도면이다. 도 18을 참조하면, 가상객체생성장치는 포인트 클라우드를 이용하여 표면정보를 포함하는 가상객체(1800)를 생성할 수 있다. 18 is a diagram illustrating an example of generating a 3D virtual object according to an embodiment of the present invention. Referring to FIG. 18 , the virtual object generator may create a virtual object 1800 including surface information using a point cloud.
도 19는 본 발명의 실시 예에 다른 가상객체생성장치의 일 예의 구성을 도시한 도면이다.19 is a diagram showing the configuration of an example of a virtual object generating device according to an embodiment of the present invention.
도 19를 참조하면, 가상객체생성장치(910)는 제1 객체추출부(1900), 제2 객체추출부(1910), 제1 격자배치부(1920), 제2 격자배치부(1930), 보정부(1940) 및 객체생성부(1950)를 포함한다. 일 실시 예로, 가상객체생성장치(810)는 메모리, 프로세서, 입출력장치 등을 포함하는 컴퓨팅 장치 또는 서버, 클라우드 시스템 등으로 구현될 수 있으며, 이 경우 각 구성은 소프트웨어 구현되어 메모리에 탑재된 후 프로세서에 의해 수행될 수 있다.Referring to FIG. 19 , the virtual object generating device 910 includes a first object extraction unit 1900, a second object extraction unit 1910, a first grid arrangement unit 1920, a second grid arrangement unit 1930, It includes a correction unit 1940 and an object creation unit 1950. As an example, the virtual object generator 810 may be implemented as a computing device including a memory, processor, input/output device, etc., or as a server, cloud system, etc. can be performed by
제1 객체추출부(1900)는 일정 공간을 깊이카메라로 촬영하여 얻은 영상프레임에서 제1 배경영역과 제1 객체영역을 구분한다. 제2 객체추출부(1910)는 상기 일정 공간을 라이다로 측정하여 얻은 측정프레임에서 제2 배경영역과 제2 객체영역을 구분한다. 배경과 객체의 구분은 인공지능모델을 이용하여 수행할 수 있으며 이에 대한 예가 도 12에 도시되어 있다.The first object extraction unit 1900 distinguishes a first background area and a first object area from an image frame obtained by photographing a certain space with a depth camera. The second object extraction unit 1910 distinguishes a second background area and a second object area from a measurement frame obtained by measuring the predetermined space with LIDAR. Classification of the background and the object can be performed using an artificial intelligence model, and an example thereof is shown in FIG. 12 .
제1 격자배치부(1920)는 기 정의된 크기의 격자를 포함하는 제1 격자공간에 상기 제1 객체영역의 픽셀을 깊이값에 따라 배치한다. 제2 격자배치부(1930)는 기 정의된 크기의 격자를 포함하는 제2 격자공간에 상기 제2 객체영역의 포인트를 거리값에 따라 배치한다. 격자공간의 예가 도 13에 도시되어 있고, 격자공간에 영상프레임으로부터 추출한 객체의 픽셀들을 맵핑한 예가 도 14에 도시되어 있다.The first grid arrangement unit 1920 arranges pixels of the first object area according to depth values in a first grid space including a grid having a predefined size. The second grid arranging unit 1930 arranges points of the second object area according to distance values in a second grid space including a grid having a predefined size. An example of the lattice space is shown in FIG. 13, and an example of mapping the pixels of an object extracted from an image frame to the lattice space is shown in FIG.
보정부(1940)는 제1 격자공간의 픽셀의 깊이값을 상기 제2 격자공간의 포인트의 거리값을 기준으로 보정한다. 격자공간의 비교를 통해 보정하는 방법의 일 예가 도 15에 도시되어 있다.The correction unit 1940 corrects the depth value of the pixel in the first grid space based on the distance value of the point in the second grid space. An example of a method of correcting through grid space comparison is shown in FIG. 15 .
The object generation unit 1950 generates a virtual object having surface information based on the pixels whose depth values have been corrected. The object generation unit 1950 may generate an object in a three-dimensional virtual space using all of the pixels. However, since this increases the amount of computation, the object generation unit 1950 may instead generate a point cloud and create the virtual object from it; examples of this are shown in FIGS. 16 to 18.
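To make the point-cloud route concrete, here is a hedged sketch using the open-source Open3D library as one possible tool (the application does not name any library); the voxel size and Poisson depth are assumed example parameters.

```python
import numpy as np
import open3d as o3d

def build_virtual_object(points, voxel_size=0.02):
    """Thin corrected points into a point cloud and reconstruct a surface mesh."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=float))
    pcd = pcd.voxel_down_sample(voxel_size=voxel_size)       # reduce computation
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)                                        # surface information
    return mesh
```

Downsampling before meshing is what keeps the computation tractable, which is exactly the trade-off the description notes: using every pixel is possible but expensive, so a thinned point cloud is meshed instead.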
Each embodiment of the present invention may also be implemented as computer-readable code on a computer-readable recording medium. A computer-readable recording medium includes any type of recording device in which data readable by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, SSDs, and optical data storage devices. A computer-readable recording medium may also be distributed over network-connected computer systems so that the computer-readable code is stored and executed in a distributed manner.
The present invention has been described above with reference to its preferred embodiments. Those of ordinary skill in the art to which the present invention pertains will understand that the present invention may be implemented in modified forms without departing from its essential characteristics. The disclosed embodiments should therefore be considered in a descriptive sense rather than a restrictive one. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences falling within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (9)

  1. A method for enhancing the safety of industrial equipment, the method comprising: generating a virtual object for at least one object present in a blind spot of the industrial equipment by using at least one photographing device installed in the blind spot; and
    displaying the virtual object in augmented reality or virtual reality.
  2. The method of claim 1, further comprising:
    recognizing a type of the virtual object by using a pre-trained artificial intelligence model; and
    generating an alarm according to the type of the virtual object.
  3. The method of claim 1, wherein the photographing device comprises a depth camera and a lidar, and
    wherein generating the virtual object comprises: correcting depth values of pixels of an area corresponding to the object in an image frame obtained by the depth camera, using distance values of points of a measurement frame obtained by the lidar; and generating the virtual object based on the corrected depth values of the pixels.
  4. The method of claim 1, wherein the displaying comprises:
    indicating, visually or audibly, the presence of a moving object when the moving object is detected, through a measurement frame measured by a lidar, within a first area that is the measurement area of the lidar; and
    displaying the virtual object when the moving object enters a second area of a depth camera whose photographing area is narrower than the first area.
  5. An apparatus for enhancing safety, comprising: a virtual object generation unit configured to generate a virtual object for at least one object present in a blind spot of industrial equipment by using at least one photographing device installed in the blind spot; and
    a display unit configured to display the virtual object in augmented reality or virtual reality.
  6. The apparatus of claim 5, wherein the display unit
    recognizes a type of the virtual object by using a pre-trained artificial intelligence model and generates an alarm according to the type of the virtual object.
  7. The apparatus of claim 5, wherein the photographing device comprises a depth camera and a lidar, and
    wherein the virtual object generation unit corrects depth values of pixels of an area corresponding to the object in an image frame obtained by the depth camera, using distance values of points of a measurement frame obtained by the lidar, and generates the virtual object based on the corrected depth values of the pixels.
  8. The apparatus of claim 5, wherein the display unit
    indicates, visually or audibly, the presence of a moving object when the moving object is detected, through a measurement frame measured by a lidar, within a first area that is the measurement area of the lidar, and displays the virtual object when the moving object enters a second area of a depth camera whose photographing area is narrower than the first area.
  9. A computer-readable recording medium on which a computer program for performing the method of claim 1 is recorded.
PCT/KR2021/017343 2021-11-24 2021-11-24 Safety enhancement method for industrial equipment, and device thereof WO2023095939A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2021/017343 WO2023095939A1 (en) 2021-11-24 2021-11-24 Safety enhancement method for industrial equipment, and device thereof


Publications (1)

Publication Number Publication Date
WO2023095939A1 2023-06-01

Family

ID=86539711

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/017343 WO2023095939A1 (en) 2021-11-24 2021-11-24 Safety enhancement method for industrial equipment, and device thereof

Country Status (1)

Country Link
WO (1) WO2023095939A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160379411A1 (en) * 2015-06-26 2016-12-29 Paccar Inc Augmented reality system for vehicle blind spot prevention
KR101988356B1 (en) * 2018-03-30 2019-09-30 (주)대우건설 Smart field management system through 3d digitization of construction site and analysis of virtual construction image
KR20190103079A (en) * 2019-08-14 2019-09-04 엘지전자 주식회사 Vehicle external information output method using augmented reality and apparatus therefor
KR20210136194A (en) * 2020-05-06 2021-11-17 이지스로직 주식회사 Display device for construction equipment using LiDAR and AR
KR102310602B1 (en) * 2021-06-17 2021-10-14 주식회사 인피닉 Method for correcting difference of multiple sensors, and computer program recorded on record-medium for executing method therefor


Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21965727; Country of ref document: EP; Kind code of ref document: A1)