WO2023095937A1 - Method for identifying errors in a drawing, and apparatus therefor - Google Patents

Method for identifying errors in a drawing, and apparatus therefor Download PDF

Info

Publication number
WO2023095937A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
space
model
overlapping
virtual
Prior art date
Application number
PCT/KR2021/017341
Other languages
English (en)
Korean (ko)
Inventor
심용수
심상우
Original Assignee
심용수
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 심용수 filed Critical 심용수
Priority to PCT/KR2021/017341 priority Critical patent/WO2023095937A1/fr
Publication of WO2023095937A1 publication Critical patent/WO2023095937A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery

Definitions

  • An embodiment of the present invention relates to a method and apparatus for determining an error between a drawing and an actual object made based on the drawing.
  • Digital twin technology, which reproduces real-world objects such as machines, equipment, and other physical things in a virtual world on a computer, is used in various fields such as architecture, energy, aviation, healthcare, automobiles, and national defense. For example, by measuring at a construction or industrial site whether the built result stays within the tolerance range of the BIM (Building Information Modeling) data made from drawings and digital twins, errors occurring in the design phase, the process/construction phase, and the quality phase can be reduced.
  • A technical problem to be solved by an embodiment of the present invention is to provide a method and apparatus for accurately identifying an error between a real object and a drawing by generating the real-world object as a virtual object and comparing it with a 3D model of the drawing.
  • An example of a drawing error detection method for achieving the above technical problem includes: creating a virtual object by photographing and measuring an object in a certain space with a depth camera and LiDAR; overlapping a 3D model of a predefined drawing of the object with the virtual object; and determining an error between the drawing and the virtual object based on the overlapping result.
  • An example of a drawing error detection device for achieving the above technical problem includes: an object generator for generating a virtual object by photographing and measuring an object in a certain space with a depth camera and LiDAR; an overlapping unit for overlapping the virtual object with a 3D model made based on a predefined drawing of the object; and an error detection unit that determines an error between the drawing and the virtual object based on the overlapping result.
  • According to the present invention, after an object in the real world is generated as a virtual object in the virtual world, it is possible to determine whether the object exactly matches the drawing by comparing the virtual object with a 3D model of the drawing.
  • In addition, the problem of a difference arising between the virtual object and the real object due to lens distortion is solved by using LiDAR (Light Detection and Ranging), so that a virtual object that exactly matches the real object, or matches it within tolerance, can be created.
  • FIG. 1 is a diagram showing an example of a drawing error detection device according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing an example of a drawing error detection method according to an embodiment of the present invention.
  • FIG. 3 is a diagram showing an example in which a virtual object and a 3D model of a drawing are overlapped and displayed according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating an example of a drawing error detection method according to an embodiment of the present invention.
  • FIG. 5 is a diagram showing an example of a method of overlapping a plurality of spatial objects with a 3D model of a drawing according to an embodiment of the present invention.
  • FIG. 6 is a diagram showing the configuration of an example of a drawing error detection device according to an embodiment of the present invention.
  • FIG. 7 is a diagram showing an example of a schematic configuration of a system for creating objects in a virtual space according to an embodiment of the present invention.
  • FIG. 8 is a view showing an example of a photographing method of a photographing device according to an embodiment of the present invention.
  • FIG. 9 is a flowchart illustrating an example of a method for generating an object in a virtual space according to an embodiment of the present invention.
  • FIG. 10 is a diagram showing an example of a method for distinguishing a background and an object of a video frame and a measurement frame according to an embodiment of the present invention.
  • FIG. 11 is a diagram showing an example of a lattice space according to an embodiment of the present invention.
  • FIG. 12 is a diagram showing an example of displaying objects classified in a video frame in a grid space according to an embodiment of the present invention.
  • FIG. 13 is a diagram showing an example of a method of correcting a depth value of a pixel of an object according to an embodiment of the present invention.
  • FIG. 14 is a diagram showing an example of a method for generating a 3D virtual object according to an embodiment of the present invention.
  • FIG. 15 is a diagram showing an example of extracting a point cloud for generating a 3D virtual object according to an embodiment of the present invention.
  • FIG. 16 is a diagram showing an example of generating a 3D virtual object according to an embodiment of the present invention.
  • FIG. 17 is a diagram showing the configuration of an example of a virtual object generating device according to an embodiment of the present invention.
  • FIG. 1 is a diagram showing an example of a drawing error detection device according to an embodiment of the present invention.
  • the drawing error detection device 100 compares a virtual object 110 made based on an object in the real world with a 3D model 120 of the drawing for the object to determine whether there is an error.
  • Objects in the real world include all things made according to drawings, such as parts of buildings (e.g., sewer pipes, columns, walls, etc.), various facilities, and various articles (wardrobes, sinks, chairs, shoes, etc.). However, in the following, for convenience of description, a building or an appendage of a building is mainly used as the example object.
  • The 3D model 120 means 3D data of a drawing designed in advance for creating the object; one example is a 3D model made from CAD drawings.
  • Buildings and the like are constructed according to pre-designed drawings. To check whether a building constructed according to a drawing conforms to it, the building is inspected visually and its lengths or thicknesses are measured, which takes a lot of time and may be difficult to do accurately. Accordingly, in this embodiment, various objects made according to drawings, such as buildings or facilities under construction or completed, are first created as virtual objects 110, and a method is presented for determining whether an error 130 exists between the drawing and the real-world object by comparing the virtual object 110 with the 3D model 120 of the drawing.
  • FIG. 2 is a diagram illustrating an example of a drawing error detection method according to an embodiment of the present invention.
  • The drawing error detection apparatus 100 creates a virtual object by photographing and measuring an object through a depth camera and LiDAR. In order to overlap (250) the virtual object with the 3D model of the drawing, the virtual object and the drawing must be in file formats readable by the 3D software (240).
  • the file format of the original data 200 of the virtual object generated by the drawing error detection device 100 may be different from the file format of the drawing.
  • The drawing error detection device 100 converts the original data 200 of the virtual object into a first format readable by the 3D software 240 (e.g., raw data or a "*.obj" format file) to create a virtual object file 220.
  • the 3D software 240 may be various conventional software capable of displaying 3D data such as a CAD program.
  • the 3D software 240 may be implemented as a part of the drawing error detection device 100 .
  • The drawing error detection apparatus 100 may convert the original data 200 of a virtual object into a plurality of different formats, such as a second format in addition to the first format. That is, the drawing error detection device 100 includes a plurality of data format conversion algorithms, and can convert the original data 200 of the virtual object into the same format as the drawing file 230 by applying the appropriate conversion algorithm.
  • the data format conversion process itself is a well-known technology.
  • the drawing error detection device may apply various conventional data format conversion algorithms to this embodiment. If the format of the original data 200 of the virtual object generated by the drawing error detection device 100 is the same as the format of the drawing file 230, the data format conversion process as in the present embodiment may be omitted.
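  • As an illustration only (the patent does not specify its converter's internals), the sketch below writes vertex and face data to a Wavefront OBJ file, the "*.obj" first-format example mentioned above; the function name and data layout are assumptions.

```python
# Illustrative OBJ writer for the format-conversion step (names assumed).
def write_obj(path, vertices, faces):
    """vertices: iterable of (x, y, z); faces: iterable of 1-based
    vertex-index triples, per the Wavefront OBJ convention."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")          # one vertex per line
        for a, b, c in faces:
            f.write(f"f {a} {b} {c}\n")          # one triangular face per line

# e.g., a single triangle:
write_obj("virtual_object.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(1, 2, 3)])
```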
  • FIG. 3 is a diagram illustrating an example in which a virtual object and a 3D model of a drawing are overlapped and displayed according to an embodiment of the present invention.
  • The drawing error detection device 100 overlaps (320) a 3D model 300 made based on the drawing with a virtual object 310 created by photographing a real object with a depth camera and LiDAR, and checks for errors. A detailed method of generating a virtual object using a depth camera and LiDAR will be reviewed with reference to FIGS. 7 to 16.
  • At least one object existing in real space is made according to a pre-designed drawing.
  • For example, water supply and sewage pipes or the wall structures of buildings are made according to construction drawings.
  • the drawing error detection device creates a virtual object 310 by photographing an object made based on a drawing using a photographing device.
  • The drawing error detection device 100 overlaps (320) the object 302 existing in the 3D model 300 of the drawing with the virtual object 310 to determine the portion 330 where an error exists, and can display the error portion 330 in a way users can identify, such as with colors or symbols.
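  • A minimal sketch of this marking step, assuming Open3D is available and that the virtual object and the model have already been aligned and sampled as point clouds; the 5 mm tolerance and the function name are illustrative choices, not values from the patent.

```python
# Hedged sketch: color out-of-tolerance points so the error portion 330
# stands out when the overlapped result is displayed.
import numpy as np
import open3d as o3d

def paint_errors(scan_pcd, model_pcd, tolerance=0.005):
    """scan_pcd: aligned virtual-object point cloud; model_pcd: points
    sampled from the drawing's 3D model; tolerance in meters (assumed)."""
    dist = np.asarray(scan_pcd.compute_point_cloud_distance(model_pcd))
    colors = np.where((dist > tolerance)[:, None],
                      [1.0, 0.0, 0.0],          # red: deviates from drawing
                      [0.6, 0.6, 0.6])          # gray: within tolerance
    scan_pcd.colors = o3d.utility.Vector3dVector(colors)
    return scan_pcd
```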
  • If the virtual object 310 and the 3D model 300 use the same coordinate system, it suffices to display them as they are in the first coordinate system (or the second coordinate system). However, if the first coordinate system and the second coordinate system differ (e.g., in origin or in the value range of each axis), it is difficult to overlap the virtual object 310 and the 3D model 300 as they are.
  • In this case, the drawing error detection device 100 can overlap the virtual object 310 with the corresponding object of the 3D model by transforming its size, direction, and the like based on feature points (e.g., corners, vertices, etc.). Various conventional methods of extracting feature points from an image may be applied to this embodiment; one conventional technique is sketched below.
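  • One conventional way to compute such a size-and-direction adjustment from matched feature points is the Umeyama similarity-transform estimate; the pure-NumPy sketch below assumes the point correspondences have already been found and is not the device's prescribed method.

```python
# Umeyama-style similarity transform from matched feature points (sketch).
import numpy as np

def similarity_transform(src, dst):
    """src, dst: (N, 3) arrays of matched feature points (e.g., corners,
    vertices). Returns scale s, rotation R, translation t such that
    dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)             # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```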
  • When a plurality of objects exist in the 3D model 300, as in a building, it may be difficult to determine which of the plurality of objects the virtual object 310 should overlap with.
  • the drawing error detection device 100 may use an artificial intelligence model trained to recognize the same object.
  • For example, the drawing error detection device 100 identifies the pipe object 302 in the 3D model 300 using an artificial intelligence model trained to identify pipe objects, and may overlap the identified pipe object 302 of the 3D model 300 with the virtual object 310, transforming the size or direction as needed.
  • various artificial intelligence models for recognizing each object may exist. Since learning and generation of artificial intelligence models for object identification are already widely known technologies, additional explanations thereof will be omitted.
  • FIG. 4 is a flowchart illustrating an example of a drawing error detection method according to an embodiment of the present invention.
  • The drawing error detection apparatus 100 creates a virtual object by photographing and measuring a real object in a real space through a depth camera and LiDAR (S400). When a virtual object is created using only a depth camera, the virtual object is distorted due to lens distortion and the like. Accordingly, in this embodiment, the depth values of the pixels of the image frame are corrected using the distance values of the corresponding points of the measurement frame obtained with LiDAR, and the virtual object is then created based on the corrected pixel depth values. Creation of virtual objects will be reviewed again below with reference to FIG. 7.
  • the drawing error detection apparatus 100 overlaps the virtual object and the 3D model of the drawing (S410) to determine the error between the drawing and the virtual object (S420). Since the virtual object is created by photographing an object made in real space based on the drawing, it is possible to determine whether the real object matches the drawing by comparing the virtual object with the 3D model of the drawing. In order to accurately determine the error with the drawing, it is important that the virtual object and the real object are created without error. To this end, the present embodiment creates a virtual object using lidar together with a depth camera as described above.
  • FIG. 5 is a diagram illustrating an example of a method of overlapping a plurality of spatial objects with a 3D model of a drawing according to an embodiment of the present invention.
  • a building or the like is divided into a plurality of spaces 500, and objects of the same type or different types may exist in each space.
  • objects such as gas pipes or water and sewage pipes may exist in various spaces of a building.
  • a virtual object for one type of object may be created by capturing a real space, or a virtual object for a plurality of objects may be created by distinguishing a plurality of objects from a single photographed image.
  • a case of recognizing an object of one type in a plurality of spaces will be mainly described.
  • The user may designate or input to the drawing error detection device 100 the region of the 3D model of the drawing to be compared with the virtual object. For example, if a virtual object is created by photographing a first space, the user inputs to the drawing error detection device 100 space identification information indicating that the virtual object was created in the first space, or selects the area of the first space in the 3D model.
  • the drawing error detecting apparatus 100 may determine an area of error by identifying an area corresponding to the first space in the 3D model of the drawing and then overlapping and comparing the corresponding area of the 3D model with the virtual object in the first space.
  • The drawing error detection device 100 may create virtual objects for each of a plurality of spaces by photographing the spaces, such as those of a building, with a photographing device. Even in this case, as in the above example, the user may directly input to the drawing error detection device 100 which space each virtual object corresponds to among the plurality of spaces. However, if there are tens or hundreds of virtual objects, it may be difficult for the user to input them one by one. To solve this problem, the drawing error detection device 100 may store information about the photographed space together with the data photographed by the photographing device. For example, the photographing device may photograph after recognizing a tag 510 (e.g., barcode, QR code, RFID, etc.) located on one side of each space.
  • The photographing device may store the identification information of the tag 510 together with the photographing data, or may provide them to the drawing error detection device 100. As another embodiment, the photographing device may store identification information for each space input directly by the user together with the photographing data of each space, or may transmit it to the drawing error detection device 100.
  • the drawing error detection apparatus 100 may generate virtual objects 520 for each space, map them to identification information of each space, and store them.
  • The drawing error detection device 100 classifies the corresponding areas of the 3D model of the drawing based on the identification information of each space, and identifies errors by overlapping the virtual object 520 mapped to the identification information of each space with the corresponding area of the 3D model.
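  • The bookkeeping described above might be sketched as follows; SpaceScan, match_scans_to_model, and the dictionary of model regions are hypothetical names, not the patent's API.

```python
# Hypothetical bookkeeping for space identification (names assumed).
from dataclasses import dataclass

@dataclass
class SpaceScan:
    space_id: str           # e.g., decoded from the tag 510 (QR code, etc.)
    virtual_object: object  # virtual object 520 built from that space's frames

def match_scans_to_model(scans, model_regions):
    """model_regions: dict mapping space_id -> area of the drawing's 3D
    model. Yields (virtual_object, model_region) pairs ready to overlap."""
    for scan in scans:
        region = model_regions.get(scan.space_id)
        if region is None:
            print(f"no model region for space {scan.space_id!r}; skipping")
            continue
        yield scan.virtual_object, region
```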
  • FIG. 6 is a diagram showing the configuration of an example of a drawing error detection device according to an embodiment of the present invention.
  • the drawing error detection device 100 includes an object generator 600 , an overlapping unit 610 and an error detection unit 620 .
  • the drawing error detection device 100 may be implemented as a computing device including a memory, a processor, an input/output device, and the like.
  • each component of the present embodiment may be implemented as software, loaded into a memory, and then executed by a processor.
  • the object generator 600 creates a virtual object by photographing and measuring a real object in a certain space with a depth camera and lidar. A method of creating a virtual object is shown below in FIG. 7 .
  • the overlapping unit 610 overlaps a virtual object with a 3D model made based on a predefined drawing of the object.
  • An example of overlap is shown in FIG. 3 .
  • an example of a method of overlapping a plurality of virtual objects existing in a plurality of spaces with a corresponding region of the 3D model of the drawing is shown in FIG. 5 .
  • the error detection unit 620 determines an error between the drawing and the virtual object based on the overlapping result. Through comparison of the drawing and the virtual object, it is possible to determine whether the object created based on the drawing has errors with the design of the drawing.
  • FIG. 7 is a diagram showing an example of a schematic configuration of a system for creating objects in a virtual space according to an embodiment of the present invention.
  • a photographing device 700 includes a depth camera 702 and a LIDAR 704 .
  • the depth camera 702 is a camera that captures a certain space in which the real object 730 exists and provides a depth value of each pixel together.
  • the depth camera 702 itself is a well-known technology, and various types of conventional depth cameras 702 may be used in this embodiment.
  • the depth camera 702 may capture a still image or a video.
  • Photo data including a depth value of each pixel obtained by photographing with the depth camera 702 is called an image frame. That is, a video is composed of a certain number of video frames per second.
  • the LIDAR 704 emits a laser into a certain space, measures a signal returned from each point in the space (ie, a reflection point), and outputs distance values for a plurality of points in the certain space.
  • Data consisting of distance values for a plurality of points measured by the LIDAR 704 in a certain space at a certain point in time is called a measurement frame.
  • the resolution of a plurality of points measured by the lidar 704, that is, the measurement frame, may be different depending on the lidar 704.
  • LiDAR 704 itself is already widely known technology, and various conventional lidar 704 may be used in this embodiment.
  • the photographing device 700 simultaneously drives the depth camera 702 and the LIDAR 704 to photograph and measure a real object 730 in a certain space.
  • the expression 'photographing' of the photographing device 700 or 'measurement' of the photographing device may be interpreted as photographing by the depth camera 702 and measuring by the LIDAR 704 simultaneously.
  • the number of video frames generated per second by the depth camera 702 and the number of measurement frames generated per second by the LIDAR 704 may be the same or different depending on embodiments.
  • the resolution of the video frame and the resolution of the measurement frame may be the same or different depending on the embodiment.
  • the depth camera 702 and the lidar 704 are simultaneously driven to generate an image frame and a measurement frame for a certain space, they can be synchronized by being mapped to the same time axis.
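  • A minimal sketch of such synchronization, assuming both devices attach timestamps on a shared time axis; the 50 ms skew bound is an illustrative choice.

```python
# Pair each video frame with its nearest measurement frame in time (sketch).
import numpy as np

def pair_frames(video_ts, lidar_ts, max_skew=0.05):
    """video_ts, lidar_ts: arrays of frame timestamps in seconds on the
    shared time axis. Returns (video_idx, lidar_idx) pairs within max_skew."""
    lidar_ts = np.asarray(lidar_ts)
    pairs = []
    for i, t in enumerate(video_ts):
        j = int(np.argmin(np.abs(lidar_ts - t)))  # nearest measurement frame
        if abs(lidar_ts[j] - t) <= max_skew:
            pairs.append((i, j))
    return pairs
```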
  • The virtual object generating device 710 uses the image frames obtained by the depth camera 702 and the measurement frames obtained by the LiDAR 704 together to create an object in virtual space (that is, a digital twin) for the object 730 in the real world. FIGS. 1 to 6 have been described on the assumption that the virtual object generating device 710 is implemented as a part of the drawing error detection device 100, but the virtual object generating device 710 may also be implemented as a device separate from the drawing error detection device 100.
  • the virtual object generating device 710 may be connected to the photographing device 700 through a wired or wireless communication network (eg, WebRTC, etc.) to receive the image frame and the measurement frame generated by the photographing device 700 in real time.
  • As another embodiment, the image frames and measurement frames generated by the photographing device 700 over a certain period of time may be stored in a storage medium (for example, a Universal Serial Bus (USB) memory) and transferred to the virtual object generating device 710, or may be received from the photographing device 700 through a wired or wireless communication network (for example, a local area network) after photographing is completed. The generated virtual object may be provided to the user terminal 720 so that the user can check it.
  • the photographing device 700 and the virtual object generating device 710 may be implemented as one device.
  • The photographing device 700 and the virtual object generating device 710 may be implemented as part of various devices that display augmented reality or virtual reality, such as AR (augmented reality) glasses, an HMD (Head Mounted Display), or a wearable device.
  • For example, when the photographing device 700 is implemented as part of AR glasses, an HMD, or a wearable device, the photographing device 700 may transmit the image frames and measurement frames photographed and measured in real time to the virtual object generating device 710 through a wired or wireless communication network, and the AR glasses, HMD, or the like may receive the virtual object from the virtual object generating device 710 and display it in augmented reality or virtual reality.
  • the user can immediately check the virtual object created in real time through augmented reality or virtual reality.
  • A detailed method of generating an object in a virtual space will be reviewed below with reference to FIG. 9 and the figures that follow.
  • FIG. 8 is a diagram illustrating an example of a photographing method of a photographing device according to an embodiment of the present invention.
  • the photographing device 700 may continuously photograph one object 800 or continuously photograph a plurality of objects 800 and 810 .
  • the photographing device 700 may continuously photograph objects of the same type or different types located in various spaces. That is, at least one object 800 or 810 desired by the user may exist at various points in time on the basis of the time axis in the image frame and the measurement frame acquired by the photographing device 700 .
  • This embodiment shows an example of an image frame for convenience of description.
  • the user may photograph the sewer pipe in space A of the building using the photographing device 700 and move to space B to photograph the sewer pipe.
  • the photographing device may continue to be maintained in a photographing state or may be turned off during movement.
  • In this case, the same type of object, namely the sewer pipe, exists in the video frames and measurement frames captured in space A and space B.
  • a plurality of objects 800 and 810 may be captured together in an image frame and a measurement frame captured in each space.
  • Object a and object b may exist in the video frame and measurement frame captured in space A, and object a and object c may exist in the video frame and measurement frame captured in space B.
  • the virtual object generating device 710 may classify the video frame and the measurement frame in units of objects.
  • the virtual object generating apparatus 710 may assign the same identification information (or index) to the video frame and the measurement frame in which the same object exists in the video frame and the measurement frame.
  • For example, first identification information (or a first index; hereinafter referred to as identification information) is assigned to all of the plurality of image frames 820, 822, and 824 in which the first object 800 exists, and second identification information may be assigned to the plurality of image frames 830, 832, and 834 in which the second object 810 exists. No identification information may be assigned to the image frames 840 and 842 in which no object exists, or third identification information may be assigned to them.
  • the image frames arranged along the time axis can be divided into three groups, that is, A (850), B (860), and C (870).
  • identification information corresponding to each object may be assigned to one video frame and the measurement frame. That is, a plurality of pieces of identification information may be assigned to one image frame and one measurement frame.
  • Since the photographing device 700 simultaneously drives the depth camera 702 and the LiDAR 704, the video frames generated by the depth camera 702 and the measurement frames measured by the LiDAR can be synchronized in time. Therefore, the virtual object generating device 710 may determine whether the same object exists only in the video frames, determine the time period of the video frames in which the same object exists, assign the same identification information to the video frames in that time period, and assign the same identification information to the measurement frames generated during that time period by regarding it as a period in which the same object exists.
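  • The identification-information scheme could be sketched as follows; detect() stands in for an object recognizer (not specified here) and the frame layout is assumed.

```python
# Group frames by detected object and propagate ids to measurement frames.
def assign_identification_info(video_frames, lidar_frames, detect):
    """Frames are dicts with a 'time' key; detect(frame) returns an object
    label or None (the recognizer itself is assumed). Adds 'id' in place."""
    spans = {}
    for f in video_frames:
        f["id"] = detect(f)                       # e.g., 'first', 'second'
        if f["id"] is not None:
            lo, hi = spans.get(f["id"], (f["time"], f["time"]))
            spans[f["id"]] = (min(lo, f["time"]), max(hi, f["time"]))
    # A measurement frame inherits the id of the time period it falls in,
    # since the depth camera and LiDAR are driven simultaneously.
    for m in lidar_frames:
        m["id"] = next((k for k, (lo, hi) in spans.items()
                        if lo <= m["time"] <= hi), None)
```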
  • the virtual object generating device 710 may use various conventional image recognition algorithms to determine whether objects existing in image frames are the same.
  • the virtual object generating device 710 may use an artificial intelligence model as an example of an image recognition algorithm. Since the method itself for determining whether objects in an image are the same is a well-known technique, a detailed description thereof will be omitted.
  • FIG. 9 is a flowchart illustrating an example of a method of generating a virtual space object according to an embodiment of the present invention.
  • the virtual object generating apparatus 710 divides a first background area and a first object area from an image frame obtained by photographing a certain space with a depth camera (S900).
  • the virtual object generating apparatus 710 may distinguish a background and an object from a plurality of image frames in which the same object is photographed. For example, as shown in FIG. 8 , at least one or more image frames (eg, group A of 820, 822, and 824) in which the same object exists may be distinguished from a background, respectively.
  • the virtual object generating apparatus 710 may distinguish the background and the plurality of objects, respectively. An example of a method of distinguishing a background and an object in an image frame will be reviewed again in FIG. 10 .
  • the virtual object generating device 710 distinguishes a second background area and a second object area from a measurement frame obtained by measuring a certain space with lidar (S910).
  • the virtual object generating device 710 may distinguish a background and an object for a plurality of measurement frames in which the same object is measured. For example, as shown in FIG. 8 , a background and an object may be distinguished for a plurality of measurement frames to which the same identification information is assigned.
  • the virtual object generating apparatus 710 may distinguish the background and the plurality of objects, respectively.
  • An example of a method for distinguishing a background and an object in a measurement frame is shown in FIG. 10 .
  • the depth camera 702 and the LIDAR 704 are spaced apart from each other by a predetermined distance within the photographing device 700, and therefore, the photographing angles of the image frame and the measurement frame are different from each other.
  • The positions of the pixels of the video frame and the points of the measurement frame may not map to each other on a one-to-one (1:1) basis.
  • To solve this problem, this embodiment uses a grid space.
  • The virtual object generating apparatus 710 arranges the pixels of the first object area, divided from the image frame, in a first grid space consisting of grids of a predefined size according to their depth values (S920), and likewise arranges the points of the second object area, divided from the measurement frame, in a second grid space consisting of grids of the same size according to their distance values (S920). Since the image frame and the measurement frame are data obtained by photographing the same space, the objects existing in the first object area and the second object area are the same object.
  • the first lattice space and the second lattice space are spaces having grids of the same size in the virtual space. An example of a grid space is shown in FIG. 11 .
  • The virtual object generator 710 corrects the depth value of each pixel in the first grid space based on the distance values of the points in the second grid space (S930). Since the pixels of the first object area and the points of the second object area exist in grid spaces of the same size, if the position, direction, size, etc. of the first grid space and the second grid space are matched to each other, each point of the second grid space can be mapped to a pixel of the first grid space.
  • a specific method of correcting a depth value of a pixel in a grid unit of a grid space using a distance value of a point will be reviewed again with reference to FIG. 13 .
  • the virtual object generating device 710 creates an object (ie, a virtual object) in a virtual space having surface information based on the pixel whose depth value is corrected (S940).
  • The virtual object generating device 710 corrects the pixel depth values of an object existing in a plurality of image frames assigned the same identification information using the point distance values of the object existing in the plurality of measurement frames assigned the same identification information, and can then create a virtual object using the corrected pixels of the plurality of image frames. That is, a virtual object may be created by correcting the pixel depth values of image frames of an object photographed from various angles and positions.
  • the virtual object generating device 710 may generate a 3D virtual object using various types of 3D modeling algorithms.
  • The virtual object generating apparatus 710 may use an artificial intelligence model to generate a 3D object having surface information from pixels having depth values.
  • As another example, the virtual object generator 710 may extract a point cloud representing corners and vertices from among the pixels constituting the object, and create a virtual object by inputting the point cloud to a 3D modeling algorithm. An example of generating a virtual object using a point cloud will be reviewed again in FIG. 14.
  • A virtual object could also be created based on the distance values of the points of the measurement frame, but in general the resolution of the measurement frame is lower than that of the video frame. Therefore, in this embodiment, the virtual object is created using the corrected depth values of the pixels of the image frame, which has the relatively higher resolution.
  • FIG. 10 is a diagram illustrating an example of a method for distinguishing a background and an object of a video frame and a measurement frame according to an embodiment of the present invention.
  • Referring to FIG. 10, a first artificial intelligence model 1000 distinguishes the background of an image frame from an object, and a second artificial intelligence model 1010 distinguishes the background of a measurement frame from an object.
  • Each of the artificial intelligence models 1000 and 1010 is a trained model using pre-constructed learning data and may be implemented as a Convolutional Neural Network (CNN) or the like. Since the process of learning and generating an artificial intelligence model itself is already a well-known technology, a description thereof will be omitted.
  • The first artificial intelligence model 1000 is a model generated through machine learning to distinguish a background and an object in an image frame when an image frame is input. For example, if the first artificial intelligence model 1000 is trained to recognize a chair, it can distinguish the region in which a chair exists in the image frame (i.e., the pixels of the chair in the image frame).
  • The second artificial intelligence model 1010 is a model generated through machine learning to distinguish a background and an object in a measurement frame when a measurement frame is input. For example, if it is likewise trained to recognize a chair, the second artificial intelligence model 1010 can distinguish the region where a chair exists in the measurement frame (i.e., the points corresponding to the chair within the measurement frame).
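  • The patent's models 1000 and 1010 are trained on purpose-built learning data; purely as a stand-in for the image-frame side, the sketch below uses an off-the-shelf Mask R-CNN from torchvision to separate object pixels from the background.

```python
# Stand-in segmentation with a pretrained Mask R-CNN (not the patent's model).
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def object_pixels(image_chw, score_thresh=0.7):
    """image_chw: float tensor (3, H, W) scaled to [0, 1]. Returns a boolean
    (H, W) mask of the most confident detected object, or None."""
    with torch.no_grad():
        out = model([image_chw])[0]
    if len(out["scores"]) == 0 or out["scores"].max() < score_thresh:
        return None
    best = out["scores"].argmax()
    return out["masks"][best, 0] > 0.5            # soft mask -> object pixels
```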
  • FIG. 11 is a diagram showing an example of a lattice space according to an embodiment of the present invention.
  • the grid space 1100 is a space in which a region in the virtual space is divided into unit grids 1110 of a certain size.
  • the grid space 1100 may be a space composed of unit cells 1110 having width, length, and height of d1, d2, and d3, respectively.
  • d1, d2, and d3 may all have the same size (eg, 1 mm) or different sizes.
  • FIG. 12 is a diagram showing an example of displaying objects classified in a video frame in a grid space according to an embodiment of the present invention.
  • the virtual object generating apparatus 710 may display pixels 1210 of an object area divided in an image frame in a grid space 1200 using the depth values.
  • pixels are schematically illustrated for ease of understanding.
  • the virtual object generating apparatus 710 may determine the 3D coordinate value (or vector value of the pixel, etc.) of each pixel in the grid space by mapping the pixels of the object in the image frame to the grid space 1200 . That is, a 3D coordinate value of each pixel of the object may be generated using a predefined point in the lattice space as a reference point (0,0,0 or X,Y,Z).
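  • A sketch of this mapping, assuming pinhole camera intrinsics fx, fy, cx, cy (not given in the patent) and the 1 mm unit cell mentioned above: each object pixel is back-projected to a 3D point and assigned to the grid cell it falls in.

```python
# Back-project object pixels into the grid space (intrinsics are assumed).
import numpy as np

def pixels_to_grid(depth, mask, fx, fy, cx, cy, cell=0.001):
    """depth: (H, W) depth values in meters; mask: (H, W) boolean object
    pixels; cell: unit-cell edge length (1 mm, as in the example above).
    Returns (N, 3) integer cell indices and (N, 3) 3D points."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx                         # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)             # 3D coordinate per pixel
    cells = np.floor(pts / cell).astype(int)      # unit cell each point is in
    return cells, pts
```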
  • the virtual object generating apparatus 710 may map points representing objects in the measurement frame to the grid space. If the point of the object in the measurement frame is displayed in the grid space, it can also be displayed in a shape similar to that of FIG. 12.
  • FIG. 13 is a diagram illustrating an example of a method of correcting a depth value of a pixel of an object according to an embodiment of the present invention.
  • In FIG. 13, a first grid 1300 in the first grid space, to which the object of the image frame is mapped, and a second grid 1310 at the corresponding position in the second grid space, to which the object of the measurement frame is mapped, are shown.
  • It is assumed that the first lattice space, to which the pixels of the object in the image frame are mapped, and the second lattice space, to which the points of the object in the measurement frame are mapped, have first been matched in position, direction, size, and the like.
  • For convenience of description, this embodiment shows an example in which one pixel 1302 and one point 1312 are present in the first grid 1300 and the second grid 1310, respectively, but each grid 1300, 1310 may contain a plurality of pixels or points. Also, the numbers of pixels and points present in the first grid 1300 and the second grid 1310 may be the same or different.
  • The virtual object generator 710 corrects the depth value of the pixel 1302 of the first grid 1300 based on the distance value of the point 1312 of the second grid 1310. Since the distance value of a point measured by LiDAR is more accurate, the virtual object generator 710 corrects the depth value of the pixel based on the distance value of the point. For example, if the coordinate value of the pixel 1302 of the first grid 1300 differs from the coordinate value of the point of the second grid 1310, the virtual object generator 710 corrects (1320) the pixel 1302 of the first grid 1300 according to the coordinate value of the point of the second grid 1310.
  • The positions indicated by the pixels of the first grid 1300 and the points of the second grid 1310 may not map one-to-one. Therefore, by using the plurality of points existing in the second grid 1310, or in the surrounding grids above, below, to the left, and to the right of the second grid 1310, the values lying between the points can be interpolated, and the distance value at the coordinate corresponding to the coordinate value of a pixel can thereby be determined. The coordinate values of the pixels of the first grid 1300 may then be corrected using the distance values generated through this interpolation.
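  • A sketch of this interpolation-based correction using SciPy's griddata; it assumes the two grid spaces are already aligned so pixels and LiDAR points share x-y coordinates, which is a simplification of the grid-by-grid comparison above.

```python
# Interpolate LiDAR distances at pixel locations and correct depth (sketch).
import numpy as np
from scipy.interpolate import griddata

def correct_depth(pixel_pts, lidar_pts):
    """pixel_pts: (N, 3) points from image-frame pixels; lidar_pts: (M, 3)
    points from the measurement frame, both in the aligned grid spaces.
    Returns pixel points with z replaced where interpolation is defined."""
    z_fit = griddata(lidar_pts[:, :2], lidar_pts[:, 2],
                     pixel_pts[:, :2], method="linear")
    out = pixel_pts.copy()
    ok = ~np.isnan(z_fit)               # inside the LiDAR points' convex hull
    out[ok, 2] = z_fit[ok]              # corrected depth values
    return out
```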
  • FIG. 14 is a diagram illustrating an example of a method of generating a 3D virtual object according to an embodiment of the present invention.
  • the virtual object generating apparatus 710 may generate a 3D virtual object 1420 including surface information by inputting a point cloud 1410 to a 3D modeling algorithm 1400 .
  • a point cloud can be composed of points representing key points that can define an object, such as vertices and edges of an object.
  • Various conventional methods of extracting a point cloud from an image frame captured by a depth camera may be applied to this embodiment. Since the method of extracting a point cloud itself is a well-known technique, an additional description thereof will be omitted.
  • The virtual object generating apparatus 710 maps the object extracted from the image frame to the lattice space and corrects the distance value (or coordinate value) of each pixel mapped to the lattice space in the manner shown in FIG. 13. Then, a point cloud to be used for generating the 3D virtual object is extracted from the corrected distance values (or coordinate values) of the pixels. An example of extracting a point cloud for the object of FIG. 12 is shown in FIG. 15.
  • the virtual object generating device 710 may use an artificial intelligence model such as machine learning as a 3D modeling algorithm 1400 .
  • Various conventional algorithms for generating a 3D object based on a point cloud may be applied to this embodiment. Since the method of generating a 3D object using a point cloud itself is a well-known technology, a detailed description thereof will be omitted.
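  • As one concrete (but freely substitutable) choice of 3D modeling algorithm, the sketch below runs Open3D's Poisson surface reconstruction over a point cloud to obtain a mesh with surface information.

```python
# Point cloud -> mesh via Poisson reconstruction (one possible algorithm).
import open3d as o3d

def mesh_from_point_cloud(points_xyz):
    """points_xyz: (N, 3) array of corrected point-cloud coordinates."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_xyz))
    pcd.estimate_normals()              # Poisson requires oriented normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)                   # depth controls surface detail
    return mesh
```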
  • FIG. 15 is a diagram illustrating an example of extracting a point cloud for generating a 3D virtual object according to an embodiment of the present invention.
  • In FIG. 15, an example of extracting a point cloud 1510 for a chair is shown.
  • Various conventional methods of extracting a point cloud may be applied to this embodiment.
  • a 3D virtual object may be generated by extracting a point cloud after correcting a depth value of a pixel in a plurality of image frames to which the same identification number is assigned, that is, a plurality of image frames in which the same object is photographed.
  • As shown in FIG. 16, the virtual object generator may create a virtual object 1600 including surface information using the point cloud.
  • FIG. 17 is a diagram showing the configuration of an example of a virtual object generating device according to an embodiment of the present invention.
  • the virtual object generating device 710 includes a first object extraction unit 1700, a second object extraction unit 1710, a first grid arrangement unit 1720, a second grid arrangement unit 1730, A correction unit 1740 and an object creation unit 1750 are included.
  • The virtual object generator 710 may be implemented as a computing device including a memory, a processor, an input/output device, and the like, or as a server, a cloud system, etc. In this case, each component is implemented as software and loaded into the memory, and then executed by the processor.
  • the first object extraction unit 1700 distinguishes a first background area and a first object area from an image frame obtained by photographing a certain space with a depth camera.
  • the second object extraction unit 1710 distinguishes a second background area and a second object area from a measurement frame obtained by measuring the predetermined space with LIDAR. Classification of the background and the object can be performed using an artificial intelligence model, and an example thereof is shown in FIG. 10 .
  • the first grid arranging unit 1720 arranges pixels of the first object area according to depth values in a first grid space including a grid having a predefined size.
  • the second grid arranging unit 1730 arranges points of the second object area according to distance values in a second grid space including a grid having a predefined size.
  • An example of a grid space is shown in FIG. 11, and an example of mapping pixels of an object extracted from an image frame to the grid space is shown in FIG. 12.
  • the correction unit 1740 corrects the depth value of the pixel in the first grid space based on the distance value of the point in the second grid space.
  • An example of a method of correcting through grid space comparison is shown in FIG. 13 .
  • the object generator 1750 creates a virtual object having surface information based on pixels whose depth values are corrected.
  • the object generator 1750 may create an object in a 3D virtual space using all pixels. However, in this case, since the amount of computation increases, the object generator 1750 may create a virtual object by creating a point cloud, examples of which are shown in FIGS. 14 to 16 .
  • Each embodiment of the present invention can also be implemented as computer readable codes on a computer readable recording medium.
  • a computer-readable recording medium includes all types of recording devices in which data that can be read by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, SSD, and optical data storage devices.
  • the computer-readable recording medium may be distributed to computer systems connected through a network to store and execute computer-readable codes in a distributed manner.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are a method for identifying an error in a drawing and an apparatus therefor. An apparatus for identifying an error in a drawing creates a virtual object by photographing and measuring an object in a certain space using both a depth camera and a LiDAR, overlaps the virtual object with a three-dimensional model of a predefined drawing of the object, and identifies an error between the drawing and the virtual object on the basis of the overlapping result.
PCT/KR2021/017341 2021-11-24 2021-11-24 Method for identifying errors in a drawing, and apparatus therefor WO2023095937A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2021/017341 WO2023095937A1 (fr) 2021-11-24 2021-11-24 Method for identifying errors in a drawing, and apparatus therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2021/017341 WO2023095937A1 (fr) 2021-11-24 2021-11-24 Method for identifying errors in a drawing, and apparatus therefor

Publications (1)

Publication Number Publication Date
WO2023095937A1 (fr)

Family

ID=86539800

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/017341 WO2023095937A1 (fr) Method for identifying errors in a drawing, and apparatus therefor

Country Status (1)

Country Link
WO (1) WO2023095937A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140081729A (ko) * 2012-12-21 2014-07-01 Dassault Systemes Delmia Corp. Location correction of virtual objects
US20160249039A1 (en) * 2015-02-24 2016-08-25 HypeVR Lidar stereo fusion live action 3d model video reconstruction for six degrees of freedom 360° volumetric virtual reality video

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140081729A (ko) * 2012-12-21 2014-07-01 Dassault Systemes Delmia Corp. Location correction of virtual objects
US20160249039A1 (en) * 2015-02-24 2016-08-25 HypeVR Lidar stereo fusion live action 3d model video reconstruction for six degrees of freedom 360° volumetric virtual reality video

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAN, JUNSU; LEE, MI YOUNG; BAIK, SUG WOOK: "A Study for Calibration Method about Photographing Information by Using a Virtual Camera", THE JOURNAL OF KOREAN INSTITUTE OF NEXT GENERATION COMPUTING, KR, vol. 10, no. 2, 24 April 2014 (2014-04-24), KR, pages 75 - 83, XP009546698, ISSN: 1975-681X *
PARK KYEONG-BEOM; CHOI SUNG HO; LEE JAE YEOL; GHASEMI YALDA; MOHAMMED MUSTAFA; JEONG HEEJIN: "Hands-Free Human–Robot Interaction Using Multimodal Gestures and Deep Learning in Wearable Mixed Reality", IEEE ACCESS, IEEE, USA, vol. 9, 6 April 2021 (2021-04-06), USA , pages 55448 - 55464, XP011849742, DOI: 10.1109/ACCESS.2021.3071364 *
SHAO CHONG; ISLAM BASHIMA; NIRJON SHAHRIAR: "MARBLE: Mobile Augmented Reality Using a Distributed BLE Beacon Infrastructure", 2018 IEEE/ACM THIRD INTERNATIONAL CONFERENCE ON INTERNET-OF-THINGS DESIGN AND IMPLEMENTATION (IOTDI), IEEE, 17 April 2018 (2018-04-17), pages 60 - 71, XP033350838, DOI: 10.1109/IoTDI.2018.00016 *

Similar Documents

Publication Publication Date Title
WO2019107614A1 Method and system for machine-vision-based quality inspection using deep learning in a manufacturing process
JP6011102B2 Object pose estimation method
WO2016053067A1 Three-dimensional model generation using edges
WO2022121283A1 Vehicle key point information detection and vehicle control
WO2015182904A1 Apparatus for surveying a region of interest and method for detecting an object of interest
JP2007129709A Method for calibrating an imaging device, method for calibrating an imaging system including an array of imaging devices, and imaging system
KR101553273B1 Method and apparatus for providing an augmented reality service
JPWO2020179065A1 Image processing device, image processing method, and program
US20170076428A1 Information processing apparatus
WO2022039330A1 OCR-based document analysis system and method using a virtual cell
WO2019132566A1 Method for automatically generating a multi-depth image
CN115205128A Structured-light-based depth camera temperature drift correction method, system, device, and medium
WO2016006786A1 Rendering system and rendering method thereof
CN113281780B Method, apparatus, and electronic device for annotating image data
CN113763419B Target tracking method, device, and computer-readable storage medium
JP6399362B2 Wiring work support system
WO2023095937A1 Method for identifying errors in a drawing, and apparatus therefor
WO2011078430A1 Sequential search method for recognizing a plurality of feature-point-based markers, and augmented reality implementation method using the same
WO2023095936A1 Method for generating an object for a virtual space, and device using same
WO2022124673A1 Device and method for measuring the volume of an object in a receptacle on the basis of a camera image using a machine learning model
WO2023095938A1 Method for preventing safety accidents, and apparatus therefor
WO2022191424A1 Electronic device and control method therefor
KR20230076242A Method for identifying errors in a drawing and apparatus therefor
KR102217215B1 Server and method for producing a three-dimensional model using a scale bar
CN116704518A Text recognition method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21965725

Country of ref document: EP

Kind code of ref document: A1