KR20170067373A - System and Method for Extracting Automatically 3D Object Based on Drone Photograph Image - Google Patents

System and Method for Extracting Automatically 3D Object Based on Drone Photograph Image Download PDF

Info

Publication number
KR20170067373A
Authority
KR
South Korea
Prior art keywords
data
mesh
image
camera
dimensional
Prior art date
Application number
KR1020150173967A
Other languages
Korean (ko)
Other versions
KR101754599B1 (en)
Inventor
Kim Kyung-ho (김경호)
Original Assignee
Catholic University of Korea Industry-Academic Cooperation Foundation (가톨릭대학교 산학협력단)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Catholic University of Korea Industry-Academic Cooperation Foundation (가톨릭대학교 산학협력단)
Priority to KR1020150173967A priority Critical patent/KR101754599B1/en
Publication of KR20170067373A publication Critical patent/KR20170067373A/en
Application granted granted Critical
Publication of KR101754599B1 publication Critical patent/KR101754599B1/en

Links

Images

Classifications

    • H04N13/0048
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C39/00Aircraft not otherwise provided for
    • B64C39/02Aircraft not otherwise provided for characterised by special use
    • B64C39/024Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64DEQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D47/00Equipment not otherwise provided for
    • B64D47/08Arrangements of cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • H04N13/0003
    • B64C2201/127

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A system for automatically extracting a 3D object based on a drone shot image includes a drone device in which a plurality of camera devices are installed on the lower surface of a flying robot body to photograph an object in various directions and generate two-dimensional images containing exchangeable image file format (EXIF) information, and a 3D modeling conversion apparatus that receives the two-dimensional images including the EXIF information from the camera devices of the drone device, extracts a plurality of feature points for calculating three-dimensional coordinates, obtains point cloud data from the extracted feature points, optimizes vertex data defining a texture plane to generate a plurality of mesh data (mesh-shaped surface data), organizes the generated mesh data to produce occlusion data, and then generates shadow map data using the occlusion data on the surface of the mesh data.

Description

TECHNICAL FIELD [0001] The present invention relates to a system and a method for automatically extracting a 3D object based on a drone shot image.

The present invention relates to a system for extracting three-dimensional objects and, more particularly, to a system and method for automatically extracting a 3D object based on a drone shot image, in which three-dimensional coordinates of feature points are obtained from two-dimensional image data captured from various directions around a photographed object, and a realistic three-dimensional image is implemented from the obtained coordinates.

In general, a laser scanner is used to produce a three-dimensional shape for a particular subject.

A laser scanner is an advanced measuring instrument that acquires three-dimensional coordinate values in point form by firing countless laser beams at dense intervals onto the surface of the subject.

Laser scanners have been developed in a variety of forms depending on field conditions and the size of the objects to be measured.

In practice, however, a laser scanner has difficulty restoring the three-dimensional shape when the object to be scanned is large and the scanning range is wide, as when scanning natural terrain or large structures.

Conventionally, a 3D stereoscopic image is obtained by photographing a specific object through aerial photography; however, the stereoscopic shape often cannot be restored because the captured images are insufficient.

To solve this problem, an object of the present invention is to provide a system and method for automatically extracting a 3D object based on a drone shot image, in which the three-dimensional coordinates of feature points are obtained from two-dimensional image data captured from various directions around the photographed object, and a realistic image is implemented using the obtained three-dimensional coordinates.

According to an aspect of the present invention, there is provided a system for automatically extracting a 3D object based on a drone shot image,

A drone device for capturing an object in various directions and generating two-dimensional images including exchangeable image file format (EXIF) information, in which a plurality of camera devices are installed on a lower surface of a flying robot body; and

A 3D modeling conversion apparatus that receives the two-dimensional images including the EXIF information from the camera devices of the drone device, extracts a plurality of feature points for calculating three-dimensional coordinates, acquires point cloud data from the extracted feature points, optimizes vertex data defining a texture plane to generate a plurality of mesh data which are mesh-shaped surface data, generates occlusion data by organizing the generated mesh data, and then generates shadow map data using the occlusion data on the surface of the mesh data.

A method for automatically extracting a 3D object based on a drone shot image according to an aspect of the present invention includes:

Receiving a two-dimensional image including exchangeable image file format (EXIF) information from a camera device of a drone device and extracting a plurality of feature points for calculating three-dimensional coordinates;

Acquiring point cloud data from the extracted feature points and optimizing vertex data defining a texture plane to generate a plurality of mesh data which are mesh-shaped surface data; and

Generating occlusion data by organizing the generated mesh data, and then generating shadow map data using the occlusion data on the surface of the mesh data.

According to the above-described configuration, since the three-dimensional object is generated by superimposing images captured from various directions and angles using the drone, a realistic image can be restored effectively.

The present invention overcomes the constraints of aerial photography by using a drone and develops a three-dimensional modeling conversion platform, thereby enabling new business models and the expansion of various businesses in related fields.

FIG. 1 illustrates a system for automatically extracting 3D objects based on a drone shot image according to an embodiment of the present invention.
FIG. 2 is a detailed view of a camera device of the system for automatically extracting 3D objects based on a drone shot image according to an embodiment of the present invention.
FIG. 3 is a block diagram briefly showing the configuration of the system for automatically extracting 3D objects based on a drone shot image according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating a method of automatically extracting a 3D object based on a drone shot image according to an embodiment of the present invention.

Throughout the specification, when an element is described as "comprising" a component, this means that it may include other components as well, unless specifically stated otherwise.

FIG. 1 illustrates a system for automatically extracting 3D objects based on a drone shot image according to an exemplary embodiment of the present invention, and FIG. 2 is a detailed view showing the camera device of the system.

A system for automatically extracting 3D objects based on a drone shot image according to an embodiment of the present invention includes a drone device 100 and a 3D modeling conversion device 200.

The drone device 100 includes a flying robot body 102 and a quadrotor portion 103 formed to extend across the flying robot body 102 in the four cardinal directions.

The flying robot body 102 and the quadrotor portion 103 are made of a light and durable aluminum alloy, and propellers 102a and 103a are mounted on their upper surfaces. The propellers 102a and 103a may have a predetermined shape, and their operation generates a downward airflow so that the drone device 100 can fly.

A camera device 110 is installed on the lower surface of the flying robot body 102 to photograph an object.

The camera apparatus 110 includes a hemispherical arc-shaped frame 111 and a plurality of camera units 113 coupled to the arc-shaped frame 111 by rail fixing rollers 112.

Each camera unit 113 photographs an object while moving along the arc-shaped frame 111 using the rail fixing rollers 112.

The camera unit 113 has an internal gear 114 coupled to its inside, a drive gear 115 meshed with the internal gear 114, and a rotation motor 119 that provides rotational force to the drive gear 115.

The quadrotor unit 103 levitates the flying robot body 102 to a specific position while hovering, thrusting, rolling, and pitching according to the attitude and position control of a flying robot control unit (not shown). Since the drone device 100 is otherwise a conventional flying robot, a detailed description of the components required for flight is omitted.

The 3D modeling conversion apparatus 200 acquires point cloud data from the two-dimensional images received from the camera apparatus 110, projects the information obtained from the point cloud data into three-dimensional space to obtain coordinate values (height, angle), and generates a three-dimensional stereoscopic image by matching the images of the respective surfaces with the coordinate values.

FIG. 3 is a block diagram briefly showing the configuration of a system for automatically extracting a 3D object based on a drone shot image according to an embodiment of the present invention, and FIG. 4 is a flowchart showing a method of automatically extracting a 3D object based on a drone shot image.

The camera device 110 of the drone device 100 according to the embodiment of the present invention includes an image processing unit 116, a control unit 117, a roller motor 118, a rotation motor 119, and a wireless module 119a.

The image processing unit 116 processes the two-dimensional images in various directions, which are images taken by the plurality of camera units 113, and transmits them to the control unit 117.

The control unit 117 controls the photographing direction of the camera unit 113 by transmitting a drive control signal to the rotation motor 119, which rotates the camera unit 113.

The control unit 117 also controls the roller motor 118 coupled to the rail fixing rollers 112 to provide rotational force, so that the camera unit 113 moves along the arc-shaped frame 111.

The wireless module 119a transmits the two-dimensional images captured by the camera unit 113 to the 3D modeling conversion apparatus 200 through a wireless communication network.

The EXIF information stored together with the image data includes the camera maker, camera model, image editor, EXIF version, shooting date and time, image size, exposure time, shutter speed, exposure program, focal length, F-number, and whether the flash was used.
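
EXIF fields like those listed above can be read with standard tooling. The following is a minimal sketch using the Pillow library; the file name is a hypothetical example, and the patent does not prescribe any particular implementation.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return the EXIF tags of an image as a {tag name: value} dict."""
    exif = Image.open(path).getexif()  # Pillow >= 6.0
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

info = read_exif("drone_shot_0001.jpg")         # hypothetical file name
print(info.get("Model"), info.get("DateTime"))  # camera model and shooting time
```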

The 3D modeling conversion apparatus 200 according to an embodiment of the present invention includes an image input unit 210, a feature point extraction unit 220, a direction vector calculation unit 230, a spatial coordinate calculation unit 240, a mesh model generation unit 250 A mesh data optimizing unit 260, an image superimposing unit 270, an object generating unit 280, and a rendering unit 290.

The image input unit 210 receives two-dimensional images including EXIF information in various directions from each camera apparatus 110 installed on the lower surface of the drone device 100 (S100).

After the target object is set, the drone orbits it several times along a configured flight path, and the image input unit 210 thereby obtains two-dimensional images including the EXIF information. At this time, markers are placed around the object to minimize errors that may occur in aerial photographing.
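
As a rough illustration of such a path setting, the sketch below generates waypoints on circles around the object at several altitudes so that the captured 2D images overlap on all sides; the radii, altitudes, and waypoint count are illustrative assumptions, not values given in the patent.

```python
import math

def orbit_waypoints(cx, cy, radius, altitude, n=12):
    """Return n equally spaced (x, y, z) waypoints on a circle around (cx, cy)."""
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n),
             altitude) for k in range(n)]

# Orbit the object several times at different heights for overlapping coverage.
path = [wp for alt in (10.0, 20.0, 30.0)
        for wp in orbit_waypoints(0.0, 0.0, 15.0, alt)]
print(len(path), "waypoints")
```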

The images of the object are captured so as to overlap, minimizing loss when the shadow map data is generated.

The feature point extracting unit 220 extracts a plurality of feature points for calculating three-dimensional coordinates as spatial coordinates from a two-dimensional image including EXIF information.
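
The patent does not name a specific feature detector; as a hedged sketch, ORB from OpenCV is used below purely as one example of extracting such feature points from a 2D image.

```python
import cv2

# Hypothetical file name; any drone-captured 2D image would do.
img = cv2.imread("drone_shot_0001.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=2000)  # detector and parameters are assumptions
keypoints, descriptors = orb.detectAndCompute(img, None)
print(len(keypoints), "feature points extracted")
```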

The direction vector calculation unit 230 calculates a direction vector associated with a position difference between a plurality of feature points.

The space coordinate calculation unit 240 calculates the three-dimensional space coordinates of each of the plurality of feature points using the distance between the plurality of feature points and the direction vector.

The three-dimensional shape made up of three-dimensional spatial coordinates is made up of point cloud data composed of points.
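
One common way to realize these steps (direction vectors between matched feature points, then three-dimensional coordinates) is two-view triangulation. The sketch below assumes the 3x4 projection matrices of two camera views are already known; the matrices and pixel coordinates are made-up examples, not the patent's algorithm.

```python
import numpy as np
import cv2

# 3x4 projection matrices of two views (placeholder values, assumed known)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Matched feature point pixel coordinates in each image, as 2xN arrays
pts1 = np.array([[100.0, 250.0], [120.0, 240.0]]).T
pts2 = np.array([[90.0, 250.0], [110.0, 239.0]]).T

pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4xN result
point_cloud = (pts4d[:3] / pts4d[3]).T             # Nx3 point cloud data
print(point_cloud)
```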

The mesh model generation unit 250 triangulates the feature points of the two-dimensional images having EXIF information in three-dimensional space, and acquires a plurality of point cloud data by sampling points on the mesh-shaped surface obtained through triangulation.

The mesh model generation unit 250 triangulates the extracted feature points of each image on a three-dimensional plane using a Delaunay triangulation algorithm.

The mesh model generation unit 250 generates a subset by sampling part of the acquired point cloud data, and then reconstructs a curved surface using an ensemble algorithm to generate mesh-shaped surface data.
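
A minimal sketch of this meshing step follows, using SciPy's Delaunay implementation as a stand-in for the algorithm the patent cites; the point cloud is random placeholder data, and triangulating on a 2D (plan-view) projection is one simple way to obtain a surface mesh.

```python
import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(500, 3)  # placeholder for sampled point cloud data
tri = Delaunay(points[:, :2])    # triangulate on a 2D (plan-view) projection
mesh_faces = tri.simplices       # each row indexes the 3 vertices of one face
print(mesh_faces.shape[0], "triangular mesh faces")
```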

The mesh model generation unit 250 generates a matching image by matching the point cloud data calculated by the spatial coordinate calculation unit 240 with an image obtained by photographing a predetermined region.

Here, the predetermined area indicates a place including at least one object.

When the point cloud data and the image of the predetermined region are matched with each other, the mesh model generation unit 250 generates a plurality of mesh data by optimizing the vertex data obtained through the superimposed image and the point cloud data (S102). Here, the vertex data are data defining a texture plane.

The mesh data optimizer 260 applies a tessellation setting for dividing the curved surface of the mesh data into a plurality of surfaces to find the best value of the plane sharing setting and the curve sharing setting. Tessellation is a technique used in the field of three-dimensional image processing that divides an object into a plurality of planes.

This best value is the value at which the mesh data are optimized while maintaining their shape and properties.

Finding this best value is important because, if the amount of mesh data is too large, problems may arise in conversion when the model is later 3D-rendered for visualization or turned into content in various fields.
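
As an illustration of such optimization (the patent does not prescribe a specific algorithm), the sketch below reduces the triangle count of a mesh with Open3D's quadric decimation while roughly preserving its shape; the file names and target triangle count are assumptions.

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("object_mesh.ply")  # hypothetical input mesh
simplified = mesh.simplify_quadric_decimation(
    target_number_of_triangles=5000)                 # illustrative "best value"
simplified.compute_vertex_normals()
o3d.io.write_triangle_mesh("object_mesh_optimized.ply", simplified)
```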

The image superimposing unit 270 superimposes the point cloud data on the matching image to generate a superimposed image. The image superimposing unit 270 can generate the superimposed image using the camera matrix obtained when the point cloud data is matched with the image photographed by the camera unit 113.

The image superimposing unit 270 performs orthographic projection of the camera image using a graphics library and perspective projection of the point cloud data using the camera matrix obtained when the camera image and the point cloud data are matched, thereby generating the superimposed image.
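
A sketch of the perspective-projection half of this step, assuming the intrinsic matrix K and the extrinsics R, t have already been obtained from the matching step; the numeric values are placeholders.

```python
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])  # intrinsic matrix (placeholder values)
R, t = np.eye(3), np.zeros((3, 1))       # extrinsics from point-cloud/image matching

def project(points_3d):
    """Perspective-project Nx3 world points into Nx2 pixel coordinates."""
    cam = R @ points_3d.T + t    # world -> camera coordinates
    uvw = K @ cam                # camera coordinates -> image plane (homogeneous)
    return (uvw[:2] / uvw[2]).T  # divide by depth to obtain pixel coordinates

pixels = project(np.array([[0.0, 0.0, 5.0], [1.0, 1.0, 6.0]]))
print(pixels)  # these pixels can now be overlaid on the camera image
```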

The image superimposing unit 270 can not only reconstruct the shape of an object or a scene but also store the images resulting from superimposition and convert them into a common format for use.

The generated superimposed image can then be used when generating the mesh data.

The object generation unit 280 generates occlusion data by organizing the mesh surface data acquired by the mesh model generation unit 250, and then models virtual point lights using the occlusion data on the surface of the mesh data to generate shadow map data for each object (S104). The radiosity method uses these shadow maps to calculate the pixel values in the camera view more precisely.
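
The core of a shadow map is a depth buffer rendered from the virtual light's point of view: a surface point is lit only if nothing nearer occludes it along the light ray. The toy sketch below shows only that depth-comparison idea and is not the patent's exact procedure.

```python
import numpy as np

light_depth = np.full((64, 64), np.inf)  # depth buffer seen from the virtual point light

def rasterize_to_light(u, v, depth):
    """Record the nearest depth the light sees at texel (u, v)."""
    light_depth[v, u] = min(light_depth[v, u], depth)

def in_shadow(u, v, depth, bias=1e-3):
    """A point is shadowed if nearer geometry occludes it along the light ray."""
    return depth > light_depth[v, u] + bias

rasterize_to_light(10, 10, 4.0)  # an occluder 4.0 units from the light
print(in_shadow(10, 10, 5.0))    # True: the farther point at depth 5.0 is shadowed
```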

The rendering unit 290 renders the 3D model by the radiosity method using the shadow map data (S106). As a result of the 3D modeling, the shape information of each object is stored on a mesh or point basis.

The embodiments of the present invention described above may be implemented not only by the apparatus and/or method but also through a program that realizes functions corresponding to the configuration of the embodiments, or through a recording medium on which the program is recorded; such implementations can be easily achieved by those skilled in the art from the description of the embodiments above.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments; modifications made by those skilled in the art within the spirit of the invention also belong to its scope.

100: Drone device 102: Flying robot body
102a: propeller 103: quadrotor part
103a: propeller 110: camera device
111: arc-shaped frame 112: rail fixing roller
113: camera unit 114: internal gear
115: driving gear 116: image processor
117: control unit 118: roller motor
119: Rotation motor 119a: Wireless module
200: 3D modeling conversion device 210: image input unit
220: feature point extracting unit 230: direction vector calculating unit
240: Space coordinate calculation unit 250: Mesh model generation unit
260: mesh data optimizing unit 270: image superimposing unit
280: object generation unit 290: rendering unit

Claims (9)

A system for automatically extracting a 3D object based on a drone shot image, the system comprising: a drone device in which a plurality of camera devices are installed on a lower surface of a flying robot body to capture an object in various directions and generate two-dimensional images including exchangeable image file format (EXIF) information; and
A 3D modeling conversion apparatus that receives the two-dimensional images including the EXIF information from the camera devices of the drone device, extracts a plurality of feature points for calculating three-dimensional coordinates, acquires point cloud data from the extracted feature points, optimizes vertex data defining a texture plane to generate a plurality of mesh data which are mesh-shaped surface data, generates occlusion data by organizing the generated mesh data, and then generates shadow map data using the occlusion data on the surface of the mesh data.
The system according to claim 1,
Wherein the 3D modeling conversion apparatus comprises:
An image input unit for receiving two-dimensional images including EXIF information in various directions from each camera device installed on a lower surface of the drone device;
A feature point extraction unit for extracting a plurality of feature points for calculating three-dimensional coordinates as spatial coordinates from a two-dimensional image including the EXIF information;
A mesh model generation unit that triangulates the feature points of the two-dimensional image having the EXIF information on a three-dimensional plane using a Delaunay triangulation algorithm, samples points on the mesh-shaped surface through triangulation to acquire a plurality of point cloud data, and generates mesh data by reconstructing a curved surface using an ensemble algorithm after sampling part of the acquired point cloud data to generate a subset;
An object generation unit that generates occlusion data from the acquired mesh-shaped data and then models virtual point lights using the occlusion data on the surface of the mesh-shaped surface data to generate shadow map data for each object; and
A rendering unit that renders a 3D model by a radiosity method using the shadow map data.
The system according to claim 1,
Wherein the camera device of the drone device includes a hemispherical arc-shaped frame and a plurality of camera units coupled to the arc-shaped frame by rail fixing rollers, the camera units photographing the object in various directions while moving along the arc-shaped frame.
The system according to claim 3,
Wherein the camera device has an internal gear coupled to its inside and a rotation motor that is coupled to a drive gear meshed with the internal gear and provides rotational force to the drive gear, and
wherein the camera device controls the photographing direction of the camera unit by transmitting a drive control signal to the rotation motor, and controls the roller motor coupled to the rail fixing rollers to provide rotational force so that the camera unit moves along the arc-shaped frame.
The system according to claim 2,
Further comprising an image superimposing unit that generates a superimposed image using the camera matrix obtained by matching the point cloud data with the image captured by the camera unit.
A method for automatically extracting a 3D object based on a drone shot image, the method comprising: receiving a two-dimensional image including exchangeable image file format (EXIF) information from a camera device of a drone device and extracting a plurality of feature points for calculating three-dimensional coordinates;
Acquiring point cloud data from the extracted feature points and optimizing vertex data defining a texture plane to generate a plurality of mesh data which are mesh-shaped surface data; and
Generating occlusion data by organizing the generated mesh data, and then generating shadow map data using the occlusion data on the surface of the mesh data.
The method according to claim 6,
Wherein the generating of the plurality of mesh data comprises:
Triangulating the feature points of the two-dimensional image having the EXIF information on a three-dimensional plane using a Delaunay triangulation algorithm, sampling points on the mesh-shaped surface through triangulation to acquire a plurality of point cloud data, and generating mesh-shaped surface data by reconstructing a curved surface using an ensemble algorithm after sampling part of the acquired point cloud data to generate a subset.
The method according to claim 6,
Further comprising rendering a 3D model by a radiosity method using the shadow map data.
The method according to claim 6,
Further comprising generating a superimposed image using a camera matrix obtained when the point cloud data is matched with the image captured by the camera unit, and using the generated superimposed image to generate the mesh data.
KR1020150173967A 2015-12-08 2015-12-08 System and Method for Extracting Automatically 3D Object Based on Drone Photograph Image KR101754599B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150173967A KR101754599B1 (en) 2015-12-08 2015-12-08 System and Method for Extracting Automatically 3D Object Based on Drone Photograph Image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150173967A KR101754599B1 (en) 2015-12-08 2015-12-08 System and Method for Extracting Automatically 3D Object Based on Drone Photograph Image

Publications (2)

Publication Number Publication Date
KR20170067373A true KR20170067373A (en) 2017-06-16
KR101754599B1 KR101754599B1 (en) 2017-07-07

Family

ID=59278386

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150173967A KR101754599B1 (en) 2015-12-08 2015-12-08 System and Method for Extracting Automatically 3D Object Based on Drone Photograph Image

Country Status (1)

Country Link
KR (1) KR101754599B1 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102001143B1 (en) * 2017-11-06 2019-07-17 울산과학기술원 Apparatus for 3d printer on drone and method for controlling there of
KR102073738B1 (en) 2018-05-30 2020-03-02 주식회사 공간정보 The Connection System for Remote Sensing data and Cloud-Based Platform
KR102152720B1 (en) 2018-06-20 2020-09-07 ㈜시스테크 Photographing apparatus and method for 3d modeling
KR102027093B1 (en) 2019-07-02 2019-11-04 백승원 Data conversion apparatus, method and application for 3-dimensional scan data
KR102616520B1 (en) * 2021-08-31 2023-12-27 단국대학교 산학협력단 System for processing 3d image and method for generating 3d model using the same

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2527736A (en) * 2014-05-05 2016-01-06 Spillconsult Ltd Tethered aerial platform and aerial observation system

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019093692A1 (en) * 2017-11-09 2019-05-16 삼성전자 주식회사 Method and electronic device for controlling unmanned aerial vehicle comprising camera
KR20190068100A (en) * 2017-12-08 2019-06-18 김종일 Camera module mounted on flying object
KR20200040960A (en) * 2018-10-10 2020-04-21 한국전력공사 Drone system
KR20200067286A (en) * 2018-12-03 2020-06-12 한국가스안전공사 3D scan and VR inspection system of exposed pipe using drone
WO2020139466A1 (en) * 2018-12-28 2020-07-02 Intel Corporation Unmanned aerial vehicle light flash synchronization
US10884415B2 (en) 2018-12-28 2021-01-05 Intel Corporation Unmanned aerial vehicle light flash synchronization
KR102078696B1 (en) * 2019-07-19 2020-02-19 주식회사 대성이엔씨 System and method for managing facility based on 3d model, and a recording medium having computer readable program for executing the method
KR102103464B1 (en) * 2019-07-19 2020-04-22 주식회사 대성이엔씨 System and method for restoring facility design data, and a recording medium having computer readable program for executing the method
KR20210037883A (en) * 2019-09-30 2021-04-07 김시우 Management server for manufacturing of three dimensional model
KR20210115246A (en) * 2020-03-12 2021-09-27 이용 Integral maintenance control method and system for managing dam safety based on 3d modelling
KR20210115245A (en) * 2020-03-12 2021-09-27 이용 Intelligent dam management system based on digital twin
KR20210140805A (en) * 2020-05-15 2021-11-23 한국에너지기술연구원 Device for evaluating energy performance of existing building and method thereof
US11880990B2 (en) 2020-09-21 2024-01-23 Samsung Electronics Co., Ltd. Method and apparatus with feature embedding
CN112468731A (en) * 2020-11-27 2021-03-09 广州富港生活智能科技有限公司 Automatic shooting system and automatic shooting-based article analysis and display ordering system
KR20220081482A (en) 2020-12-09 2022-06-16 주식회사 씨에스아이비젼 3d terrain information analyzer using deep learning
KR20220123901A (en) * 2021-03-02 2022-09-13 네이버랩스 주식회사 Method and system for generating high-definition map based on aerial images captured from unmanned air vehicle or aircraft
CN113160410A (en) * 2021-04-19 2021-07-23 云南云能科技有限公司 Real scene three-dimensional refined modeling method and system
KR102347972B1 (en) * 2021-08-18 2022-01-07 주식회사 아이지아이에스 panoramic vision suppling systems
WO2023022317A1 (en) * 2021-08-18 2023-02-23 주식회사 아이지아이에스 System for providing panoramic image

Also Published As

Publication number Publication date
KR101754599B1 (en) 2017-07-07

Similar Documents

Publication Publication Date Title
KR101754599B1 (en) System and Method for Extracting Automatically 3D Object Based on Drone Photograph Image
US11830163B2 (en) Method and system for image generation
KR101314120B1 (en) Three-dimensional urban modeling apparatus and three-dimensional urban modeling method
CN108876926B (en) Navigation method and system in panoramic scene and AR/VR client equipment
KR20180067908A (en) Apparatus for restoring 3d-model and method for using the same
JP5093053B2 (en) Electronic camera
US11689808B2 (en) Image synthesis system
JP6616967B2 (en) Map creation apparatus and map creation method
CN112729260B (en) Surveying system and surveying method
KR102200866B1 (en) 3-dimensional modeling method using 2-dimensional image
US11212510B1 (en) Multi-camera 3D content creation
Wendel et al. Automatic alignment of 3D reconstructions using a digital surface model
JP2017201261A (en) Shape information generating system
KR101799351B1 (en) Automatic photographing method of aerial video for rendering arbitrary viewpoint, recording medium and device for performing the method
WO2018056802A1 (en) A method for estimating three-dimensional depth value from two-dimensional images
CN110036411B (en) Apparatus and method for generating electronic three-dimensional roaming environment
KR102107465B1 (en) System and method for generating epipolar images by using direction cosine
KR102494479B1 (en) Augmented reality occlusion producing system using the data of the positioning space information aquired by drone
WO2020119572A1 (en) Shape inferring device, shape inferring method, program, and recording medium
Inoue et al. Post-Demolition landscape assessment using photogrammetry-based diminished reality (DR)
KR20220141636A (en) Virtual object producing system using the data of the positioning space information aquired by drone
US11682175B2 (en) Previsualization devices and systems for the film industry
WO2023047799A1 (en) Image processing device, image processing method, and program
US20220067969A1 (en) Motion capture calibration using drones
Rezvan et al. Critical Examination of 3D Building Modelling through UAV Frame and Video Imaging

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right