WO2023008647A1 - 3d data conversion and use method for high speed 3d rendering - Google Patents

3d data conversion and use method for high speed 3d rendering

Info

Publication number
WO2023008647A1
WO2023008647A1 (PCT/KR2021/014075)
Authority
WO
WIPO (PCT)
Prior art keywords
data
gui
module
class
information
Prior art date
Application number
PCT/KR2021/014075
Other languages
French (fr)
Korean (ko)
Inventor
김동원
임세영
송기원
Original Assignee
(주)그래피카
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)그래피카 filed Critical (주)그래피카
Publication of WO2023008647A1 publication Critical patent/WO2023008647A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/28 Indexing scheme for image data processing or generation, in general involving image processing hardware

Definitions

  • The present invention relates to high-speed processing of 3D rendering used in a GUI, and in particular to displaying complex, large-capacity 3D data on the display screen of a vehicle at high speed.
  • Fig. 1 is a configuration diagram of a conventional stereoscopic image display device.
  • The conventional stereoscopic image display device includes a display unit 100, a barrier 100', a scan driver 200, a data driver 300, a light source 400, a light source controller 500, a timing controller 600, and a data conversion unit 700.
  • The light source 400 is formed as a planar light source on the rear side of the display unit 100, but for convenience it is shown as formed on the lower side of the display unit 100.
  • The display unit 100 includes a plurality of scan lines (not shown) for transmitting selection signals, a plurality of data lines (not shown) formed to intersect the scan lines and transmitting data signals, and a plurality of sub-pixels (not shown) formed at the intersections of the scan lines and data lines.
  • a red sub-pixel for red (R) display, a green sub-pixel for green (G) display, and a blue sub-pixel for blue (B) display form one pixel.
  • The plurality of pixels of the display unit 100 include pixels corresponding to left-eye images (hereinafter, 'pixels for the left eye') and pixels corresponding to right-eye images (hereinafter, 'pixels for the right eye').
  • the pixels for the left eye and the pixels for the right eye are formed such that they are repeatedly arranged.
  • the pixels for the left eye and the pixels for the right eye may be arranged to be repeated in parallel with each other to form a stripe shape or a zigzag shape.
  • the arrangement of pixels for the left eye and pixels for the right eye may be suitably changed according to the barrier 100'.
  • The barrier 100' is disposed on one surface of the display unit 100 and includes opaque areas (not shown) and transparent areas (not shown) formed to correspond to the arrangement of the pixels for the left eye and the pixels for the right eye of the display unit 100.
  • Using the opaque and transparent regions, the barrier 100' separates the left-eye image and the right-eye image projected from the pixels for the left eye and the pixels for the right eye of the display unit 100 and provides them in the directions of the viewer's left eye and right eye, respectively.
  • The opaque regions and transparent regions of the barrier 100' may be formed in a stripe shape or a zigzag shape according to the arrangement of the pixels for the left eye and the pixels for the right eye of the display unit 100.
  • To explain how an observer perceives a stereoscopic image through the display unit 100 and the barrier 100', a cross section cut along the I-I' direction of the display unit 100 and the barrier 100' shows how a stereoscopic image is observed through the left-eye pixels and the right-eye pixels.
  • The display unit 100 includes a plurality of left-eye pixels 150 and a plurality of right-eye pixels 160 that are repeatedly arranged, and the barrier 100' includes opaque regions 150' and transparent regions 160' repeatedly arranged in parallel in the same direction as the arrangement direction of the pixels 150 and 160.
  • The pixels 150 for the left eye of the display unit 100 project the left-eye image onto the left eye 180 through the transparent area 160', and the pixels 160 for the right eye project the right-eye image onto the right eye 170 through the transparent area 160' of the barrier 100'.
  • The opaque regions 150' of the barrier 100' form light projection paths so that the pixels 150 for the left eye and the pixels 160 for the right eye of the display unit 100 project images to the left eye and the right eye, respectively, through the transparent regions 160'.
  • The image for the left eye projected from the pixel 150 for the left eye is formed with a predetermined disparity with respect to the image for the right eye, and the image for the right eye projected from the pixel 160 for the right eye is formed with a predetermined disparity with respect to the image for the left eye. Therefore, when the observer's left and right eyes respectively perceive the left-eye image from the pixel 150 and the right-eye image projected from the pixel 160, the observer obtains the same depth information as when viewing an actual stereoscopic object through both eyes, and thus perceives a three-dimensional effect.
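The stripe arrangement described above, alternating left-eye and right-eye pixel columns behind a parallax barrier, can be sketched as follows. This is an illustrative Python sketch, not the patent's hardware; the even/odd column assignment and function name are assumptions for illustration.

```python
def interleave_stereo(left, right):
    """Interleave left- and right-eye images column by column.

    left, right: 2D lists of equal size (rows x cols) holding pixel values.
    Returns a frame where even columns come from the left-eye image and odd
    columns from the right-eye image: the stripe layout that a parallax
    barrier separates toward each eye.
    """
    rows, cols = len(left), len(left[0])
    frame = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            frame[y][x] = left[y][x] if x % 2 == 0 else right[y][x]
    return frame

# Example: a 2x4 frame; 'L' marks left-eye pixels, 'R' right-eye pixels.
left = [['L'] * 4 for _ in range(2)]
right = [['R'] * 4 for _ in range(2)]
print(interleave_stereo(left, right)[0])  # ['L', 'R', 'L', 'R']
```

Each eye thus sees only every other column, which is why the rendering engine described below steps the X coordinate by 2 for each eye.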
  • the scan driver 200 sequentially generates selection signals in response to the control signal Sg output from the timing controller 600 and applies them to the scan lines S1 to Sn of the display unit 100, respectively.
  • The data driver 300 converts the applied stereoscopic image data into analog data voltages to be applied to the data lines D1 to Dm of the display unit 100, and applies the converted analog data voltages to the data lines D1 to Dm in response to the control signal Sd input from the timing controller 600.
  • The light source 400 includes red (R), green (G), and blue (B) light emitting diodes (not shown) and outputs light corresponding to red (R), green (G), and blue (B) to the display unit 100.
  • The red (R), green (G), and blue (B) light emitting diodes of the light source 400 output light to the R sub-pixels, G sub-pixels, and B sub-pixels of the display unit 100, respectively.
  • The light source controller 500 controls the lighting timing of the light emitting diodes of the light source 400 in response to the control signal Sb output from the timing controller 600.
  • The period during which the data driver 300 supplies data signals to the data lines and the period during which the red (R), green (G), and blue (B) light emitting diodes are lit by the light source controller 500 can be synchronized by the control signals provided by the timing controller 600.
  • The timing controller 600 generates control signals in response to the horizontal synchronization signal (Hsync) and the vertical synchronization signal (Vsync) input from the outside and the stereoscopic image data input from the data conversion unit 700, and supplies the stereoscopic image data and the generated control signals Sg, Sd, and Sb to the scan driver 200, the data driver 300, and the light source controller 500, respectively.
  • The data conversion unit 700 converts input data (DATA) into stereoscopic image data and transmits it to the timing controller 600.
  • The data (DATA) input to the data conversion unit 700 is data including 3D image content (hereinafter, '3D image data'), and the stereoscopic image data includes left-eye image data and right-eye image data respectively corresponding to the pixels for the left eye and the pixels for the right eye of the display unit 100.
  • 3D image data includes coordinate information (i.e., X and Y coordinates) and color information corresponding to those coordinates.
  • the color information includes color information or texture coordinate values.
  • the data conversion unit 700 that converts 3D image data for a flat image into stereoscopic image data may be implemented in the form of a graphic accelerator chip or the like.
  • the data conversion unit 700 includes a geometric engine unit 710, a rendering engine unit 720, and a frame memory unit 730.
  • The rendering engine unit 720 includes a start X coordinate calculation unit 721, an X coordinate increasing unit 722, a start value calculation unit 723, a color information increasing unit 724, and a memory control unit 725.
  • The starting X coordinate calculation unit 721 receives a left/right-eye selection signal and a starting X coordinate for a predetermined line (i.e., a line having a constant Y coordinate) (hereinafter, 'starting X coordinate'), generates a starting X coordinate corresponding to the left-eye selection signal or a starting X coordinate corresponding to the right-eye selection signal, and transmits it to the X coordinate increasing unit 722.
  • The X coordinate increasing unit 722 outputs X coordinates while increasing the X coordinate by 2, starting from the starting X coordinate transmitted from the starting X coordinate calculation unit 721. The X coordinate is increased by 2 so that only the values for coordinates actually written (stored) to the frame memory unit 730 are stored there.
  • The start value calculation unit 723 receives the left/right-eye selection signal and a color information start value (i.e., a color start value or a texture coordinate start value) for a predetermined line (i.e., a constant Y coordinate) (hereinafter, 'color information start value'), generates a color information start value corresponding to the left-eye selection signal or to the right-eye selection signal, and transmits it to the color information increasing unit 724.
  • The color information increasing unit 724 receives the starting X coordinate, the end X coordinate (the end X coordinate for the given line; hereinafter the same), the color information start value, and the color information end value for the given line (i.e., a color end value or a texture coordinate end value) (hereinafter, 'color information end value'), calculates a color information increment, and generates color information by adding the increment to the color information start value.
  • Specifically, the distance between the starting X coordinate and the end X coordinate is calculated from the input starting and end X coordinates, and the value difference between the color information start value and the color information end value is calculated from the input start and end values. The color information increment (i.e., a color increment or a texture coordinate increment) is then calculated from these two values, taking into account that the X coordinate increases by 2 in the X coordinate increasing unit 722: since the X coordinate advances by 2, the increment computed from the distance and the value difference is doubled to obtain the final color information increment.
  • The color information increasing unit 724 generates and outputs color information by adding the calculated color information increment to the color information start value.
  • The memory control unit 725 receives a predetermined Y coordinate, the X coordinate output from the X coordinate increasing unit 722, and the color information output from the color information increasing unit 724, and controls the color information to be stored in the frame memory unit 730 based on the input Y and X coordinates. Since the input Y coordinate, X coordinate, and color information are values to be written (stored) to the frame memory unit 730, the memory control unit 725 continuously generates a write enable signal (W) so that color information can be stored in the frame memory unit 730 at high speed.
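The increment calculation above can be sketched in a few lines: because the X coordinate advances by 2 per write, the per-pixel color gradient is doubled. The function names and the linear-interpolation form below are illustrative assumptions, not the patent's hardware units.

```python
def color_increment(x_start, x_end, c_start, c_end):
    # The per-pixel gradient is (c_end - c_start) / (x_end - x_start);
    # because only every other column belongs to one eye, X advances by 2,
    # so the per-step increment is double the per-pixel gradient.
    return 2 * (c_end - c_start) / (x_end - x_start)

def rasterize_line(x_start, x_end, c_start, c_end):
    """Yield (x, color) pairs for the pixels of one eye on one scan line."""
    inc = color_increment(x_start, x_end, c_start, c_end)
    color = c_start
    for x in range(x_start, x_end + 1, 2):
        yield x, color
        color += inc

print(list(rasterize_line(0, 8, 0.0, 8.0)))
# [(0, 0.0), (2, 2.0), (4, 4.0), (6, 6.0), (8, 8.0)]
```

Stepping X by 2 while doubling the increment reaches the same end value as a per-pixel walk, but writes only the columns that belong to the selected eye.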
  • The prior art generally uses a 3D modeling tool such as 3D MAX or Maya to construct a 3D screen, producing a text-based 3D output in FBX, OBJ, or DAE format, which is then loaded from the content and displayed at a designated location.
  • The 3D output created above consists of the point coordinates of the model, image information to be displayed on each surface, the position and angle of the viewer, the position and intensity of light, and animation designation values. Since 3D expression requires a large amount of computation, it is generally supported in an environment with a GPU rather than a CPU-only environment, and a standardized graphics library such as OpenGL ES is used to access the GPU.
  • When a request to display a product created with a 3D modeling tool on the screen occurs, the file must be read, analyzed, and processed so that it can be applied to a graphics library such as OpenGL ES.
  • Since the FBX, OBJ, and DAE formats are processed similarly to XML, the coordinate information, surface information, camera information, light information, and animation information necessary for 3D expression are extracted through string analysis.
  • OpenGL ES functions are then called so that the 3D data can be processed by the GPU through a graphics library supporting 3D expression, such as OpenGL ES, and the image is displayed on the screen in three-dimensional form.
  • The standard GUI applied to the prior art configured as above analyzes 3D data in real time and converts it into the hierarchical structure of the corresponding GUI engine, so it takes a long time before the data is output. As a result, on the digital instrument panel of a vehicle using an existing GUI engine, the 3D information to be displayed is not output immediately when the engine starts, but appears on the screen only several seconds later.
  • That is, the 3D modeling data is analyzed to extract coordinate information (Mesh), gaze angle information (Camera), light direction and intensity information, and the animation and movement information of each model. Since this process goes through a string analysis step, it is time consuming and therefore requires a long loading time for 3D rendering.
  • The present invention, to solve the problems of the prior art, converts text-based 3D modeling data into the structure of a GUI engine and a format usable by a graphics library such as OpenGL ES, and binarizes it to enable fast loading and fast rendering.
  • The 3D data conversion and use method for high-speed 3D rendering of the present invention having the above object is characterized by the steps of analyzing the 3D data structure and converting it into the hierarchical structure of a GUI engine, binarizing the converted 3D data, reading the binarized 3D data for use in GUI content, and outputting the read 3D binary data to the screen using the graphics library functions provided by OpenGL ES (Open Graphics Library for Embedded Systems).
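As a rough illustration of the binarization idea (parse the text model once, store a fixed-layout binary, then load it back with no string analysis), the following Python sketch packs parsed vertex data with `struct`. The 4-byte count header and float32 layout are assumptions for illustration only; the patent's binary layout is described later in the specification.

```python
import struct

def binarize_vertices(vertices):
    """Pack parsed (x, y, z) vertex tuples into a flat binary blob.

    This runs once, offline, after the text-based model (e.g. OBJ-style
    "v x y z" lines) has been parsed; at runtime the blob is loaded
    directly, skipping string analysis entirely.
    """
    blob = struct.pack("<I", len(vertices))  # 4-byte vertex-count header
    for v in vertices:
        blob += struct.pack("<3f", *v)       # 12 bytes per vertex
    return blob

def load_vertices(blob):
    """Read the blob back with fixed-size reads and no text parsing."""
    (count,) = struct.unpack_from("<I", blob, 0)
    return [struct.unpack_from("<3f", blob, 4 + 12 * i) for i in range(count)]
```

Loading becomes a handful of fixed-offset reads instead of a per-token string scan, which is the source of the speedup the invention targets.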
  • the 3D data conversion and use method for 3D high-speed rendering of the present invention configured as described above has the effect of enabling smooth and realistic 3D expression without delay by quickly processing 3D data generated by the user.
  • Fig. 1 is a configuration diagram of a conventional stereoscopic image display device.
  • Fig. 3 is a configuration diagram of the 3D data conversion and use system for high-speed 3D rendering of the present invention.
  • Fig. 4 is a basic structure diagram of the 3D Data applied to the present invention.
  • The 3D data conversion and use method for high-speed 3D rendering of the present invention analyzes the 3D data structure received by the converter, converts it into the hierarchical structure of the GUI engine, and transmits the converted 3D data to an importer to be binarized.
  • The importer converts the received 3D data into binary form and stores it in the GUI content module; when the GUI Engine module is driven, the importer reads the binarized 3D data previously stored in the GUI content module and transmits it to the GUI Engine module (S12). The GUI Engine module generates a GUI Class based on the received binary 3D Data, stores the generated GUI Class in the GUI content, and transmits it to the OpenGL ES module (S13). The OpenGL ES module generates Command Parameters using standard graphics library functions such as OpenGL ES while traversing the classes in the received GUI Class and transmits the generated Command Parameters to the GPU (S14). Finally, the GPU processes the Command Parameters of the GUI Class and transmits the result to the display unit for presentation (S15).
  • 3D Data is a 3D file format published by AutoDesk, such as FBX and DAE, and includes the coordinates of the model along with Scene, Camera, Light, and Animation information.
  • Scene is the top of the node hierarchy and has one Root Node, and this Root Node can include 3D-related Mesh information, Light information, and Camera information.
  • the Mesh information may have Animation, Effect, and Texture information
  • the Root Node may inspect the type of the child while traversing the children, and the types of possible children include Mesh, Light, and Camera.
  • If the child type is Mesh, mesh data such as vertex coordinates, normal coordinates, UV coordinates, translate, rotate, scale, animation information, texture information, and effect information are included. If the type is Light, light information is included, and Camera information may include the location information of the object viewed by the camera, Viewpoint, and Fov information.
  • The GUI engine has a hierarchical structure of Layer → Scene → Sprite → ... → Shape.
  • a layer has one scene, a scene can include N sprites, and a sprite can have N child sprites and shapes.
  • Sprite has Scene or Sprite as its upper display object and can have Transform information, Camera information, Animation information and Shape.
  • Shape has Sprite as its upper display object and can have Mesh information, Effect information (fresnel, phong, projection, shape), and Texture information.
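The Layer → Scene → Sprite → Shape hierarchy described above might be mirrored by data structures like the following. All class and field names here are illustrative guesses based on the description, not the GUI engine's actual classes.

```python
from dataclasses import dataclass, field

# Hypothetical minimal mirror of the hierarchy: a Layer holds one Scene,
# a Scene holds a root Sprite, Sprites nest and carry Shapes.

@dataclass
class Shape:
    mesh: dict = field(default_factory=dict)       # vertex/normal/UV data
    effect: str = ""                               # e.g. "phong", "fresnel"
    texture: str = ""

@dataclass
class Sprite:
    name: str = ""
    transform: dict = field(default_factory=dict)  # translate/rotate/scale
    camera: dict = field(default_factory=dict)
    animation: list = field(default_factory=list)
    children: list = field(default_factory=list)   # N child Sprites
    shapes: list = field(default_factory=list)     # Shapes under this Sprite

@dataclass
class Scene:
    root: Sprite = field(default_factory=Sprite)   # one Root Sprite

@dataclass
class Layer:
    scene: Scene = field(default_factory=Scene)    # a Layer has one Scene
```

A converted model is then just a tree of these objects, which can be serialized and restored without re-parsing the source text format.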
  • To express 3D data using a GUI engine, the 3D data structure must be applicable to a graphics library such as OpenGL ES and, for fast data processing, is converted into the GUI engine hierarchy using a converter. Regarding the converter applied to the present invention: since the FBX, OBJ, and DAE formats are open, the 3D Scene Root Node can be obtained by analyzing the corresponding 3D Modeling Data, and information such as Mesh, Light, and Camera can be obtained by traversing the child nodes connected to the 3D Scene Root Node.
  • The elements of the 3D Modeling Data are configured in one-to-one correspondence with the GUI Classes used in the GUI engine.
  • First, a GUI Scene is created; after reading the 3D Data through File I/O, the 3D Scene Root Node is taken from the 3D Data and a GUI Sprite is created.
  • the first GUI sprite created above is a root sprite, and the name of the 3D scene root node, transform information, mesh information, and animation information can be imported and stored in the root sprite.
  • The Converter implements the above operation as a recursive function and calls it whenever the 3D Data has children, so that the GUI Sprites created from the second call onward become child Sprites.
  • If type checking shows that the child sprite is a Mesh type, a GUI Shape is created, the mesh data of the 3D Data is stored in the Shape, and instances of the Effect Class and Texture Class are created and saved. Also, if the type of the child Sprite is Camera, a GUI Shape is created, an instance of a GUI Camera Class is created in the Shape, and name information, target information, Viewpoint information, and Far information are saved.
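The recursive conversion described above (each 3D node becomes a GUI Sprite, and Mesh-typed children additionally receive a Shape) can be sketched as follows. The node and sprite fields are hypothetical stand-ins; the patent's actual classes carry more information (effects, cameras, animation).

```python
# A hedged sketch of the converter's recursive traversal. Input nodes are
# plain dicts standing in for parsed 3D Modeling Data nodes.

def convert_node(node):
    sprite = {
        "name": node.get("name", ""),
        "transform": node.get("transform", {}),
        "children": [],
        "shapes": [],
    }
    # Mesh-typed nodes get a Shape holding their mesh and texture data.
    if node.get("type") == "Mesh":
        sprite["shapes"].append({
            "mesh": node.get("mesh", {}),
            "texture": node.get("texture", ""),
        })
    # Recurse: sprites created from the second call onward become children.
    for child in node.get("children", []):
        sprite["children"].append(convert_node(child))
    return sprite
```

Because the function calls itself on each child, an arbitrarily deep 3D node tree maps onto an equally deep Sprite tree in a single pass.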
  • The step of binarizing the 3D data converted in Fig. 2 (S12) and the step of reading the binarized 3D data for use in GUI contents (S13) are executed by the importer, and the importer is executed following the converter.
  • The classified properties can be largely divided into Root, Creator, and Instance.
  • The File Header and Scene Root of the binary file are classified as Root; every child Class included in the Scene (an Instance in the case of reusing exception objects) is classified as Creator; and when an already input/output object is reused by registering it through its name, it is classified as an Instance. If an object has a name, the name is stored as a hash value, and everything else is distinguished by the hash value of each property.
  • the first information stored in the binary file is Header information.
  • The first id stored in the binary file stores the ClassID, and the position from the ClassID to the Version Path is stored in offset. The File Header (id, offset, overload) is classified as Root; the hash value of Root is stored in id, and the position occupied from id to overload is stored in offset.
  • Name (id, offset, overload, Name Length, Name str): the file name information is classified as Name; the hash value of Name is stored in id, the position occupied from id to Name str is stored in offset, and the length of the file name and the file name itself are stored.
  • Version (id, offset, overload, Major, Minor, Patch): the hash value of Version is stored in id, the position occupied from id to Patch is stored in offset, and the version information of the GUI engine performing the export is stored.
  • Version is the same as the GUI engine version and is information used to determine compatibility through version comparison.
  • the next step is to save Scene information to a binary file.
  • Scene is the parent of Root Sprite in the GUI engine and the highest parent in the hierarchical structure.
  • Root Property: a total of 8 bytes (id, offset) are initialized to 0.
  • Length is 4 bytes, and a string occupies as many bytes as its length.
  • Version Property: a total of 9 bytes (id, offset, overload) are initialized to 0.
  • Since Scene is a root property in the hierarchical structure, a total of 9 bytes (id, offset, overload) are initialized to 0.
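The id/offset property records described above could be serialized roughly as follows. The 4-byte field widths and the CRC32 stand-in for the name hash are assumptions for illustration; the patent does not specify the hash function, and the real records also carry the overload byte and typed payloads.

```python
import struct
import zlib

def write_property(name, payload):
    """Serialize one property record: id (name hash), offset, payload.

    id is a 4-byte hash of the property name; offset records the bytes
    occupied from id through the end of the payload, so a reader can
    skip to the next record without understanding the payload.
    """
    ident = zlib.crc32(name.encode())      # stand-in for the name hash
    offset = 4 + 4 + len(payload)          # id + offset fields + payload
    return struct.pack("<II", ident, offset) + payload

def read_property(blob, pos=0):
    """Read one record; return (id, payload, position of next record)."""
    ident, offset = struct.unpack_from("<II", blob, pos)
    payload = blob[pos + 8 : pos + offset]
    return ident, payload, pos + offset
```

Storing the occupied length in offset is what lets the importer walk the file with fixed-size reads, skipping records it does not recognize.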
  • Fig. 3 is a configuration diagram of the 3D data conversion and use system for high-speed 3D rendering according to the present invention.
  • The 3D data conversion and use system for high-speed 3D rendering of the present invention includes a converter 100 that analyzes the received 3D data structure, converts it into the GUI engine hierarchy, and transmits the converted hierarchy to the importer, and an importer that binarizes the 3D data converted to the GUI engine hierarchy received from the converter, stores it in the GUI content module 250, reads the binarized 3D data from the GUI content module 250, and transmits it to the GUI engine module.
  • The system further comprises an OpenGL ES module 400 that receives and stores the GUI Class from the GUI engine, generates Command Parameters while traversing the classes, and transmits the generated Command Parameters to the GPU; a GPU 500 that receives the Command Parameters from the OpenGL ES module and controls their display; and a Display 600 that expresses the received Command Parameters.
  • Fig. 4 is a basic structure diagram of the 3D Data applied to the present invention.
  • a scene is at the top of the node hierarchy, has one root node, and this root node can include 3D related mesh information, light information, and camera information.
  • the Mesh information may have Animation, Effect, and Texture information
  • the Root Node may inspect the type of the child while traversing the children, and the types of possible children include Mesh, Light, and Camera.
  • If the child type is Mesh, mesh data such as vertex coordinates, normal coordinates, UV coordinates, translate, rotate, scale, animation information, texture information, and effect information are included.
  • the camera information includes location information of the object viewed by the camera, viewpoint, and Fov information.
  • The present invention configured as described above quickly expresses the images required for an instrument panel as 3D images in systems that require one, such as cars, allowing the user to quickly receive the desired information as a 3D image, and can therefore be widely used in devices such as vehicles, drones, ships, and aircraft.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A 3D data conversion and use method for high speed 3D rendering, according to the present invention, comprises the steps of: analyzing a 3D data structure and converting same into a hierarchical structure of a GUI engine; binarizing converted 3D data; reading the binarized 3D data to use same in a GUI content; and registering and processing, for the read 3D binary data, the point coordinates of a 3D model, a camera, light, a texture to be shown on the surface of the 3D model, etc. by using a graphics library function stored in an Open Graphics Library for Embedded Systems (OpenGL ES), and then outputting the read 3D binary data to a screen.

Description

3D Data Conversion and Use Method for High-Speed 3D Rendering
본 발명은 GUI에 사용되는 3D 렌더링을 고속으로 처리하는 것에 관한 것으로 복잡하고 대용량의 3D Data를 고속으로 차량의 디스플레이 화면에 표시할 수 있도록 지원하는 것에 관한 것이다.The present invention relates to high-speed processing of 3D rendering used in a GUI, and relates to supporting display of complex and large-capacity 3D data on a display screen of a vehicle at high speed.
본 발명관 관련된 종래 기술은 대한민국 등록특허 제10-0932977호(2009. 12. 21. 공고)에 게시되어 있는 것이다. 도 1은 상기 종래의 입체 영상 표시 장치의 구성도이다. 상기도 1에서 종래의 입체 영상 표시 장치는 표시부(100), 배리어(100'), 주사 구동부(200), 데이터 구동부(300), 광원(400), 광원 제어부(500), 타이밍 제어부(600) 및 데이터 변환부(700)를 포함한다. 광원(400)은 면 광원으로 형성되어 표시부(100)의 후면에 형성되나, 편의상 광원(400)이 표시부(100)의 하단에 형성되는 것으로 나타내었다. 또한, 표시부(100)는 선택 신호를 전달하는 복수의 주사선(도시하지 않았음), 복수의 주사선과 절여되어 교차하도록 형성되고 데이터 신호를 전달하는 복수의 데이터선(도시하지 않았음) 및 주사선과 데이터선의 교차점에 형성된 복수의 부화소(도시하지 않았음)를 포함한다. 본 실시예에서는 적색(R) 표시를 위한 적색 부화소, 녹색(G) 표시를 위한 녹색 부화소 및 청색(B) 표시를 위한 청색 부화소가 함께 하나의 화소를 형성하는 것으로 가정한다. 또한, 본 실시예에서는 표시부(100)의 복수의 화소들은 좌안용 영상에 대응하는 화소(이하, '좌안용 화소'라 함) 및 우완용 영상에 대응하는 화소(이하, '우완용 화소'라 함)를 포함한다. 여기서, 좌안용 화소 및 우안용 화소는 서로 반복되어 배열되도록 형성된다. 구체적으로 좌안용 화소 및 우안용 화소는 서로 평행하게 반복되도록 배열되어 스트라이프 형태 또는 지그 재그 형태를 형성할 수 있다. 이러한 좌안용 화소 및 우안용 화소의 배열은 배리어(100')에 따라 적합하게 변경될 수 있다. 배리어(100')는 표시부(100)의 어느 일면에 배치되며, 표시부(100)의 좌안용 화소 및 우안용 화소의 배열 방법에 대응하도록 형성된 불투명 영역(도시하지 않았음)들과 투명 영역(도시하지 않았음)들을 포함한다. 배리어(100')는 불투명 영역들과 투명 영역들을 이용하여 표시부(100)의 좌안용 화소 및 우안용 화소에서 각각 투사되는 좌안 영상과 우안 영상을 각각 관찰자의 좌안 방향과 우안 방향으로 분리하여 제공한다. 배리어(100')의 불투명 영역들과 투명 영역들은 표시부(100)의 좌안용 화소 및 우안용 하소의 배열 방법에 따라 스트라이프 형태 또는 지그재그 형태로 형성될 수 있다. 또한. 표시부(100) 및 배리어(100')를 통해 관찰자가 입체 영상을 느끼는 방법을 설명하면 표시부(100) 및 배리어(100')의 I-I' 방향으로 절단한 단면을 통해 관찰자가 좌안 화소 및 우안 화소를 통해 입체 영상을 관찰하는 모습을 보여준다. 또한 표시부(100)는 반복하여 배열된 복수의 좌안용 화소(150) 및 복수의 우안용 화소(160)를 포함하며, 배리어(100')는 복수의 좌안용 화소(150) 및 복수의 우안용 화소(160)의 배열 방향과 동일한 방향으로 반복하여 평행하게 배열된 불투명 영역(150')과 투명 영역(160')을 포함한다. 표시부(100)의 좌안용 화소(150)는 투명 영역(160')을 통해 좌안 영상을 좌안(180)에 투사하며, 표시부(100)의 우안용 화소(160)는 배리어(100')의 투명 영역(160')을 통해 우안 영상을 우안(170)에 투사한다. 배리어(100')의 불투명 영역(150')은 표시부(100)의 좌안용 화소(150) 및 우안용 하소(160)가 투명 영역(160')을 통해 각각 좌안 및 우안에 영상을 투사할 수 있도록 광 투사 경로를 형성한다. 또한, 좌안용 화소(150)로부터 투사되는 좌안용 영상은 우안용 영상에 대하여 소정의 디스패러티를 갖는 영상으로 형성되며, 우안용 화소(160)로부터 투사되는 우안용 영상은 좌안용 영상에 대하여 소정의 디스패러티를 갖는 영상으로 형성된다. 
따라서, 관찰자는 좌안용 화소(150)로부터 좌안용 영상 및 우안용 화소(160)로부터 투사되는 우안용 영상을 각각 관찰자의 좌안 및 우안에서 인식할 때, 실체 입체 대상물을 좌안 우안을 통해 보는 것과 같은 깊이 정보를 얻게 되어 입체감을 느끼게 된다. 또한, 주사 구동부(200)는 타이밍 제어부(600)로부터 출력되는 제어 신호(Sg)에 응답하여 선택신호를 순차적으로 생성하여 표시부(100)의 주사선(S1~Sn)에 각각 인가한다. 데이터 구동부(300)는 인가되는 입체 영상 데이터를 표시부(100)의 데이터선(D1~Dm)에 인가하기 위한 아날로그데이터 전압으로 변환하고, 타이밍 제어부(600)로부터 입력되는 제어 신호(Sd)에 응답하여 변환된 아날로그 데이터 전압을 데이터선(D1~Dm)에 인가한다. 광원(400)은 적색(R), 녹색(G), 청색(B)의 발광 다이오드(도시하지 않았음)를 포함하며, 각각 적색(R), 녹색(G), 청색(B)에 해당하는 광을 표시부(100)에 출력한다. 광원(500)의 적색(R), 녹색(G), 청색(B)의 발광 다이오드는 각각 표시부(100)의 R 부화소, G 부화소 및 B 부화소로 광을 출력한다. 광원 제어부(500)는 타이밍 제어부(600)로부터 출력되는 제어 신호(Sb)에 응답하여 광원(500)의 발광 다이오드의 점등 시기를 제어한다. 이때, 데이터 구동부(300)로부터 데이터 신호를 데이터선에 공급하는 기간과 광원 제어부(500)에 의해 적색(R), 녹색(G), 청색(B)의 발광 다이오드를 점등하는 기간은 타이밍 제어부(600)에 의해 제공되는 제어 신호에 의해 동기 될 수 있다. 타이밍 제어부(600)는 외부로부터 입력되는 수평 동기 신호(Hsync) 및 수직 동기 신호(Vsync), 데이터 변환부(700)로부터 입력되는 입체 영상 신호 데이터에 대응하여 , 입체 영상 신호 데이터 및 생성된 제어 신호(Sg, Sd, Sb)를 각각 주사 구동부(200), 데이터 구동부(300) 및 광원 제어부(500)에 공급한다. 데이터 변환부(700)는 입력되는 데이터(DATA)를 입체 영상 데이터로 변환하여 타이밍 제어부(600)에 전달한다. 본 실시예에서는 데이터 변환부(700)에 입력되는 데이터(DATA)는 3D 영상 콘텐츠를 포함하는 데이터(이하, '3D 영상 데이터'라 함)이며, 입체 영상 데이터는 표시부(100)의 좌안용 하소 및 우안용 화소에 각각 대응하는 좌안 영상 데이터 및 우안 영상 데이터를 포함한다. 또한, 본 실시예에서는 3D 영상 데이터는 좌표 정보(즉, X좌표 및 Y좌표) 및 해당 좌표에 대응하는 색상 정보를 포함한다. 여기서, 색상 정보는 색 정보 또는 텍스처 좌표 값 등을 포함한다. 한편, 이와 같이 평면 영상용 3D 영상 데이터를 입체 영상 데이터로 변환하는 데이터 변환부(700)는 그래픽 가속칩 등의 형태로 구현될 수 있다. 이하에서는 데이터 변환부(700)를 지오매트릭 엔진부(710), 렌더링 엔진부(720) 및 프레임 메모리부(730)를 포함한다. 또한 상기 렌더링 엔진부(720)는 시작 X좌표 계산부(721), X좌표 증가부(722), 시작 값 계산부(723), 색상 정보 증가부(724) 및 메모리 제어부(725)를 포함한다. 상기 시작 X좌표 계산부(721)는 좌/우안 선택 신호 및 소정의 라인(즉, 일정한 Y 좌표를 가짐)에 대한 시작 X좌표(이하, '시작 X 좌표'라 함)를 입력받아, 좌안 선택 신호에 대응하는 시작 X좌표 또는 우안 선택 신호에 대응하는 시작 X좌표를 생성하여 X 좌표 증가부(722)에 전송한다. 또한, X좌표 증가부(722)는 시작 X좌표 계산부(721)로부터 전송되는 시작 X좌표를 기준으로 2씩 증가시키면서 X좌표를 출력한다. 즉, 본 종래기술의 실시예에서는 프레임 메모리부(730)에 쓰여지는(저장되는) 좌표에 대한 값만을 프레임 메모리부(730)에 저장하기 위해 X좌표를 2씩 증가시킨다. 
또한, 시작 값 계산부(723)는 좌/우안 선택 신호 및 소정의 라인(즉, 일정한 Y좌표)에 대한 색상 정보 시작 값(즉, 색정보 시작 값 또는 텍스처 좌표 시작 값)(이하, '색상 정보 시작 값'이라 함)을 입력받아, 좌안 선택 신호에 대응하는 색상 정보 시작 값 또는 우안 선택 신호에 대응하는 색상 정보 시작 값을 생성하여 색상 정보 증가부(724)에 전송한다. 또한, 색상 정보 증가부(724)는 시작 X좌표, 끝 X좌표(소정의 한 라인에 대한 끝 X 좌표를 말함, 이하 동일함), 색상정보 시작 값 및 소정의 한 라인에 대한 색상 정보 끝 값(즉, 색 정보 시작 값 또는 텍스처 좌표 끝 값)(이하, '색상 정보 끝 값'이라 함)을 입력받아 색상 정보 증가치를 계산하며, 이 색상 정보 증가치를 색상 정보 시작값에 더하여 색상 정보를 생성한다. 여기서, 입력받은 시작 X좌표 및 끝 X좌표를 이용하여 시작 X좌표와 끝 X좌표간의 거리를 계산하고, 입력받은 색상 정보 시작 값 및 색상 정보 끝 값을 이용하여 색상 정보 시작 값과 색상 정보 끝 값간의 값 차이를 계산하며, 이 두 계산 값과 X 좌표 증가부(722)에서 X 좌표가 2씩 증가하는 것을 고려하여 최종적으로 색상 정보 증가치(즉, 색 정보 증가치, 텍스터 좌표 증가치)를 계산한다. 즉, X 좌표 증가부(722)에서는 X 좌표가 2씩 증가하므로, 색상 정보 증가부(724)는 시작 X 좌표와 끝 X 좌표간의 거리 및 색상 정보 시작 값과 색상 정보 끝 값간의 값 차이를 이용하여 계산된 증가치에 2배하여, 최종적으로 색상 정보 증가치를 계산한다. 그리고, 색상 정보 증가부(724)는 색상 정보 시작 값에 상기 계산된 색장 정보 증가치를 더하여 색상 정보를 생성하여 출력한다. 여기서, 시작 X 좌표와 끝 X 좌표간의 거리 및 색상 정보 시작 값과 색상 정보 끝 값간의 값 차이를 이용하여 증가치를 계산하는 방법은 당업자라면 상기 설명을 바탕으로 용이하게 알 수 있으므로 이하 구체적 설명은 생략한다. 다음으로, 메모리 제어부(725)는 소정의 Y 좌표, X 좌표 증가부(722)로부터 출력되는 X 좌표 및 색상 정보 증가부(724)로부터 출력되는 색상 정보를 입력받으며, 입력받은 Y 좌표, X 좌표를 기초로 하여 색상 정보를 프레임 메모리부(730)에 저장되도록 제어한다. 이때, 입력받은 Y 좌표, X 좌표 및 색상 정보는 프레임 메모리부(730)에 쓰일(저장될) 값들이므로, 메모리 제어부(725)는 쓰기 활성화 신호(W)를 연속적으로 생성하여 프레임 메모리부(730)에 고속으로 색상 정보가 저장될 수 있다. The prior art related to the present invention is posted in Republic of Korea Patent Registration No. 10-0932977 (2009. 12. 21. notice). 1 is a configuration diagram of the conventional stereoscopic image display device. 1, the conventional stereoscopic image display device includes a display unit 100, a barrier 100', a scan driver 200, a data driver 300, a light source 400, a light source controller 500, and a timing controller 600. and a data conversion unit 700 . The light source 400 is formed as a planar light source and is formed on the rear side of the display unit 100, but for convenience, it is shown that the light source 400 is formed on the lower side of the display unit 100. 
In addition, the display unit 100 includes a plurality of scan lines (not shown) for transmitting selection signals, a plurality of data lines (not shown) formed to intersect the scan lines and transmitting data signals, and a plurality of sub-pixels (not shown) formed at the intersections of the scan lines and data lines. In this embodiment, it is assumed that a red sub-pixel for red (R) display, a green sub-pixel for green (G) display, and a blue sub-pixel for blue (B) display together form one pixel. In addition, in this embodiment, the plurality of pixels of the display unit 100 include pixels corresponding to images for the left eye (hereinafter, 'pixels for the left eye') and pixels corresponding to images for the right eye (hereinafter, 'pixels for the right eye'). The pixels for the left eye and the pixels for the right eye are arranged so that they repeat; specifically, they may be arranged alternately in parallel with each other to form a stripe shape or a zigzag shape. The arrangement of the left-eye and right-eye pixels may be suitably changed according to the barrier 100'. The barrier 100' is disposed on one surface of the display unit 100 and includes opaque regions (not shown) and transparent regions (not shown) formed to correspond to the arrangement of the left-eye and right-eye pixels of the display unit 100. Using the opaque and transparent regions, the barrier 100' separates the left-eye image and the right-eye image projected from the left-eye and right-eye pixels of the display unit 100 and provides them in the directions of the viewer's left eye and right eye, respectively.
The opaque and transparent regions of the barrier 100' may be formed in a stripe shape or a zigzag shape according to the arrangement of the left-eye and right-eye pixels of the display unit 100. To explain how an observer perceives a 3D image through the display unit 100 and the barrier 100', a section cut along the I-I' direction of the display unit 100 and the barrier 100' shows how a stereoscopic image is observed through the left-eye and right-eye pixels. The display unit 100 includes a plurality of left-eye pixels 150 and a plurality of right-eye pixels 160 that are repeatedly arranged, and in the barrier 100' an opaque region 150' and a transparent region 160' are repeatedly arranged in parallel in the same direction as the arrangement direction of the pixels 150 and 160. The left-eye pixels 150 of the display unit 100 project the left-eye image onto the left eye 180 through the transparent region 160', and the right-eye pixels 160 of the display unit 100 project the right-eye image onto the right eye 170 through the transparent region 160' of the barrier 100'. The opaque region 150' of the barrier 100' forms a light projection path so that the left-eye pixels 150 and the right-eye pixels 160 of the display unit 100 project their images to the left eye and the right eye, respectively, through the transparent region 160'. The image projected from the left-eye pixel 150 is formed as an image having a predetermined disparity with respect to the right-eye image, and the image projected from the right-eye pixel 160 is formed as an image having a predetermined disparity with respect to the left-eye image. Therefore, when the observer's left and right eyes respectively perceive the left-eye image from the left-eye pixels 150 and the right-eye image projected from the right-eye pixels 160, the observer obtains the same depth information as when viewing an actual stereoscopic object through both eyes and thus feels a three-dimensional effect. In addition, the scan driver 200 sequentially generates selection signals in response to the control signal Sg output from the timing controller 600 and applies them to the scan lines S1 to Sn of the display unit 100. The data driver 300 converts the applied stereoscopic image data into analog data voltages to be applied to the data lines D1 to Dm of the display unit 100 and, in response to the control signal Sd input from the timing controller 600, applies the converted analog data voltages to the data lines D1 to Dm. The light source 400 includes red (R), green (G), and blue (B) light emitting diodes (not shown) and outputs light corresponding to red (R), green (G), and blue (B) to the display unit 100. The red (R), green (G), and blue (B) light emitting diodes of the light source 400 output light to the R, G, and B sub-pixels of the display unit 100, respectively. The light source controller 500 controls the lighting timing of the light emitting diodes of the light source 400 in response to the control signal Sb output from the timing controller 600. At this time, the period during which the data driver 300 supplies the data signals to the data lines and the period during which the light source controller 500 turns on the red (R), green (G), and blue (B) light emitting diodes can be synchronized by the control signals provided by the timing controller 600.
The timing controller 600 supplies the stereoscopic image signal data and the generated control signals Sg, Sd, and Sb to the scan driver 200, the data driver 300, and the light source controller 500, respectively, in response to the horizontal synchronization signal (Hsync) and vertical synchronization signal (Vsync) input from the outside and the stereoscopic image signal data input from the data conversion unit 700. The data conversion unit 700 converts input data (DATA) into stereoscopic image data and transmits it to the timing controller 600. In this embodiment, the data (DATA) input to the data conversion unit 700 is data including 3D image content (hereinafter, '3D image data'), and the stereoscopic image data includes left-eye image data and right-eye image data respectively corresponding to the left-eye and right-eye pixels of the display unit 100. In addition, in this embodiment the 3D image data includes coordinate information (i.e., X and Y coordinates) and color information corresponding to those coordinates; the color information includes color values or texture coordinate values. Meanwhile, the data conversion unit 700, which converts 3D image data for a flat image into stereoscopic image data in this way, may be implemented in the form of a graphics accelerator chip or the like. The data conversion unit 700 includes a geometry engine unit 710, a rendering engine unit 720, and a frame memory unit 730. The rendering engine unit 720 in turn includes a start X coordinate calculation unit 721, an X coordinate increasing unit 722, a start value calculation unit 723, a color information increasing unit 724, and a memory control unit 725.
The start X coordinate calculation unit 721 receives a left/right eye selection signal and a start X coordinate for a given line (i.e., a line with a constant Y coordinate; hereinafter 'start X coordinate'), generates the start X coordinate corresponding to the left-eye selection signal or the start X coordinate corresponding to the right-eye selection signal, and transmits it to the X coordinate increasing unit 722. The X coordinate increasing unit 722 outputs X coordinates, increasing by 2 from the start X coordinate transmitted from the start X coordinate calculation unit 721. That is, in this prior-art embodiment the X coordinate is increased by 2 so that only the values for the coordinates actually written (stored) to the frame memory unit 730 are stored there. In addition, the start value calculation unit 723 receives the left/right eye selection signal and a color information start value for a given line (i.e., a color start value or texture coordinate start value; hereinafter 'color information start value'), generates the color information start value corresponding to the left-eye selection signal or the right-eye selection signal, and transmits it to the color information increasing unit 724. The color information increasing unit 724 receives the start X coordinate, the end X coordinate (the end X coordinate of the given line; the same applies hereinafter), the color information start value, and the color information end value of the given line (i.e., a color end value or texture coordinate end value; hereinafter 'color information end value'), calculates a color information increment, and generates color information by adding this increment to the color information start value. Specifically, it calculates the distance between the start and end X coordinates from the input X coordinates, calculates the difference between the color information start and end values from the input color values, and then calculates the final color information increment (i.e., color increment or texture coordinate increment) from these two values, taking into account that the X coordinate increases by 2 in the X coordinate increasing unit 722. That is, since the X coordinate increases by 2 in the X coordinate increasing unit 722, the color information increasing unit 724 doubles the increment computed from the distance between the start and end X coordinates and the difference between the color information start and end values, obtaining the final color information increment. The color information increasing unit 724 then generates and outputs color information by adding the calculated color information increment to the color information start value. Since those skilled in the art can readily derive, from the above description, how to calculate the increment from the distance between the start and end X coordinates and the difference between the color information start and end values, a detailed description is omitted.
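The doubling described above can be sketched numerically. This is an illustrative reconstruction of the arithmetic only, not the patent's circuit: the function name and the use of floating point are assumptions of this sketch, while the hardware operates on fixed-point data signals.

```python
# Sketch (assumed arithmetic) of the color-information increment described above:
# the X coordinate advances by 2 per stored pixel, so the per-step increment is
# twice the per-pixel slope between the start and end values of a scan line.

def color_values_for_line(x_start, x_end, c_start, c_end):
    """Yield (x, color) pairs for one scan line, stepping X by 2."""
    span = x_end - x_start
    if span == 0:
        yield x_start, c_start
        return
    step = 2 * (c_end - c_start) / span   # doubled because X increases by 2
    color = c_start
    x = x_start
    while x <= x_end:
        yield x, color
        x += 2
        color += step

pairs = list(color_values_for_line(0, 10, 100.0, 200.0))
```

Running it over a line from x = 0 to 10 with the color value going from 100 to 200 yields a value at every even X and lands exactly on the line's end value.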
Next, the memory control unit 725 receives a given Y coordinate, the X coordinate output from the X coordinate increasing unit 722, and the color information output from the color information increasing unit 724, and controls the color information to be stored in the frame memory unit 730 based on the received Y and X coordinates. Since the received Y coordinate, X coordinate, and color information are all values to be written (stored) to the frame memory unit 730, the memory control unit 725 continuously generates a write enable signal (W) so that the color information can be stored in the frame memory unit 730 at high speed.
In addition, in the prior art a 3D screen is generally constructed by using a 3D modeling tool such as 3D MAX or Maya to create text-based 3D output in FBX, OBJ, or DAE format, which the content then loads and displays at a designated position. The 3D output created in this way consists of the point coordinates of the model, the image information to be displayed on each face, the position and angle of the viewer, the position and intensity of the light, and animation parameter values. Because 3D rendering requires a very large amount of computation, it is generally supported in environments with a GPU rather than in CPU-only environments, and a standardized graphics library such as OpenGL ES is used to access the GPU. When a request to display output created with such a 3D modeling tool occurs, the file must be read, analyzed, and processed so that it can be applied to a graphics library such as OpenGL ES. The commonly known 3D formats FBX, OBJ, and DAE, similarly to XML processing, require string analysis to extract the coordinate information, face information, camera information, light information, and animation information needed for 3D representation. Once the analysis of the 3D data is complete, OpenGL ES functions are called through a graphics library that supports 3D representation, such as OpenGL ES, so that the 3D data can be processed by the GPU, and through these functions the data can be displayed on screen in three-dimensional form.
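The string-analysis step described above can be illustrated with the simplest of the three formats. The sketch below parses the line-oriented OBJ format ("v x y z" vertex lines, "f i j k" face lines with 1-based indices); it is a minimal illustration of why text parsing is costly, not the converter used by the invention, and FBX/DAE parsing (DAE being XML) is considerably more involved.

```python
# Minimal sketch of the string analysis step for a text-based 3D format.
# Every coordinate passes through line splitting, tokenizing, and number
# conversion, which is the time-consuming step the invention avoids.

def parse_obj(text):
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            # vertex line: "v x y z"
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":
            # face line: keep only the vertex index of each "v/vt/vn" token
            faces.append(tuple(int(p.split("/")[0]) for p in parts[1:]))
    return vertices, faces

verts, faces = parse_obj("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n")
```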
The standard GUI applied to the prior art configured as above analyzes 3D data in real time and converts it into the hierarchical structure of the corresponding GUI engine. This conversion into the GUI engine hierarchy is repeated on every execution, so it takes a long time until the 3D data is output. For example, on a car's digital instrument cluster using an existing GUI engine, the 3D information that should appear on screen the moment the ignition is turned on is not output immediately but only several seconds after starting, causing a noticeable delay. In addition, for 3D rendering, the 3D modeling data must be analyzed to extract the Mesh (coordinate information), Camera (viewing angle information), Light (light direction and intensity information), and Animation (movement information of each model). This extraction of 3D data goes through a string analysis step, which is time-consuming, and therefore 3D rendering requires a long loading time. Furthermore, once the 3D data has been extracted, the corresponding information must be fed into a graphics library such as OpenGL ES; but because the input format of the graphics library differs from that of the 3D modeling data, the 3D modeling data must be converted, which again takes a long time. Since the analysis, conversion, and application of 3D data takes a long time in this way, in the case of a car's digital instrument cluster it may take a long time from starting the engine until the screen appears on the instrument cluster. Therefore, to solve the above problems of the prior art, the present invention converts text-based 3D modeling data into the GUI engine structure and into the format used by a graphics library such as OpenGL ES, and binarizes it, enabling fast loading and fast rendering.
To achieve the above object, the 3D data conversion and use method for high-speed 3D rendering of the present invention comprises the steps of analyzing the 3D data structure and converting it into the hierarchical structure of a GUI engine; binarizing the converted 3D data; reading the binarized 3D data for use in GUI content; and outputting the read 3D binary data to the screen using the functions of a graphics library such as OpenGL ES (Open Graphics Library for Embedded Systems).
The 3D data conversion and use method for high-speed 3D rendering of the present invention configured as above has the effect of quickly processing user-generated 3D data, enabling smooth and realistic 3D representation without delay.
Fig. 1 is a configuration diagram of a conventional stereoscopic image display device;
Fig. 2 is a control flow chart of the 3D data conversion and use method for high-speed 3D rendering of the present invention;
Fig. 3 is a configuration diagram of the 3D data conversion and use system for high-speed 3D rendering of the present invention;
Fig. 4 is a diagram of the basic 3D data structure applied to the present invention.
The 3D data conversion and use method for high-speed 3D rendering of the present invention having the above object, and the 3D data conversion and use system for high-speed 3D rendering using the same, are described below with reference to Figs. 2 to 4.
Fig. 2 is a control flow chart of the 3D data conversion and use method for high-speed 3D rendering of the present invention. Referring to Fig. 2, the method comprises: a step (S11) in which the Converter analyzes the structure of the received 3D data, converts it into the hierarchical structure of the GUI engine, and transmits the converted 3D data to the Importer for binarization; a step (S12) in which the Importer binarizes the received converted 3D data and stores it in the GUI content module, and, after the GUI Engine module starts, reads the binarized 3D data previously stored in the GUI content module and transmits it to the GUI Engine module; a step (S13) in which the GUI Engine module generates GUI Classes based on the received binarized 3D data, stores the generated GUI Classes in the GUI content, and transmits them to the OpenGL ES module; a step (S14) in which the OpenGL ES module traverses the received GUI Classes, generates Command Parameters using standard graphics library functions such as those of OpenGL ES, and transmits the generated Command Parameters to the GPU; and a step (S15) in which the GPU processes the Command Parameters of the received GUI Classes and transmits the result to the display unit.
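The steps S11 to S15 can be summarized as a staged pipeline. The sketch below is purely illustrative: the function names and data shapes are hypothetical stand-ins for the Converter, Importer, GUI Engine module, OpenGL ES module, and GPU, and JSON stands in for the actual binary format so that the hand-off between stages stays visible.

```python
import json

def s11_convert(model_text):           # Converter: text 3D data -> GUI hierarchy
    return {"scene": {"root": model_text.strip()}}

def s12_binarize(hierarchy):           # Importer: hierarchy -> binary blob
    return json.dumps(hierarchy).encode("utf-8")

def s13_build_classes(blob):           # GUI Engine module: blob -> GUI Classes
    return json.loads(blob.decode("utf-8"))

def s14_command_parameters(gui_class): # OpenGL ES module: traverse -> commands
    return [("draw", node) for node in gui_class["scene"].values()]

def s15_render(commands):              # GPU: execute commands -> frame
    return f"{len(commands)} command(s) rendered"

blob = s12_binarize(s11_convert("cube"))   # S11-S12 run once, offline
frame = s15_render(s14_command_parameters(s13_build_classes(blob)))
```

The point of the split is that only S13 to S15 run at startup; the expensive text analysis (S11) and binarization (S12) are done once, ahead of time.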
Describing in detail the step (S11) of analyzing the 3D data structure and converting it into the hierarchical structure of the GUI engine: the 3D data is in a 3D file format published by Autodesk, such as FBX or DAE, and contains the model coordinates, scene, Camera, Light, and Animation information. In the basic 3D data structure, the Scene is the top of the node hierarchy and has one Root Node, and this Root Node can contain 3D Mesh information, Light information, and Camera information. The Mesh information can carry Animation, Effect, and Texture information. The Root Node can traverse its children and inspect each child's type; the possible child types are Mesh, Light, and Camera. If a child is of Mesh type, it contains the Mesh data: vertex coordinates, normal coordinates, UV coordinates, translate, rotate, scale, per-animation information, texture information, and effect information. Light information contains the coordinates of the light emission point and color information, and Camera information can contain the position of the object the camera looks at, viewpoint, and FOV information.
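The node hierarchy just described can be sketched as a small tree type. The class and field names below are hypothetical; they only mirror the structure named in the text: one Scene, one Root Node, and typed Mesh/Light/Camera children that are found by traversal.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                       # "Mesh", "Light", or "Camera" (or "Root")
    payload: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

@dataclass
class Scene:
    root: Node                      # the single Root Node at the top

def collect(node, kind, found=None):
    """Traverse a node's children and gather all nodes of one type."""
    found = [] if found is None else found
    if node.kind == kind:
        found.append(node)
    for child in node.children:
        collect(child, kind, found)
    return found

scene = Scene(root=Node("Root", children=[
    Node("Mesh", {"vertex": [(0, 0, 0)], "texture": "body.png"}),
    Node("Light", {"position": (0, 5, 0), "color": (1, 1, 1)}),
    Node("Camera", {"viewpoint": (0, 0, -10), "fov": 60}),
]))
meshes = collect(scene.root, "Mesh")
```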
Describing the GUI hierarchy for executing 3D data in the GUI engine: the GUI engine is structured as Layer → Scene → Sprite … → Shape. A Layer has one Scene, a Scene can contain N Sprites, and a Sprite can have N child Sprites and Shapes. A Sprite has a Scene or a Sprite as its parent display object and can carry Transform information, Camera information, Animation information, and a Shape. A Shape has a Sprite as its parent display object and can carry Mesh information, Effect information (fresnel, phong, projection, shape), and Texture information.
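A minimal sketch of that containment follows. The class definitions are hypothetical (the engine's real classes are not published in this text); they only encode the Layer → Scene → Sprite → Shape relationships stated above.

```python
class DisplayObject:
    """Common base: every GUI object can hold children."""
    def __init__(self):
        self.children = []

class Shape(DisplayObject):
    def __init__(self, mesh=None, effect=None, texture=None):
        super().__init__()
        self.mesh, self.effect, self.texture = mesh, effect, texture

class Sprite(DisplayObject):
    def __init__(self, transform=None, camera=None, animation=None):
        super().__init__()
        self.transform, self.camera, self.animation = transform, camera, animation

class Scene(DisplayObject):
    pass                            # holds N Sprites via self.children

class Layer(DisplayObject):
    def __init__(self, scene):
        super().__init__()
        self.children = [scene]     # a Layer has exactly one Scene

root = Sprite()
root.children.append(Shape(mesh="cube", effect="phong", texture="skin.png"))
scene = Scene()
scene.children.append(root)
layer = Layer(scene)
```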
Therefore, to represent 3D data using the GUI engine, the 3D data structure must be applied to a graphics library such as OpenGL ES and, for fast data processing, converted into the GUI engine hierarchy using the Converter. Describing the Converter applied to the present invention: since the FBX, OBJ, and DAE formats are public, the 3D Scene Root Node can be obtained by analyzing the 3D modeling data, and information such as Mesh, Light, and Camera can be obtained by traversing the child nodes connected to the 3D Scene Root Node. In other words, the contents of the 3D modeling data are arranged in a one-to-one correspondence with the GUI Classes used in the GUI engine. Specifically, a GUI Scene is created first; the 3D data is read through file I/O, the 3D Scene Root Node is taken from the 3D data, and a GUI Sprite is created. This first GUI Sprite is the Root Sprite; the name, transform information, mesh information, and animation information of the 3D Scene Root Node are imported and stored in it, and the Root Sprite created in this way is set and stored in the GUI Scene. The Converter implements this operation as a recursive function and calls it whenever the 3D data has children, so that every GUI Sprite created from the second one onward becomes a Child Sprite. For each Child Sprite, a type check is performed: if it is of Mesh type, a GUI Shape is created and the Mesh data of the 3D data is stored in the Shape; the stored information is created and saved as instances of the GUI's Mesh Class (vertex information, normal information, UV information), Animation Class, Effect Class, and Texture Class. If the Child Sprite's type is Camera, a GUI Shape is created, an instance of the GUI Camera Class is created in the Shape, and the name information, target information, viewpoint information, and Far information are stored.
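The recursive conversion described above can be sketched as follows. Dictionaries stand in for the GUI Sprite and Shape classes, and the field names are hypothetical; the point is the shape of the recursion: the Root Node becomes the Root Sprite, each child node becomes a Child Sprite via the same function, and Mesh or Camera children additionally receive a Shape.

```python
def make_sprite(node):
    sprite = {"name": node.get("name"), "shapes": [], "children": []}
    kind = node.get("kind")
    if kind == "Mesh":
        sprite["shapes"].append({"mesh": node.get("data")})     # Mesh -> Shape
    elif kind == "Camera":
        sprite["shapes"].append({"camera": node.get("data")})   # Camera -> Shape
    for child in node.get("children", []):
        sprite["children"].append(make_sprite(child))           # recursive call
    return sprite

def convert(root_node):
    # the first sprite created is the Root Sprite, stored in the GUI Scene
    return {"scene": make_sprite(root_node)}

gui = convert({"name": "Root", "kind": "Root", "children": [
    {"name": "body", "kind": "Mesh", "data": {"vertex": 8}},
    {"name": "cam", "kind": "Camera", "data": {"fov": 60}},
]})
```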
The step (S12) of binarizing the converted 3D data and the step (S13) of reading the binarized 3D data for use in GUI content, also shown in Fig. 2, are executed by the Importer. The Importer can broadly classify the properties produced by the Converter (including all children of the 3D data Scene Root Node) into Root, Creator, and Instance. The file header of the binary file and the Scene Root are classified as Root; every subordinate Class child contained in the Scene (or an Instance when an exception object is reused) is classified as Creator; and when an already input/output object is reused and registered through its name, it is classified as Instance. When an object has a name, the Name is identified by its hash value; everything else is identified by its own property hash value.
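The Root / Creator / Instance classification and the hash-based ids can be sketched as below. The patent does not name its hash function, so CRC32 is used here purely as a stand-in, and the record names are hypothetical.

```python
import zlib

def prop_id(name):
    # stand-in for Hash(name) -> numeric property id (hash choice is assumed)
    return zlib.crc32(name.encode("utf-8"))

def classify(record, seen_names):
    """Classify one serialized record the way the Importer does."""
    if record in ("FileHeader", "SceneRoot"):
        return "Root"                      # file header and Scene Root
    if record in seen_names:
        return "Instance"                  # reused object, registered by name
    seen_names.add(record)
    return "Creator"                       # first appearance of a Class child

seen = set()
kinds = [classify(r, seen) for r in
         ("FileHeader", "SceneRoot", "wheel_mesh", "wheel_mesh", "body_mesh")]
```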
The creation of the 3D binary data is described in detail below.
The first information stored in the binary file is the Header information.
id, offset : the id stored first in the binary file is the ClassID, and the extent occupied from the ClassID to the Version Patch is stored in offset;
id, offset, overload : the File Header is classified as Root; the hash value of Root is stored in id, and the extent occupied from id to overload is stored in offset;
Name id, Name offset, overload, Name Length, Name str : the file name information is identified as Name; the hash value of Name is stored in id, the extent from id to Name str is stored in offset, and the length of the file name and the file name itself are stored;
Version id, Version offset, overload, Major, Minor, Patch : the hash value of Version is stored in id, the extent occupied from id to Patch is stored in offset, and the version information of the GUI engine corresponding to the Export is stored, illustrated as in Diagram 1 below.
[Diagram 1 below]
Figure PCTKR2021014075-appb-I000001
When storing a Name string as in Diagram 1 above, the fields are as follows:
id : hash value of Name,
offset : offset from id to Name str,
overload : 0,
Name Length : 13,
Name str : "abcdefghijklm"
The Version is identical to the version of the GUI engine and is information used to check compatibility through version matching.
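The Name record above can be written out as bytes. The field order (id, offset, overload, Name Length, Name str) follows the text, but the byte widths (4-byte id/offset/length, 1-byte overload), the little-endian order, and the CRC32 stand-in hash are all assumptions of this sketch.

```python
import struct, zlib

def write_name_record(name: str) -> bytes:
    payload = name.encode("utf-8")
    body = struct.pack("<B", 0)                    # overload : 0
    body += struct.pack("<I", len(payload))        # Name Length : e.g. 13
    body += payload                                # Name str
    record_id = zlib.crc32(b"Name")                # id : hash value of "Name"
    offset = 4 + 4 + len(body)                     # extent from id to Name str
    return struct.pack("<II", record_id, offset) + body

record = write_name_record("abcdefghijklm")
```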
After the file header has been stored in the binary file, the Scene information is stored next.
The Scene is the parent of the Root Sprite in the GUI engine and the topmost parent in the hierarchy.
From the Scene down through the Root Sprite, Camera, Light, SceneProperty, Shape, Mesh, and so on, descending from parent to child, all information is stored in the binary file as in Diagram 2 below.
[Diagram 2 below]
Figure PCTKR2021014075-appb-I000002
또한 바이너리 파일에 Export 하는 시나리오를 바탕으로 설명하면,Also, if you explain based on the scenario exported to a binary file,
1. File Header을 저장하는(Root Property로 분류한다) 것으로1. To store File Header (classified as Root Property)
1). Root Property- - id, offset 총 8bytes를 0으로 초기화한다. One). Root Property- - id, offset Total 8bytes are initialized to 0.
2). Name Property - id, offset, overload 총 9bytes를 0으로 초기화한다. 2). Name Property - id, offset, overload Initializes a total of 9 bytes to 0.
3). Name Length, Name 문자열을 저장한다. Nength는 4 bytes 문자열은 length 만큼 byte를 차지하는 것이다. 3). Save Name Length, Name string. Nength is 4 bytes. A string occupies as many bytes as length.
4). 2). 에서 초기화 했던 시작 위치로 이동한 후 Name Property의 hash 값, Name Property가 차지하는 위치 overload 값을 저장한다. 4). 2). After moving to the starting position initialized in , the hash value of the Name Property and the position overload value occupied by the Name Property are saved.
5). Version Property - id, offset, overload 총 9bytes를 0으로 초기화 한다. 5). Version Property - Initialize a total of 9 bytes of id, offset, and overload to 0.
6). 버전 정보를 저장한다. id : Version Property hash 값, offset 값, Major : 0, Minor : 9, Patch : 0, 6). Save version information. id : Version Property hash value, offset value, Major : 0, Minor : 9, Patch : 0,
7) Move to the start position initialized in step 5) and write the Version Property's hash value, the position the Version Property occupies, and the overload value.
8) Move to the start position initialized in step 1) and write the Root Property: the File Header Class ID (0xFF584754) and the final offset of the Root Property.
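Steps 1) through 4) and 8) share one pattern: reserve a zero-filled slot, write the body, then seek back and patch the slot with the hash and offset. A hedged sketch of that pattern (only the 8-byte and 9-byte slot sizes, the 4-byte length prefix, and the 0xFF584754 Class ID come from the text; CRC-32 stands in for the unspecified hash function, and the exact field layout is an assumption):

```python
import io
import struct
import zlib

FILE_HEADER_CLASS_ID = 0xFF584754  # Class ID given in step 8)

def export_file_header(f: io.BytesIO, name: str) -> None:
    root_pos = f.tell()
    f.write(b"\x00" * 8)                  # 1) Root Property slot: id + offset, zeroed
    name_pos = f.tell()
    f.write(b"\x00" * 9)                  # 2) Name Property slot: id + offset + overload
    data = name.encode("utf-8")
    f.write(struct.pack("<I", len(data))) # 3) Name Length (4 bytes) ...
    f.write(data)                         #    ... followed by the Name string
    end = f.tell()
    f.seek(name_pos)                      # 4) back-patch the Name Property slot
    f.write(struct.pack("<IIB", zlib.crc32(b"Name"), name_pos, end - name_pos))
    f.seek(root_pos)                      # 8) back-patch the Root Property slot
    f.write(struct.pack("<II", FILE_HEADER_CLASS_ID, end))
    f.seek(end)

f = io.BytesIO()
export_file_header(f, "Demo")
raw = f.getvalue()
```

Steps 5) through 7) would patch a Version Property slot in exactly the same reserve-then-backfill manner.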
2. Saving the Scene (classified as a Root Property; unlike the File Header id, its hash value is the hash of the string "Scene"):
1) Scene: initialize the id and offset fields, 8 bytes in total, to 0.
2) Because the Scene is a Root Property in the hierarchy, initialize the id, offset, and overload fields, 9 bytes in total, to 0.
3) The Properties contained in the Scene are then saved in the manner shown in the three images below.
[Three images below]
Figure PCTKR2021014075-appb-I000003
Figure PCTKR2021014075-appb-I000004
Figure PCTKR2021014075-appb-I000005
4) All values written to the file are stored in binary as Byte, Integer, Float, or Double values that the computer can read directly.
5) Once every Property value has been traversed and written to the file, the 3D binary data is complete, as shown in Figure 4 below.
[Figure 4 below]
Figure PCTKR2021014075-appb-I000006
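Steps 4) and 5) amount to traversing every Property value and writing it in one of the four machine-readable forms. A hedged sketch (the (type, value) pair representation is an assumption for illustration; only the Byte/Integer/Float/Double set comes from the text):

```python
import struct

# Format characters for the four value types named in step 4)
PACK = {"byte": "<B", "integer": "<i", "float": "<f", "double": "<d"}

def write_properties(buf: bytearray, props: list) -> None:
    """Traverse (type, value) pairs and append each as little-endian binary."""
    for kind, value in props:
        buf += struct.pack(PACK[kind], value)

buf = bytearray()
write_properties(buf, [
    ("byte", 1),            # e.g. a flag
    ("integer", 640),       # e.g. a size
    ("float", 0.5),         # e.g. a scale factor
    ("double", 3.14159),    # e.g. a precise angle
])
# 1 + 4 + 4 + 8 = 17 bytes written
```

Because every value is reduced to one of these fixed-width binary forms, the reader can consume the file without any text parsing.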
Fig. 3 is a block diagram of the 3D data conversion and use system for high-speed 3D rendering according to the present invention. As shown in Fig. 3, the system comprises a Converter (100) that analyzes the structure of received 3D Data, converts it into the hierarchical structure of the GUI engine, and transmits the converted GUI engine hierarchy to the Importer; an Importer (200) that binarizes the 3D Data converted into the GUI engine hierarchy received from the Converter, stores it in the GUI content module (250), and reads the binarized 3D Data from the GUI content module (250) to transmit it to the GUI Engine module; a GUI Engine (300) that creates GUI Classes from the binarized 3D Data received from the Importer, stores the created GUI Classes in the GUI content module (250), and transmits them to the OpenGL ES module; an OpenGL ES module (400) that receives and stores the GUI Classes from the GUI Engine, traverses the Classes to generate Command Parameters, and transmits the generated Command Parameters to the GPU; a GPU (500) that receives the Command Parameters from the OpenGL ES module and controls their presentation on the Display; and a Display (600) that presents the received Command Parameters.
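The dataflow Converter → Importer → GUI Engine → OpenGL ES → GPU described above can be sketched as a chain of stages (purely illustrative stubs; the function names and data shapes are assumptions, not the patented implementation):

```python
def converter(raw_3d_data: dict) -> dict:
    """Analyze the 3D Data structure and map it onto the GUI engine hierarchy."""
    return {"hierarchy": raw_3d_data}

def importer(hierarchy: dict) -> bytes:
    """Binarize the converted hierarchy for storage in the GUI content module."""
    return repr(hierarchy).encode("utf-8")

def gui_engine(binary: bytes) -> list:
    """Create GUI Classes from the binarized 3D Data."""
    return [("GuiClass", binary)]

def opengl_es(classes: list) -> list:
    """Traverse the Classes and generate Command Parameters for the GPU."""
    return [f"draw:{name}" for name, _ in classes]

# Each stage consumes the previous stage's output, mirroring Fig. 3.
frame = opengl_es(gui_engine(importer(converter({"mesh": []}))))
```

The point of the staged design is that the expensive structure analysis and binarization happen once, up front, so the per-frame path (GUI Engine → OpenGL ES → GPU) only traverses ready-made Classes.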
Fig. 4 is a diagram of the basic 3D Data structure applied to the present invention. As shown in Fig. 4, the Scene is the top of the node hierarchy and has one Root Node, and this Root Node may contain 3D-related Mesh information, Light information, and Camera information. The Mesh information may carry Animation, Effect, and Texture information. The Root Node traverses its children and checks each child's type; the possible child types are Mesh, Light, and Camera. If a child is of Mesh type, it contains the Mesh Data: Vertex coordinates, Normal coordinates, UV coordinates, Translate, Rotate, Scale, per-Animation information, Texture information, and Effect information. The Light information contains the coordinates and color of the light-emitting point, and the Camera information contains the position of the object the Camera looks at, the Viewpoint, and the Fov.
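The traversal in which the Root Node inspects each child's type can be sketched as follows (a minimal illustration; the Node class is an assumption, while the three possible child types come from the text):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                      # "Mesh", "Light", or "Camera"
    children: list = field(default_factory=list)

def collect(root: Node) -> dict:
    """Traverse all descendants and bucket them by the three possible types."""
    found = {"Mesh": [], "Light": [], "Camera": []}
    stack = [root]
    while stack:
        node = stack.pop()
        for child in node.children:
            if child.kind in found:    # check the child's type
                found[child.kind].append(child)
            stack.append(child)
    return found

root = Node("Root", [Node("Mesh"), Node("Light"), Node("Camera"), Node("Mesh")])
buckets = collect(root)
```

A real importer would, at the point of the type check, read the type-specific payload (Vertex/Normal/UV data for a Mesh, position and color for a Light, and so on).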
The present invention, configured as described above, rapidly presents the images required by an instrument panel as 3D images in systems that need one, such as automobiles, so that users can quickly receive the information they want as 3D images; it can therefore be widely used in devices such as vehicles, drones, ships, and aircraft.

Claims (7)

  1. A method of converting and using 3D data for high-speed 3D rendering in an automobile instrument panel, the method comprising:
    a step (S11) in which a Converter analyzes the structure of received 3D Data, converts it into the hierarchical structure of a GUI engine, and transmits the converted 3D Data to an Importer for binarization;
    a step (S12) in which the Importer binarizes the received converted 3D Data, stores it in a GUI content module, and, after the GUI Engine module starts, reads the binarized 3D Data previously stored in the GUI content module and transmits it to the GUI Engine module;
    a step (S13) in which the GUI Engine module creates GUI Classes from the received binarized 3D Data, stores the created GUI Classes in the GUI content, and transmits them to an OpenGL ES module;
    a step (S14) in which the OpenGL ES module traverses the Classes of the received GUI Classes, generates Command Parameters using Graphics Library functions, and transmits the generated Command Parameters to a GPU; and
    a step (S15) in which the GPU processes the Command Parameters of the received GUI Classes and transmits them to a display unit for presentation.
  2. The method of claim 1, wherein the 3D data is in a 3D file format published by Autodesk, such as FBX or DAE, and contains the Model's coordinates and its Scene, Camera, Light, and Animation information.
  3. The method of claim 1, wherein the basic structure of the 3D data has a Scene at the top and a Root Node below it, and the Root Node may contain 3D-related Mesh information, Light information, and Camera information.
  4. The method of claim 1, wherein the hierarchical structure of the GUI engine is Layer → Scene → Sprite … → Shape.
  5. The method of claim 4, wherein the Layer has one Scene, the Scene may contain N Sprites, and a Sprite may have N child Sprites and Shapes.
  6. A system for converting and using 3D data for high-speed 3D rendering in an automobile instrument panel, the system comprising:
    a Converter (100) that analyzes the structure of 3D Data, converts it into the hierarchical structure of a GUI engine, and transmits the converted GUI engine hierarchy to an Importer;
    an Importer (200) that binarizes the 3D Data converted into the GUI engine hierarchy received from the Converter, stores it in a GUI content module (250), and reads the binarized 3D Data from the GUI content module (250) to transmit it to the GUI Engine module;
    a GUI Engine module (300) that creates GUI Classes from the binarized 3D Data received from the Importer, stores the created GUI Classes in the GUI content module (250), and transmits them to an OpenGL ES module;
    an OpenGL ES module (400) that receives and stores the GUI Classes from the GUI Engine module, traverses the Classes to generate Command Parameters, and transmits the generated Command Parameters to a GPU;
    a GPU (500) that receives the Command Parameters from the OpenGL ES module and controls their presentation on a Display; and
    a Display (600) that presents the received Command Parameters.
  7. The system of claim 6, wherein the Converter uses the contents of the 3D Modeling Data in the GUI engine and creates and stores instances of the 3D-related Classes provided by the GUI Class, namely the Mesh Class, Camera Class, Light Class, Texture Class, and Animation Class.
PCT/KR2021/014075 2021-07-29 2021-10-13 3d data conversion and use method for high speed 3d rendering WO2023008647A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210100012A KR102635694B1 (en) 2021-07-29 2021-07-29 3D Data Transformation and Using Method for 3D Express Rendering
KR10-2021-0100012 2021-07-29

Publications (1)

Publication Number Publication Date
WO2023008647A1 true WO2023008647A1 (en) 2023-02-02

Family

ID=85087798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/014075 WO2023008647A1 (en) 2021-07-29 2021-10-13 3d data conversion and use method for high speed 3d rendering

Country Status (2)

Country Link
KR (1) KR102635694B1 (en)
WO (1) WO2023008647A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050080650A (en) * 2004-02-10 2005-08-17 삼성전자주식회사 Method and apparatus for high speed visualization of depth image-based 3d graphic data
US20120050300A1 (en) * 2004-07-08 2012-03-01 Stragent, Llc Architecture For Rendering Graphics On Output Devices Over Diverse Connections
KR20160120101A (en) * 2015-04-07 2016-10-17 엘지전자 주식회사 Vehicle terminal and control method thereof
KR101738434B1 (en) * 2016-02-19 2017-05-24 (주)컨트릭스랩 3d server providing 3d model data preprocessing and streaming, and method thereof
KR20200111976A (en) * 2019-03-20 2020-10-05 금오공과대학교 산학협력단 Method of Configure Lightweight Files for 3D geometry Visualization

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130133319A (en) * 2012-05-23 2013-12-09 삼성전자주식회사 Apparatus and method for authoring graphic user interface using 3d animations
KR102140504B1 (en) * 2020-01-16 2020-08-03 (주)그래피카 Digital Instrument Display Method of Vehicle and Apparatus thereof


Also Published As

Publication number Publication date
KR102635694B1 (en) 2024-02-13
KR20230018170A (en) 2023-02-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21952016

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE