KR20170066844A - Apparatus for processing graphics and operating method thereof, and terminal including same - Google Patents

Apparatus for processing graphics and operating method thereof, and terminal including same Download PDF

Info

Publication number
KR20170066844A
Authority
KR
South Korea
Prior art keywords
image frame
rendering
lod
module
scene
Prior art date
Application number
KR1020150172997A
Other languages
Korean (ko)
Inventor
박진홍
기선호
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사
Priority to KR1020150172997A
Publication of KR20170066844A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/36 Level of detail

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Telephone Function (AREA)

Abstract

A graphics processing apparatus according to an embodiment of the present invention includes a focal length extraction module for extracting a focal length of a first image frame, a depth map extraction module for extracting a depth map of the first image frame, a rendering LoD generation module for generating, based on the extracted focal length and depth map, a rendering level of detail (LoD) for a second image frame that is the next frame after the first image frame, and a rendering module that performs rendering on the second image frame by applying the rendering LoD based on a scene change between the first image frame and the second image frame.

Description

TECHNICAL FIELD [0001] The present invention relates to a graphics processing apparatus, a method of operating the same, and a terminal including the graphics processing apparatus.

An embodiment according to the concept of the present invention relates to a graphics processing apparatus, and more particularly, to a graphics processing apparatus capable of performing rendering by applying different rendering levels of detail (LoD) to regions or pixels in an image frame, a method of operating the same, and a terminal including the same.

A graphics processing apparatus (or graphics processing unit (GPU)) included in a terminal such as a PC, a notebook computer, a smartphone, or a tablet PC performs graphics processing for generating 2D or 3D images and performs image processing operations such as rendering.

Rendering is a process of creating an image from a scene. It is also used as a method of adding stereoscopic effect and realism to a two-dimensional image in consideration of information such as light source, position, color, shadow, and density change.

The quality of the image frame displayed through the display may vary depending on the rendering level of detail (LoD). The higher the rendering LoD, the higher the quality of the image frame; the lower the rendering LoD, the lower the quality of the image frame. On the other hand, the higher the rendering LoD, the higher the power consumption of the graphics processing apparatus and the lower its processing speed.

The time for which one image frame is displayed to the user is very short, and accordingly, the area viewed by the user can be limited to a specific area of the image frame, for example, an object displayed in the center area or the focus area. According to the conventional rendering method, since the same rendering LoD is applied to the entire image frame, it can be inefficient when a high rendering LoD is applied to all the areas.

According to an aspect of the present invention, there is provided a graphics processing apparatus capable of performing rendering on an image frame by applying different rendering LoDs for each region or each pixel in an image frame, and an operation method thereof.

A graphics processing apparatus according to an embodiment of the present invention includes a focal length extraction module for extracting a focal length of a first image frame, a depth map extraction module for extracting a depth map of the first image frame, a rendering LoD generation module for generating, based on the extracted focal length and depth map, a rendering level of detail (LoD) for a second image frame that is the next frame after the first image frame, and a rendering module that performs rendering on the second image frame by applying the rendering LoD based on a scene change between the first image frame and the second image frame.

The focal length extraction module and the depth map extraction module may extract the focal length and the depth map from the first image frame being rendered by the rendering module.

According to one embodiment, the graphics processing apparatus further comprises a blur area detection module for detecting a blur area from the rendered first image frame, and the rendering LoD generation module may generate the rendering LoD based on the focal length, the depth map, and the blur area.

According to one embodiment, the graphics processing apparatus further comprises a representative depth value extraction module for dividing the depth map into a plurality of tiles and extracting a representative depth value of each of the plurality of divided tiles, and the rendering LoD generation module may generate the rendering LoD based on the focal length and the representative depth value of each of the plurality of tiles.

The representative depth value of each of the plurality of tiles may be an average, median, or mode of the depth values included in that tile.

According to one embodiment, the graphics processing apparatus further comprises a scene change detection module for comparing scene identity between the first image frame and the second image frame, and if the scene of the first image frame and the scene of the second image frame are the same, the rendering module may perform rendering of the second image frame by applying the generated rendering LoD.

The case where the scene of the first image frame and the scene of the second image frame are the same may include a case where the degree of change between the scene of the first image frame and the scene of the second image frame is lower than a reference degree.

A method of operating a graphics processing apparatus according to an embodiment of the present invention includes extracting a focal length of a first image frame, extracting a depth map of the first image frame, generating, based on the extracted focal length and depth map, a rendering level of detail (LoD) for a second image frame that is the next frame after the first image frame, and performing rendering on the second image frame by applying the rendering LoD based on a scene change between the first image frame and the second image frame.

A terminal according to an exemplary embodiment of the present invention includes a graphics processing apparatus that renders a first image frame, a display unit that displays the rendered first image frame, and a controller that controls the graphics processing apparatus and the display unit. The graphics processing apparatus includes a focal length extraction module for extracting a focal length of the first image frame, a depth map extraction module for extracting a depth map of the first image frame, a rendering LoD generation module for generating, based on the extracted focal length and depth map, a rendering LoD for a second image frame that is the next frame after the first image frame, and a rendering module that performs rendering on the second image frame by applying the rendering LoD based on a scene change between the first image frame and the second image frame.

The graphics processing apparatus according to the embodiment of the present invention can lower the rendering LoD for a region of the image frame that is blurred because it lies away from the focal distance, or for a region of low perceptual interest, and can thereby reduce the power consumption of the graphics processing apparatus and improve its processing speed.

FIG. 1 is a schematic block diagram of a terminal according to an embodiment of the present invention.
FIGS. 2A and 2B are schematic block diagrams of a graphics processing apparatus according to an embodiment of the present invention.
FIG. 3 is a flowchart for explaining the operation of the graphics processing apparatus shown in FIG. 2A.
FIG. 4 is an illustration of a depth map extracted from an image frame being rendered.
FIGS. 5 and 6 are diagrams illustrating an operation in which the graphics processing apparatus according to an embodiment of the present invention sets a rendering LoD based on a focal length and a depth map.
FIG. 7 is a flowchart for explaining another embodiment of the operation in which the graphics processing apparatus sets the rendering LoD based on the focal length and the depth map.
FIG. 8 is an exemplary view showing the operation of the graphics processing apparatus shown in FIG. 7.
FIGS. 9 to 11 show the operation of the graphics processing apparatus shown in FIG. 2A in more detail.
FIG. 12 is a flowchart for explaining the operation of the graphics processing apparatus shown in FIG. 2B.
FIGS. 13 and 14 show the operation of the graphics processing apparatus shown in FIG. 2B in more detail.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings, in which like reference numerals designate identical or similar elements and redundant descriptions thereof are omitted. The suffixes "module" and "unit" used for components in the following description are given or used interchangeably only for ease of drafting the specification and do not by themselves carry distinct meanings or roles. In describing the embodiments disclosed herein, detailed descriptions of related known art are omitted when they could obscure the gist of the embodiments. The accompanying drawings are provided only to facilitate understanding of the embodiments disclosed herein and do not limit the technical idea disclosed herein, which should be understood to cover all modifications, equivalents, and alternatives falling within its spirit and scope.

Terms including ordinals, such as first, second, etc., may be used to describe various elements, but the elements are not limited to these terms. The terms are used only for the purpose of distinguishing one component from another.

When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that no intervening elements are present.

The singular expressions include plural expressions unless the context clearly dictates otherwise.

In the present application, terms such as "comprises" and "having" are intended to specify the presence of a stated feature, number, step, operation, element, component, or combination thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

The terminal described in this specification may include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, and a wearable device (e.g., a smartwatch, smart glasses, or a head mounted display (HMD)).

However, it will be readily apparent to those skilled in the art that the configurations according to the embodiments described herein may also be applied to fixed terminals such as a digital TV, a desktop computer, and a digital signage.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings attached hereto.

FIG. 1 is a schematic block diagram of a terminal according to an embodiment of the present invention.

The terminal 100 may include a wireless communication unit 110, an input unit 120, a sensing unit 140, an output unit 150, an interface unit 160, a memory 170, a control unit 180, and a power supply unit 190. The components shown in FIG. 1 are not essential for implementing the terminal, so the terminal described herein may have more or fewer components than those listed above.

The wireless communication unit 110 may include one or more modules that enable wireless communication between the terminal 100 and a wireless communication system, between the terminal 100 and another terminal 100, between the terminal 100 and an external device, or between the terminal 100 and an external server. In addition, the wireless communication unit 110 may include one or more modules for connecting the terminal 100 to one or more networks.

The wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115.

The input unit 120 may include a camera 121 or an image input unit for inputting a video signal, a microphone 122 or an audio input unit for inputting an audio signal, and a user input unit 123 (e.g., a touch key, a mechanical key, and the like) for receiving information from a user. The voice data or image data collected by the input unit 120 may be analyzed and processed according to a user's control command.

The sensing unit 140 may include one or more sensors for sensing at least one of information in the terminal, surrounding environment information of the terminal, and user information. For example, the sensing unit 140 may include at least one of a proximity sensor 141, an illumination sensor 142, a touch sensor, an acceleration sensor, a magnetic sensor, a G-sensor, a gyroscope sensor, a motion sensor, an RGB sensor, an infrared sensor, a fingerprint scan sensor, an ultrasonic sensor, an optical sensor (e.g., the camera 121), a microphone (see the microphone 122), a battery gauge, an environmental sensor (e.g., a barometer, a hygrometer, a thermometer, a radiation detection sensor, a thermal sensor, a gas sensor, etc.), and a chemical sensor (e.g., an electronic nose, a healthcare sensor, a biometric sensor, etc.). Meanwhile, the terminal disclosed in this specification may combine and utilize information sensed by at least two of these sensors.

The output unit 150 is for generating output related to visual, auditory, or tactile senses and may include at least one of a display unit 151, a sound output unit 152, a haptic module 153, and a light output unit 154. The display unit 151 may form a mutual layer structure with a touch sensor or may be formed integrally therewith to realize a touch screen. Such a touch screen may function as the user input unit 123 that provides an input interface between the terminal 100 and the user, and may also provide an output interface between the terminal 100 and the user.

The interface unit 160 serves as a path to various types of external devices connected to the terminal 100. The interface unit 160 may include at least one of a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device equipped with an identification module, an audio I/O port, a video I/O port, and an earphone port. In the terminal 100, appropriate control related to the connected external device may be performed in response to the connection of the external device to the interface unit 160.

In addition, the memory 170 stores data supporting various functions of the terminal 100. The memory 170 may store a plurality of application programs (or applications) driven on the terminal 100, as well as data and commands for the operation of the terminal 100. At least some of these applications may be downloaded from an external server via wireless communication. Also, at least some of these application programs may exist on the terminal 100 from the time of shipment for the basic functions of the terminal 100 (e.g., incoming and outgoing calls, receiving and sending messages). Meanwhile, the application programs may be stored in the memory 170, installed on the terminal 100, and driven by the control unit 180 to perform the operation (or function) of the terminal.

The control unit 180 typically controls the overall operation of the terminal 100, in addition to operations associated with the application programs. The control unit 180 may process signals, data, information, and the like that are input or output through the above-mentioned components, or may drive an application program stored in the memory 170, thereby providing or processing appropriate information or functions for the user.

In addition, the controller 180 may control at least some of the components illustrated in FIG. 1 in order to drive an application program stored in the memory 170. In addition, the controller 180 can operate at least two of the components included in the terminal 100 in combination with each other for driving the application program.

The control unit 180 may include a graphics processing apparatus (or graphics processing unit (GPU)) 200. In FIG. 1, the control unit 180 is shown as including the graphics processing apparatus 200; however, according to an embodiment of the present invention, the graphics processing apparatus 200 may be implemented separately from the control unit 180 and connected to the control unit 180.

The graphic processing apparatus 200 may perform a rendering operation for outputting the graphic data through the display unit 151 as an image. The rendering operation may be an operation of generating an image by performing a process such as modeling, texture mapping, illumination, and shading based on the graphic data. Since the rendering corresponds to a known technique, a detailed description will be omitted.

The power supply unit 190 receives external power and internal power under the control of the controller 180 and supplies power to the components included in the terminal 100. The power supply unit 190 includes a battery, which may be an internal battery or a replaceable battery.

At least some of the components may operate in cooperation with each other to implement the operation, control, or control method of a terminal according to the various embodiments described below. In addition, the operation, control, or control method of the terminal may be implemented on the terminal by driving at least one application program stored in the memory 170.

Hereinafter, the components listed above will be described in more detail with reference to FIG. 1, before explaining various embodiments implemented through the terminal 100 as described above.

First, referring to the wireless communication unit 110, the broadcast receiving module 111 of the wireless communication unit 110 receives broadcast signals and / or broadcast-related information from an external broadcast management server through a broadcast channel. The broadcast channel may include a satellite channel and a terrestrial channel. More than one broadcast receiving module may be provided to the terminal 100 for simultaneous broadcast reception or broadcast channel switching for at least two broadcast channels.

The mobile communication module 112 transmits and receives wireless signals to and from at least one of a base station, an external terminal, and a server on a mobile communication network established according to technical standards or communication methods for mobile communication (e.g., Global System for Mobile communication (GSM), Code Division Multi Access (CDMA), Code Division Multi Access 2000 (CDMA2000), Enhanced Voice-Data Optimized or Enhanced Voice-Data Only (EV-DO), Wideband CDMA (WCDMA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), and the like).

The wireless signals may include a voice call signal, a video call signal, or various types of data according to the transmission and reception of text/multimedia messages.

The wireless Internet module 113 is a module for wireless Internet access, and may be built in or externally attached to the mobile terminal 100. The wireless Internet module 113 is configured to transmit and receive a wireless signal in a communication network according to wireless Internet technologies.

Wireless Internet technologies include, for example, Wireless LAN (WLAN), Wireless Fidelity (Wi-Fi), Wi-Fi Direct, Digital Living Network Alliance (DLNA), Wireless Broadband (WiBro), World Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), and Long Term Evolution-Advanced (LTE-A); the wireless Internet module 113 transmits and receives data according to at least one wireless Internet technology within a range that also includes Internet technologies not listed above.

From the viewpoint that wireless Internet access by WiBro, HSDPA, HSUPA, GSM, CDMA, WCDMA, LTE, LTE-A, and the like is performed through a mobile communication network, the wireless Internet module 113 that performs wireless Internet access through the mobile communication network may be understood as a kind of mobile communication module 112.

The short-range communication module 114 is for short-range communication and may support short-range communication using at least one of Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), Near Field Communication (NFC), Wireless-Fidelity (Wi-Fi), Wi-Fi Direct, and Wireless Universal Serial Bus (Wireless USB) technologies. The short-range communication module 114 may support, through short-range wireless area networks, wireless communication between the terminal 100 and a wireless communication system, between the terminal 100 and an external device, or between the terminal 100 and a network in which another mobile terminal (or an external server) is located. The short-range wireless area network may be a short-range wireless personal area network.

The short-range communication module 114 may detect (or recognize) another mobile terminal or an external device capable of communicating with the terminal 100 in its vicinity. Further, if the detected mobile terminal or external device is a device authorized to communicate with the terminal 100 according to the present invention, the control unit 180 may transmit at least part of the data processed in the terminal 100 to the other mobile terminal or the external device through the short-range communication module 114. Accordingly, a user of the other mobile terminal can use the data processed in the terminal 100 through that terminal. For example, when a call is received by the terminal 100, the user can answer the call through the other mobile terminal, or when a message is received by the terminal 100, the user can check the message through the other mobile terminal; the opposite operations are also possible.

The location information module 115 is a module for obtaining the position (or current position) of the terminal 100; representative examples thereof include a Global Positioning System (GPS) module and a Wireless Fidelity (WiFi) module. For example, when the terminal 100 utilizes the GPS module, it can acquire the position of the terminal 100 using signals transmitted from GPS satellites. As another example, when the terminal 100 utilizes the Wi-Fi module, it can acquire the position of the terminal 100 based on information about a wireless access point (AP) that transmits or receives wireless signals to or from the Wi-Fi module. If necessary, the location information module 115 may perform a function of another module of the wireless communication unit 110, substitutionally or additionally, in order to obtain data regarding the location of the terminal 100. The location information module 115 is a module used to obtain the position (or current position) of the terminal 100 and is not limited to a module that directly calculates or acquires the position of the terminal 100.

The input unit 120 is for inputting image information (or signals), audio information (or signals), data, or information input from a user. For the input of image information, the terminal 100 may include one or a plurality of cameras 121. The camera 121 processes image frames, such as still images or moving images, obtained by an image sensor in a video call mode or a photographing mode. The processed image frames may be displayed on the display unit 151 or stored in the memory 170. The plurality of cameras 121 provided in the terminal 100 may be arranged in a matrix structure, and through the cameras 121 arranged in a matrix structure, a plurality of pieces of image information having various angles or focal points may be input to the terminal 100. In addition, the plurality of cameras 121 may be arranged in a stereo structure to acquire left and right images for realizing a stereoscopic image.

The microphone 122 processes the external acoustic signal into electrical voice data. The processed voice data can be utilized variously according to a function (or an application program being executed) being performed in the terminal 100. Meanwhile, the microphone 122 may be implemented with various noise reduction algorithms for eliminating noise generated in receiving an external sound signal.

The user input unit 123 is for receiving information from a user. When information is input through the user input unit 123, the control unit 180 may control the operation of the terminal 100 to correspond to the input information. The user input unit 123 may include a mechanical input means (for example, a button located on the front, rear, or side of the terminal 100, a dome switch, a jog wheel, a jog switch, etc.) and a touch-type input means. As an example, the touch-type input means may include a virtual key, a soft key, or a visual key displayed on the touch screen through software processing, or a touch key disposed on a portion other than the touch screen. The virtual key or visual key may be displayed on the touch screen in various forms and may be composed of, for example, graphics, text, icons, video, or a combination thereof.

The sensing unit 140 senses at least one of information in the terminal 100, surrounding environment information surrounding the terminal 100, and user information, and generates a corresponding sensing signal. The control unit 180 may control the driving or operation of the terminal 100 or may perform data processing, function or operation related to the application program installed in the terminal 100 based on the sensing signal. Representative sensors among various sensors that may be included in the sensing unit 140 will be described in more detail.

First, the proximity sensor 141 refers to a sensor that detects, without mechanical contact, the presence of an object approaching a predetermined detection surface or an object in the vicinity of the detection surface by using electromagnetic force, infrared rays, or the like. The proximity sensor 141 may be disposed in an inner region of the terminal 100 surrounded by the touch screen, or near the touch screen.

Examples of the proximity sensor 141 include a transmissive photoelectric sensor, a direct reflective photoelectric sensor, a mirror reflective photoelectric sensor, a high-frequency oscillation proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor. When the touch screen is capacitive, the proximity sensor 141 may be configured to detect the proximity of a conductive object based on the change in the electric field caused by the approach of that object. In this case, the touch screen (or touch sensor) itself may be classified as a proximity sensor.

On the other hand, for convenience of explanation, the act of bringing an object close to the touch screen without contact so that the object is recognized as being located on the touch screen is referred to as a "proximity touch," and the act of actually bringing an object into contact with the touch screen is referred to as a "contact touch." The position at which an object is proximity-touched on the touch screen means the position at which the object corresponds vertically to the touch screen when the object is proximity-touched. The proximity sensor 141 can sense a proximity touch and a proximity touch pattern (e.g., proximity touch distance, proximity touch direction, proximity touch speed, proximity touch time, proximity touch position, proximity touch movement state, etc.). Meanwhile, the control unit 180 may process data (or information) corresponding to the proximity touch operation and the proximity touch pattern sensed through the proximity sensor 141, and may further output visual information corresponding to the processed data on the touch screen. Furthermore, the control unit 180 may control the terminal 100 so that different operations or data (or information) are processed depending on whether a touch on the same point of the touch screen is a proximity touch or a contact touch.

The touch sensor senses a touch (or touch input) applied to the touch screen (or the display unit 151) using at least one of various touch methods, such as a resistive type, a capacitive type, an infrared type, an ultrasonic type, and a magnetic field type.

For example, the touch sensor may be configured to convert a change in pressure applied to a specific portion of the touch screen, or a change in capacitance generated at a specific portion, into an electrical input signal. The touch sensor may be configured to detect the position and area where a touch object touches the touch sensor, the pressure at the time of touch, the capacitance at the time of touch, and the like. Here, the touch object is an object that applies a touch to the touch sensor and may be, for example, a finger, a touch pen, a stylus pen, or a pointer.

Thus, when there is a touch input to the touch sensor, the corresponding signal (s) is sent to the touch controller. The touch controller processes the signal (s) and transmits the corresponding data to the controller 180. Thus, the control unit 180 can know which area of the display unit 151 is touched or the like. Here, the touch controller may be a separate component from the control unit 180, and may be the control unit 180 itself.

On the other hand, the control unit 180 may perform different controls or perform the same control according to the type of the touch object touching the touch screen (or a touch key provided on the touch screen). Whether to perform different controls or to perform the same control depending on the type of the touch object can be determined according to the current state of the terminal 100 or an application program being executed.

On the other hand, the touch sensor and the proximity sensor described above may be used independently or in combination to sense various types of touches on the touch screen, such as a short (or tap) touch, a long touch, a multi touch, a drag touch, a flick touch, a pinch-in touch, a pinch-out touch, a swipe touch, and a hovering touch.

The ultrasonic sensor can recognize position information of an object to be sensed by using ultrasonic waves. Meanwhile, the control unit 180 can calculate the position of a wave source based on information sensed by an optical sensor and a plurality of ultrasonic sensors. The position of the wave source can be calculated using the fact that light is much faster than ultrasonic waves, that is, the time for light to reach the optical sensor is much shorter than the time for an ultrasonic wave to reach the ultrasonic sensor. More specifically, the position of the wave source can be calculated using the difference between the arrival time of the ultrasonic wave and the arrival time of the light, which serves as a reference signal.
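As a rough, hedged illustration of this time-difference idea (not taken from the patent itself): if light arrival is treated as effectively instantaneous, the distance to the wave source is approximately the speed of sound multiplied by the delay between the two arrival times. The function name and the fixed speed of sound are illustrative assumptions, and sensor geometry is omitted.

```cpp
// Minimal sketch of the light/ultrasound time-difference calculation described above.
// Light arrival acts as the (near-instantaneous) reference signal; the constant and the
// function name are assumptions for illustration only.
double distanceToWaveSource(double lightArrivalTime, double ultrasoundArrivalTime)
{
    const double speedOfSound = 343.0;  // m/s in air at roughly 20 degrees Celsius
    return speedOfSound * (ultrasoundArrivalTime - lightArrivalTime);
}
```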

The camera 121 includes at least one of a camera sensor (for example, a CCD, a CMOS, etc.), a photo sensor (or an image sensor), and a laser sensor.

The camera 121 and the laser sensor may be combined with each other to sense a touch of the sensing object with respect to the three-dimensional stereoscopic image. The photosensor can be laminated to the display element, which is adapted to scan the movement of the object to be detected proximate to the touch screen. More specifically, the photosensor mounts photo diodes and TRs (Transistors) in a row / column and scans the contents loaded on the photosensor using an electrical signal that varies according to the amount of light applied to the photo diode. That is, the photo sensor performs coordinate calculation of the object to be sensed according to the amount of change of light, and position information of the object to be sensed can be obtained through the calculation.

The display unit 151 displays (outputs) information to be processed by the terminal 100. For example, the display unit 151 may display execution screen information of an application program driven by the terminal 100, or UI (User Interface) and GUI (Graphic User Interface) information according to the execution screen information.

The sound output unit 152 may output audio data received from the wireless communication unit 110 or stored in the memory 170 in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, and the like. The sound output unit 152 also outputs sound signals related to functions performed in the terminal 100 (e.g., a call signal reception sound, a message reception sound, and the like). The sound output unit 152 may include a receiver, a speaker, a buzzer, and the like.

The haptic module 153 generates various tactile effects that the user can feel. A representative example of the tactile effect generated by the haptic module 153 is vibration, and the haptic module 153 may include at least one vibration motor for generating the vibration. The intensity, pattern, and position of the vibration generated by the haptic module 153 can be controlled by the user's selection or by settings of the control unit. For example, the haptic module 153 may synthesize and output different vibrations or output them sequentially.

In addition to vibration, the haptic module 153 may generate various other tactile effects, such as an effect of a pin arrangement moving vertically against the contacted skin surface, a spraying or suction force of air through an injection or suction port, a graze against the skin surface, contact of an electrode, an electrostatic force, and an effect of reproducing a sensation of cold or warmth using an element capable of absorbing or generating heat.

The haptic module 153 may deliver a tactile effect through direct contact, and may also be implemented so that the user can feel the tactile effect through the muscular sense of a finger or an arm. Two or more haptic modules 153 may be provided according to the configuration of the terminal 100.

The light output unit 154 outputs a signal for notifying the occurrence of an event using the light of the light source of the terminal 100. Examples of events that occur in the terminal 100 may include message reception, call signal reception, missed call, alarm, schedule notification, email reception, information reception through an application, and the like.

The signal output by the light output unit 154 is implemented as the terminal 100 emits light of a single color or a plurality of colors to the front or rear surface. The signal output may be terminated by the terminal 100 detecting the event confirmation of the user.

The interface unit 160 serves as a path to all external devices connected to the terminal 100. The interface unit 160 receives data from an external device, receives power and delivers it to each component in the terminal 100, or allows data in the terminal 100 to be transmitted to an external device. For example, the interface unit 160 may include a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device equipped with an identification module, an audio I/O port, a video I/O port, an earphone port, and the like.

The identification module is a chip storing various information for authenticating the usage right of the terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. A device equipped with an identification module (hereinafter referred to as an "identification device") may be manufactured in the form of a smart card. Accordingly, the identification device may be connected to the terminal 100 through the interface unit 160.

When the terminal 100 is connected to an external cradle, the interface unit 160 may serve as a path through which power from the cradle is supplied to the terminal 100, or as a path through which various command signals input by the user from the cradle are transmitted to the terminal 100. The various command signals or the power input from the cradle may serve as signals for recognizing that the terminal 100 is correctly mounted on the cradle.

The memory 170 may store a program for the operation of the controller 180 and temporarily store input / output data (e.g., a phone book, a message, a still image, a moving picture, etc.). The memory 170 may store data related to vibrations and sounds of various patterns that are output upon touch input on the touch screen.

The memory 170 may include at least one type of storage medium among a flash memory type, a hard disk type, a solid state disk (SSD) type, a silicon disk drive (SDD) type, a multimedia card micro type, a card-type memory (e.g., SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk. The terminal 100 may also operate in association with a web storage that performs the storage function of the memory 170 on the Internet.

Meanwhile, as described above, the control unit 180 controls operations related to the application programs and typically controls the overall operation of the terminal 100. For example, when the state of the terminal 100 satisfies a set condition, the control unit 180 may execute or release a lock state that restricts input of a user's control commands to applications.

In addition, the control unit 180 performs control and processing related to voice calls, data communication, video calls, and the like, or performs pattern recognition processing capable of recognizing handwriting input or drawing input performed on the touch screen as characters and images, respectively. Further, the control unit 180 may control any one or a combination of the above-described components in order to implement the various embodiments described below on the terminal 100 according to the present invention.

According to a hardware implementation, the control unit 180 may be implemented as an integrated circuit (IC), a CPU, a microprocessor, an application processor, or a mobile application processor.

The power supply unit 190 receives external power and internal power under the control of the controller 180 and supplies power necessary for operation of the respective components. The power supply unit 190 includes a battery, the battery may be an internal battery configured to be chargeable, and may be detachably coupled to the terminal body for charging or the like.

In addition, the power supply unit 190 may include a connection port, and the connection port may be configured as an example of an interface 160 through which an external charger for supplying power for charging the battery is electrically connected.

As another example, the power supply unit 190 may be configured to charge the battery wirelessly without using the connection port. In this case, the power supply unit 190 may receive power from an external wireless power transmission apparatus using at least one of an inductive coupling method based on magnetic induction and a magnetic resonance coupling method based on electromagnetic resonance.

Next, a communication system that can be implemented through the terminal 100 according to the present invention will be described.

First, the communication system may use different wireless interfaces and/or physical layers. For example, wireless interfaces usable by the communication system may include Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE) and Long Term Evolution-Advanced (LTE-A)), and Global System for Mobile Communications (GSM).

Hereinafter, for the sake of convenience of description, the description will be limited to CDMA. However, it is apparent that the present invention can be applied to all communication systems including an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system as well as a CDMA wireless communication system.

A CDMA wireless communication system may include at least one terminal, at least one base station (BS, also referred to as a Node B or an Evolved Node B), at least one base station controller (BSC), and a mobile switching center (MSC). The MSC is configured to be connected to a Public Switched Telephone Network (PSTN) and to the BSCs. The BSCs may be paired with BSs via backhaul lines. The backhaul lines may be provided according to at least one of E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, and xDSL. Thus, a plurality of BSCs may be included in the CDMA wireless communication system.

Each of the plurality of BSs may comprise at least one sector, and each sector may comprise an omnidirectional antenna or an antenna pointing to a particular direction of radial emission from the BS. In addition, each sector may include two or more antennas of various types. Each BS may be configured to support a plurality of frequency assignments, and a plurality of frequency assignments may each have a specific spectrum (e.g., 1.25 MHz, 5 MHz, etc.).

The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. A BS may also be referred to as a base station transceiver subsystem (BTS). In this case, the combination of one BSC and at least one BS may be referred to as a "base station." The base station may also indicate a "cell site." Alternatively, each of the plurality of sectors for a particular BS may be referred to as a plurality of cell sites.

A Broadcasting Transmitter (BT) transmits a broadcast signal to terminals operating in the system. The broadcast receiving module 111 shown in FIG. 1 is provided in the terminal 100 to receive a broadcast signal transmitted by the BT.

In addition, a CDMA wireless communication system may be associated with a Global Positioning System (GPS) for identifying the location of the terminal 100. The satellite aids in locating the terminal 100. Useful location information may be obtained by two or more satellites. Here, the position of the terminal 100 can be tracked using all the techniques capable of tracking the location as well as the GPS tracking technology. Also, at least one of the GPS satellites may optionally or additionally be responsible for satellite DMB transmissions.

The location information module 115 included in the terminal 100 is for detecting, computing or identifying the location of the terminal 100. The location information module 115 includes a global positioning system (GPS) module and a wireless fidelity (WiFi) module . Optionally, the location information module 115 may replace or additionally perform any of the other modules of the wireless communication unit 110 to obtain data regarding the location of the terminal 100.

The GPS module 115 calculates distance information and accurate time information from three or more satellites, and then applies triangulation to the calculated information to accurately compute three-dimensional current position information in terms of latitude, longitude, and altitude. At present, a method of calculating position and time information using three satellites and correcting errors in the calculated position and time information using another satellite is widely used. In addition, the GPS module 115 can calculate speed information by continuously calculating the current position in real time. However, it is difficult to accurately measure the position of the terminal using the GPS module in shadow areas of satellite signals, such as indoors. Accordingly, a WiFi Positioning System (WPS) can be utilized to compensate for GPS-based positioning.

The WiFi Positioning System (WPS) is a technology for tracking the location of the terminal 100 using a WiFi module included in the terminal 100 and a wireless access point (wireless AP) that transmits or receives wireless signals to or from the WiFi module, and refers to a location positioning technology based on a wireless local area network (WLAN) using WiFi.

The WiFi location tracking system may include a Wi-Fi location server, a mobile terminal 100, a wireless AP connected to the mobile terminal 100, and a database in which certain wireless AP information is stored.

The terminal 100 connected to the wireless AP can transmit a location information request message to the Wi-Fi location server.

The Wi-Fi location server extracts information of the wireless AP connected to the terminal 100 based on the location information request message (or signal) of the terminal 100. The information of the wireless AP connected to the terminal 100 may be transmitted to the Wi-Fi location server through the terminal 100, or may be transmitted from the wireless AP to the Wi-Fi location server.

The information of the wireless AP extracted based on the location information request message of the terminal 100 may include at least one of a MAC address, a Service Set Identification (SSID), a Received Signal Strength Indicator (RSSI), a Reference Signal Received Power (RSRP), a Reference Signal Received Quality (RSRQ), channel information, privacy, network type, signal strength, and noise strength.

As described above, the Wi-Fi location server may receive the information of the wireless AP connected to the terminal 100 and extract, from a pre-established database, wireless AP information corresponding to the wireless AP to which the terminal is connected. The information of any wireless AP stored in the database may include information such as the MAC address, SSID, channel information, privacy, network type, coordinates of the wireless AP, the name of the building where the wireless AP is located, detailed indoor location information (where available), the address of the AP owner, and a telephone number. At this time, in order to exclude wireless APs provided as mobile APs or using illegal MAC addresses during the positioning process, the Wi-Fi location server may extract only a predetermined number of pieces of wireless AP information in descending order of RSSI.

Thereafter, the Wi-Fi location server may extract (or analyze) the location information of the terminal 100 by comparing at least one piece of wireless AP information extracted from the database with the received wireless AP information.

As a method for extracting (or analyzing) the position information of the terminal 100, a Cell-ID method, a fingerprint method, a triangulation method, and a landmark method can be utilized.

The Cell-ID method determines the location of the wireless AP having the strongest signal strength, among the neighboring wireless AP information collected by the terminal, as the location of the terminal. The implementation is simple, it requires no additional cost, and location information can be acquired quickly; however, it has the disadvantage that positioning accuracy is low when the installation density of wireless APs is low.

The fingerprint method collects signal strength information by selecting a reference position in a service area, and estimates the position based on the signal strength information transmitted from the mobile terminal based on the collected information. In order to use the fingerprint method, it is necessary to previously convert the propagation characteristics into a database.

The triangulation method calculates the position of the terminal based on the coordinates of at least three wireless APs and the distances between the terminal and those APs. To measure the distance between the terminal and a wireless AP, signal strength converted into distance information, the time of arrival (ToA) of a wireless signal, the time difference of arrival (TDoA), the angle of arrival (AoA), or the like can be used.
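As a hedged illustration of this step (a common textbook formulation, not the patent's own implementation), the sketch below solves a 2-D trilateration problem from three AP coordinates and the corresponding distance estimates by subtracting the circle equations and applying Cramer's rule. All names are hypothetical, and degenerate (collinear) AP layouts are not handled.

```cpp
#include <array>

struct Point { double x, y; };

// 2-D trilateration: solve the two linear equations obtained by subtracting the
// circle equation of AP 0 from those of APs 1 and 2, then apply Cramer's rule.
Point trilaterate(const std::array<Point, 3>& ap, const std::array<double, 3>& dist)
{
    double a1 = 2.0 * (ap[1].x - ap[0].x), b1 = 2.0 * (ap[1].y - ap[0].y);
    double c1 = dist[0] * dist[0] - dist[1] * dist[1]
              - ap[0].x * ap[0].x + ap[1].x * ap[1].x
              - ap[0].y * ap[0].y + ap[1].y * ap[1].y;
    double a2 = 2.0 * (ap[2].x - ap[0].x), b2 = 2.0 * (ap[2].y - ap[0].y);
    double c2 = dist[0] * dist[0] - dist[2] * dist[2]
              - ap[0].x * ap[0].x + ap[2].x * ap[2].x
              - ap[0].y * ap[0].y + ap[2].y * ap[2].y;
    double det = a1 * b2 - a2 * b1;  // zero if the three APs are collinear (not handled)
    return { (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det };
}
```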

The landmark method measures the position of a terminal using a landmark transmitter whose location is known.

Various algorithms can be utilized as a method for extracting (or analyzing) the location information of the terminal.

The extracted location information of the terminal 100 is transmitted to the terminal 100 through the Wi-Fi location server, so that the terminal 100 can acquire the location information.

The terminal 100 may be connected to at least one wireless AP to obtain location information. At this time, the number of wireless APs required to acquire the location information of the terminal 100 can be variously changed according to the wireless communication environment in which the terminal 100 is located.

In the following, various embodiments may be embodied in a recording medium readable by a computer or similar device using, for example, software, hardware, or a combination thereof.

FIGS. 2A and 2B are schematic block diagrams of a graphics processing apparatus according to an embodiment of the present invention.

Referring to FIG. 2A, the graphics processing apparatus 200A may include a rendering module 210 and a rendering LoD (level of detail) setting module 220A.

The rendering module 210 may perform a rendering operation on the graphic data GD of a specific frame to generate an image IM to be output through the display unit 151. Rendering may mean generating an image IM by processing graphic data GD that includes information such as the arrangement of figures, a viewpoint, texture mapping, illumination, and shading. Since rendering is a well-known technique, a detailed description thereof will be omitted.

The quality of the image IM output through the display unit 151 may vary according to the rendering LoD. For example, when the graphic data GD is rendered with a high rendering LoD, the quality of the image IM may also be high; on the other hand, the power consumption of the graphics processing apparatus 200 may increase and its processing speed may decrease.

Conventionally, when an image IM is generated by the rendering module 210, the same rendering LoD may be applied to all areas of the image IM. For example, the rendering module 210 may generate the image IM according to the same rendering LoD regardless of the clear and blurred regions of the image IM. It is not necessary to apply a high rendering LoD to a blurred region that is out of focus in the image IM, or to a region that attracts little cognitive interest or attention because it is out of focus; however, since the same rendering LoD is applied to the entire image as described above, this may be inefficient in terms of power consumption and processing speed.

The rendering LoD setting module 220A included in the graphics processing apparatus 200A according to the embodiment of the present invention may set the rendering LoD for the next image frame of a specific image frame based on various information included in the graphic data GD of the specific image frame. In particular, the rendering LoD setting module 220A may set a different rendering LoD for each region of the image frame.

The rendering LoD setting module 220A may include a focal length extraction module 230, a depth map extraction module 240, a rendering LoD generation module 250, and a scene change detection module 270A.

The focal length extraction module 230 may extract a focal length while the graphic data GD is being rendered by the rendering module 210. For example, in an image being rendered, an area at a position corresponding to the focal length is displayed clearly, while an area at a position farther than a predetermined distance from the focal length may be displayed blurred. Therefore, the focal length extraction module 230 extracts the focal length, and a clear area and a blurred area of the image being rendered can be distinguished based on the extracted focal length.

According to the embodiment, even for an image in which no blurred region exists, the region corresponding to the focal length attracts higher interest or attention than regions that do not correspond to the focal length, so there may still be a need to extract the focal length in order to set a different rendering LoD for each region.

For example, under the OpenGL ES (OpenGL for Embedded Systems) standard, the focal length extraction module 230 may extract the focal length using the MV (model-view) value of the MVP (model-view-projection) matrix or a related application programming interface (e.g., a glLookAt-style API).
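A minimal sketch of one way such a focal length could be obtained, under the assumption that the application builds its view (MV) matrix through a lookAt-style helper whose parameters can be intercepted (for example, a wrapper around gluLookAt or glm::lookAt); the hook name and the global variable are illustrative, and this is not an OpenGL ES core mechanism.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Focal length of the frame currently being rendered, recorded by the hook below.
static float g_focalLength = 0.0f;

// Hypothetical hook wrapped around the application's lookAt-style call
// (e.g. gluLookAt or glm::lookAt); not part of the OpenGL ES core API.
void onLookAt(const Vec3& eye, const Vec3& center)
{
    const float dx = center.x - eye.x;
    const float dy = center.y - eye.y;
    const float dz = center.z - eye.z;
    // Distance from the camera position to the point it looks at,
    // used as the focal length for the rendering-LoD decision.
    g_focalLength = std::sqrt(dx * dx + dy * dy + dz * dz);
}
```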

The depth map extraction module 240 may extract a depth map during the rendering of the graphic data GD1 or GD2 by the rendering module 210. The depth map may include a depth value for each pixel of the image IM.

The depth map extraction module 240 may extract the depth map from the image being rendered using various known methods. For example, in an OpenGL ES environment, the depth map extraction module 240 may extract the depth map through an API related to depth (e.g., glDepthFunc), or may extract the depth map from the depth buffer that stores a depth value for each pixel.
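A hedged sketch of the depth read-back, written in desktop OpenGL style; on OpenGL ES the depth buffer generally cannot be read with glReadPixels directly, so a real implementation would typically render depth into a texture or framebuffer object and read it from there. The helper name is an assumption.

```cpp
#include <GL/gl.h>
#include <cstddef>
#include <vector>

// Reads back the per-pixel depth buffer of the frame just rendered (desktop GL style).
// On OpenGL ES, depth would usually be rendered into a texture/FBO instead.
std::vector<float> readDepthMap(int width, int height)
{
    std::vector<float> depth(static_cast<std::size_t>(width) * height);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
    return depth;  // normalized depth values in [0, 1], one per pixel
}
```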

According to an embodiment, the rendering LoD setting module 220A may further include a representative depth value extraction module 242.

The representative depth value extracting module 242 may divide the depth map into a plurality of tiles and extract a representative depth value for each of the plurality of divided tiles. Each of the plurality of tiles may include depth values of a plurality of pixels.

For example, the representative depth value extraction module 242 may extract the representative depth value of each of the plurality of tiles by calculating the average, median, or mode of the depth values included in each tile.
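The tile reduction itself can be sketched as follows, assuming the depth map is already available as a width x height array of floats; the average is used here as the representative value, and a median or mode computed over the same tile could be substituted at the marked line.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Split a per-pixel depth map into square tiles and reduce each tile
// to one representative depth value.
std::vector<float> tileRepresentativeDepths(const std::vector<float>& depth,
                                            int width, int height, int tileSize)
{
    int tilesX = (width + tileSize - 1) / tileSize;
    int tilesY = (height + tileSize - 1) / tileSize;
    std::vector<float> rep(static_cast<std::size_t>(tilesX) * tilesY, 0.0f);

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            double sum = 0.0;
            int count = 0;
            for (int y = ty * tileSize; y < std::min((ty + 1) * tileSize, height); ++y) {
                for (int x = tx * tileSize; x < std::min((tx + 1) * tileSize, width); ++x) {
                    sum += depth[static_cast<std::size_t>(y) * width + x];
                    ++count;
                }
            }
            // Average depth as the representative value; a median or mode
            // over the same tile could be used here instead.
            rep[static_cast<std::size_t>(ty) * tilesX + tx] =
                count > 0 ? static_cast<float>(sum / count) : 0.0f;
        }
    }
    return rep;
}
```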

The rendering LoD generation module 250 may generate a rendering LoD for each of the pixels of the image IM based on the focal length extracted by the focal length extraction module 230 and the depth map extracted by the depth map extraction module 240.

According to an embodiment, the rendering LoD generation module 250 may generate the rendering LoD by further using information on the blur area detected by the blur area detection module 260, or may generate the rendering LoD using the representative depth values of the tiles extracted by the representative depth value extraction module 242.

According to an embodiment, the rendering LoD setting module 220A may further include a blur area detection module 260. The blur area detection module 260 may directly detect a blur area from the rendered image IM to improve accuracy when the rendering LoD generation module 250 generates a rendering LoD for each area.

For example, the blur area detection module 260 may detect contours or edges of objects in the image IM using a contour or edge detection algorithm, and detect the blur area by dividing the image into sharp areas and blurred areas based on the detection result.

The blur area detection module 260 may also convert the image IM to a frequency domain and detect a blur area consisting only of low frequency components in the image IM that has been converted to the frequency domain. In addition, the blur area detection module 260 may use a power spectrum based detection method that detects a blur area using an energy distribution according to a color change between pixels of the image IM. In addition to the above examples, various known methods of detecting blur areas in an image may be used.
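As one concrete (and hedged) example of the edge-response approach listed above, the sketch below applies a 3x3 Laplacian to a single-channel luminance image and marks pixels with a weak response as belonging to a blur area; the threshold and the single-channel input are illustrative assumptions rather than the patent's specified method.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Mark pixels whose local Laplacian (edge) response is weak as part of a blur area.
// Border pixels are left marked as blurred for simplicity.
std::vector<unsigned char> detectBlurMask(const std::vector<float>& luma,
                                          int width, int height, float threshold)
{
    std::vector<unsigned char> blurred(static_cast<std::size_t>(width) * height, 1);
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            std::size_t i = static_cast<std::size_t>(y) * width + x;
            // 4-neighbour Laplacian: a strong response indicates a sharp edge nearby.
            float lap = luma[i - width] + luma[i + width] + luma[i - 1] + luma[i + 1]
                        - 4.0f * luma[i];
            if (std::fabs(lap) >= threshold)
                blurred[i] = 0;  // sharp pixel, not part of a blur area
        }
    }
    return blurred;
}
```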

The scene change detection module 270A may detect a scene change between the current image frame and the next image frame to determine whether to apply the generated rendering LoD in the rendering operation of the next image frame.

For example, if the scene change detection module 270A determines that the scene of the graphic data GD1 corresponding to the current image frame and the scene of the graphic data GD2 corresponding to the next image frame are the same, or that the degree of scene change is lower than a reference level, the rendering LoD generated by the rendering LoD generation module 250 may be applied when rendering the next image frame. On the other hand, if the scenes are different, or the degree of change is greater than the reference level, the rendering LoD may not be applied when rendering the next image frame.

Operations of the modules 230 to 270A included in the rendering LoD setting module 220A will be described later in detail with reference to the drawings.

Referring to FIG. 2B, the graphic processing apparatus 200B shown in FIG. 2B may be substantially the same as the graphic processing apparatus 200A shown in FIG. 2A except for the scene change detection module 270B.

Before the rendering LoD is generated from the image frame IM1 currently being rendered, the scene change detection module 270B may determine whether the scene of the graphic data GD1 corresponding to the current image frame and the scene of the graphic data GD2 corresponding to the next image frame are the same.

If the scenes are the same or the degree of change is lower than the reference level, the scene change detection module 270B may cause the rendering LoD setting module 220B to generate and set a rendering LoD for the next image frame. On the other hand, if the scenes are different or the degree of change is greater than the reference level, the scene change detection module 270B may prevent the rendering LoD setting module 220B from performing the operation of generating and setting a rendering LoD for the next image frame.

FIG. 3 is a flowchart for explaining the operation of the graphic processing apparatus shown in FIG. 2A.

Referring to FIGS. 2A and 3, the graphics processing apparatus 200 may extract the focal distance of the first image frame (S100). Specifically, the graphics processing apparatus 200, or the rendering LoD setting module 220A or 220B (hereinafter collectively referred to as 220) included in the graphics processing apparatus 200, may extract the focal distance from the first image frame being rendered by the rendering module 210.

The graphics processing apparatus 200 may extract a depth map of the first image frame (S120). The graphics processing unit 200 or the rendering LoD setting module 220 may extract the depth map from the first image frame being rendered by the rendering module 210.

The graphics processing apparatus 200 may generate a rendering LoD of a second image frame, which is the next frame of the first image frame, based on the extracted focal length and depth map (S140). Specifically, the graphics processing unit 200 or the rendering LoD setting module 220 may generate a rendering LoD for each of the pixels of the second image frame based on the extracted focal length and depth map.

For example, the rendering LoD may be set high for a pixel whose depth value corresponds to a distance equal to the focal distance, or to a distance whose difference from the focal distance is less than a reference value. On the other hand, the rendering LoD may be set low for a pixel whose depth value corresponds to a distance whose difference from the focal distance is greater than the reference value. According to an embodiment, the reference value may comprise a plurality of reference values, so that the rendering LoD may be further subdivided.

Steps S120 to S140 will be described in detail with reference to FIGS. 4 to 8.

FIG. 4 is an illustration of a depth map extracted from an image frame being rendered.

Referring to FIGS. 4(a) and 4(b), the graphic processing apparatus 200, or the depth map extraction module 240 of the graphic processing apparatus 200, may extract a depth map 401 from an image frame 400 being rendered.

The graphics processing apparatus 200 may extract the depth map 401 through an API associated with the depth map, or may extract the depth map 401 from the depth buffer for the image frame 400. Alternatively, the depth map 401 may be extracted using various other known methods.

The depth map 401 shown in FIG. 4(b) is illustrated in image form for convenience of explanation; it is to be understood that the terminal 100 including the graphic processing apparatus 200 does not actually display the depth map 401 on the display unit 151.

The depth map 401 may include a depth value for each of the pixels of the image frame 400. For example, a pixel close to the viewpoint may have a low depth value and may be shown as a bright color in the image-form depth map, while a pixel far from the viewpoint may have a high depth value and may be shown as a dark color.
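For illustration only, the following sketch converts a [0, 1] depth map into an 8-bit grayscale image using the convention described above (near pixels bright, far pixels dark); it is a debugging aid, not part of the claimed apparatus.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Visualization aid only: map a [0, 1] depth map to 8-bit grayscale so that
// near pixels (low depth) appear bright and far pixels (high depth) appear dark.
std::vector<std::uint8_t> depthMapToGrayscale(const std::vector<float>& depth) {
    std::vector<std::uint8_t> gray(depth.size());
    for (std::size_t i = 0; i < depth.size(); ++i) {
        const float d = std::min(1.0f, std::max(0.0f, depth[i]));  // clamp to [0, 1]
        gray[i] = static_cast<std::uint8_t>((1.0f - d) * 255.0f);  // invert: near -> bright
    }
    return gray;
}
```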

FIGS. 5 and 6 are diagrams illustrating an operation in which the graphics processing apparatus according to an embodiment of the present invention sets a rendering LoD based on a focal distance and a depth map.

Referring to FIG. 5, the graphic processing apparatus 200, or the rendering LoD generation module 250 included in the graphic processing apparatus 200, may generate a rendering LoD (to be applied to the next image frame) based on the focal distance FD and the depth map extracted from the image frame being rendered.

For example, assume that the depth values of the pixels included in the depth map range from 0 to 1 (the depth value of the closest pixel being 0 and the depth value of the furthest pixel being 1), and that the focal distance FD corresponds to a depth value of 0.3. The graphic processing apparatus 200 may divide the range of depth values into a plurality of areas around the focal distance FD and set a rendering LoD for each of the divided areas.

If the rendering LoD has three levels, high (H), middle (M), and low (L), the graphics processing apparatus 200 may set the rendering LoD to high (H) for the area of depth values equal or adjacent to the depth value corresponding to the focal distance FD (for example, the range between 0.2 and 0.4). The graphic processing apparatus 200 may set the rendering LoD to middle (M) for predetermined areas adjacent to the area set to high (H) (for example, the area between 0.1 and 0.2 and the area between 0.4 and 0.6), and may set the rendering LoD to low (L) for the remaining areas (for example, the area between 0 and 0.1 and the area between 0.6 and 1).

According to the embodiment, the range of each area (or reference values for distinguishing each area) can be freely changed according to the setting of the graphic processing apparatus 200 and the position of the focal length FD.
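A minimal sketch of the three-level mapping is given below, using the example values from the text (focal depth 0.3; H for depths between 0.2 and 0.4, M for 0.1 to 0.2 and 0.4 to 0.6, L elsewhere); as noted above, the thresholds are configurable, so these constants are only the worked example.

```cpp
#include <cstddef>
#include <vector>

enum class RenderLod { High, Middle, Low };

// Worked example of the three-level mapping: focal depth 0.3, H for the band
// 0.2-0.4 around it, M for the adjacent bands 0.1-0.2 and 0.4-0.6, L elsewhere.
RenderLod classifyDepth(float depth) {
    if (depth >= 0.2f && depth <= 0.4f) return RenderLod::High;    // at/near focal depth
    if (depth >= 0.1f && depth <= 0.6f) return RenderLod::Middle;  // adjacent bands
    return RenderLod::Low;                                         // far from focal depth
}

// Per-pixel rendering-LoD information for the next frame.
std::vector<RenderLod> buildLodMap(const std::vector<float>& depthMap) {
    std::vector<RenderLod> lodMap(depthMap.size());
    for (std::size_t i = 0; i < depthMap.size(); ++i) {
        lodMap[i] = classifyDepth(depthMap[i]);
    }
    return lodMap;
}
```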

Referring to FIG. 6, the graphic processing apparatus 200 may generate rendering LoD information to be applied to each pixel of the next image frame, based on the set rendering LoD and the depth map. The graphics processing apparatus 200 may generate rendering LoD information for each of the pixels of the depth map 402 using the rendering LoD set according to depth value as in FIG. 5.

As shown in FIG. 6, the rendering LoD for the pixels included in the first area 403 may be applied as high (H), the rendering LoD for the pixels included in the second area 404 may be applied as middle (M), and the rendering LoD for the pixels included in the third area 405 may be applied as low (L).

That is, the rendering LoD of pixels whose depth value corresponds to the focal distance FD, or to a depth value adjacent to it, may be set high, and the rendering LoD of pixels whose depth value differs greatly from the depth value corresponding to the focal distance FD may be set low. As described above, objects far from the focal distance in an image frame commonly attract little interest or attention and are displayed blurred, so setting a low rendering LoD for the pixels representing such objects can reduce power consumption and prevent degradation of processing performance.

FIG. 7 is a flowchart illustrating another embodiment of the operation in which the graphics processing apparatus according to an embodiment of the present invention sets the rendering LoD based on the focal distance and the depth map. In particular, the embodiment shown in FIG. 7 may correspond to the case where the graphics processing apparatus 200 includes the representative depth value extraction module 242.

Referring to FIG. 7, the graphic processing apparatus 200 or the representative depth value extracting module 242 may divide the depth map extracted from the image frame being rendered into a plurality of tiles (S142). Each of the plurality of tiles may have the same size, have a rectangular shape, and may be arranged in a matrix form.

The graphic processing apparatus 200 may extract a representative depth value for each of the plurality of divided tiles (S144). As described above with reference to FIG. 2A, the graphics processing apparatus 200 may extract the representative depth value of each tile by calculating an average value, a middle value, or a mode value of the depth values of the plurality of pixels included in that tile.

The graphic processing apparatus 200 may set the rendering LoD of each of the plurality of tiles based on the extracted focal distance and the representative depth values (S146). Accordingly, the plurality of pixels included in a particular tile may all be set to the same rendering LoD.

Steps S142 to S146 will be described in detail with reference to FIG. 8.

FIG. 8 is an exemplary view showing the operation described with reference to FIG. 7.

Referring to FIG. 8(a), the graphics processing apparatus 200 may divide the depth map 410 into a plurality of tiles. Each tile may include the depth values of a plurality of pixels.

The graphics processing apparatus 200 may extract the representative depth value of each of the plurality of tiles. For example, the representative depth value of the first tile 411 is 0.3, the representative depth value of the second tile 412 is 0.55, and the representative depth value of the third tile 413 is 0.9.

The graphics processing apparatus 200 may generate rendering LoD information for each of the plurality of tiles based on the extracted focal distance FD and the representative depth values. Referring to FIGS. 5 and 8(b), the rendering LoD of the first tile 411 may be set to high (H), the rendering LoD of the second tile 412 may be set to middle (M), and the rendering LoD of the third tile 413 may be set to low (L).

When rendering LoD information is generated for each pixel according to the embodiment shown in FIG. 6, the accuracy of the generated rendering LoD information may be high, but its size may be excessively large. On the other hand, when rendering LoD information is generated for each tile according to the embodiment shown in FIG. 8, the accuracy may be lower than that of the per-pixel rendering LoD information of FIG. 6, but the smaller size may be more efficient. The accuracy and size of the rendering LoD information may also be adjusted by changing the number of tiles as needed.
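As a back-of-the-envelope illustration of this trade-off (the frame size, tile size, and one byte per LoD entry are hypothetical, not from the patent), per-pixel LoD information for a 1920x1080 frame occupies roughly 2 MB, while per-tile information with 16x16 tiles occupies roughly 8 KB:

```cpp
#include <cstdio>

// Hypothetical sizes (one byte per LoD entry) comparing per-pixel and per-tile
// rendering LoD information for a 1920x1080 frame with 16x16 tiles.
int main() {
    const int width = 1920, height = 1080, tile = 16;
    const int perPixelBytes = width * height;             // 2,073,600 bytes (~2 MB)
    const int tilesX = (width + tile - 1) / tile;         // 120
    const int tilesY = (height + tile - 1) / tile;        // 68
    const int perTileBytes = tilesX * tilesY;             // 8,160 bytes (~8 KB)
    std::printf("per-pixel: %d bytes, per-tile: %d bytes\n", perPixelBytes, perTileBytes);
    return 0;
}
```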

FIG. 3 will be described again.

The graphics processing apparatus 200 may compare the scenes of the first image frame and the second image frame for identity (S160). For example, the graphics processing apparatus 200 may compare features extracted from the graphic data of the first image frame with features extracted from the graphic data of the second image frame, or may determine whether the scenes are identical based on the correlation between them.
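The text mentions feature- and correlation-based comparison without fixing a metric; the sketch below assumes a simple normalized cross-correlation of per-pixel luminance as the correlation measure and compares it against a reference level, both of which are assumptions for illustration.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch: treat two frames as belonging to the same scene when the normalized
// cross-correlation of their luminance values is at or above a reference level
// (e.g. 0.9). Both the metric and the threshold are illustrative assumptions.
bool sameScene(const std::vector<float>& frameA,
               const std::vector<float>& frameB,
               float referenceLevel) {
    const std::size_t n = std::min(frameA.size(), frameB.size());
    if (n == 0) return true;

    double meanA = 0.0, meanB = 0.0;
    for (std::size_t i = 0; i < n; ++i) { meanA += frameA[i]; meanB += frameB[i]; }
    meanA /= n; meanB /= n;

    double num = 0.0, varA = 0.0, varB = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double a = frameA[i] - meanA;
        const double b = frameB[i] - meanB;
        num += a * b; varA += a * a; varB += b * b;
    }
    if (varA == 0.0 || varB == 0.0) return true;  // flat frames: treat as unchanged
    const double correlation = num / std::sqrt(varA * varB);
    return correlation >= referenceLevel;         // high correlation -> same scene
}
```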

If it is determined that the scene of the first image frame is identical to the scene of the second image frame (YES in S160), the graphics processing apparatus 200 may perform rendering of the second image frame by applying the generated rendering LoD (S180). The case where the scenes are the same may include the case where the degree of change between the scenes is lower than the reference level.

On the other hand, when it is determined that the scene of the first image frame is not the same as the scene of the second image frame (NO in S160), the graphics processing apparatus 200 may perform the rendering of the second image frame according to the conventional method, without applying the rendering LoD generated in step S140 (S200). That is, the graphics processing apparatus 200 may perform rendering by applying the same rendering LoD to all areas of the second image frame.

FIGS. 9 to 11 show the operation of the graphic processing apparatus shown in FIG. 2A in more detail.

FIGS. 9 to 11 are provided for convenience in describing the operation of the modules included in the graphic processing apparatus 200, and the operation timings of the modules may differ from those shown.

Referring to FIG. 9, when the graphic data GD1 of the first image frame IM1 is input to the graphic processing apparatus 200A, the rendering module 210 may render the graphic data GD1 to generate the first image frame IM1 and may transmit the generated first image frame IM1 to the display unit 151 through a frame buffer or a display control device.

To detect a scene change between the first image frame IM1 and the second image frame, the scene change detection module 270A may temporarily store the graphic data GD1 of the first image frame, or may extract and store information related to the scene of the first image frame from the graphic data GD1.

The focal distance extraction module 230 and the depth map extraction module 240 may extract the focal distance and the depth map, respectively, from the first image frame being rendered, and may transmit the extracted focal distance FD1 and depth map DM1 to the rendering LoD generation module 250. When the graphic processing apparatus 200A further includes the representative depth value extraction module 242, the representative depth value extraction module 242 may divide the depth map DM1 into a plurality of tiles and extract a representative depth value for each of the divided tiles. The representative depth value extraction module 242 may transmit the depth map DM1 including the representative depth values to the rendering LoD generation module 250.

According to an embodiment, when the graphic processing apparatus 200A further includes the blur area detection module 260, the blur area detection module 260 may detect the blur area from the rendered first image frame IM1 and transmit the information BR1 on the detected blur area to the rendering LoD generation module 250.

The rendering LoD generation module 250 may generate information LOD1 on the rendering LoD to be set for the second image frame, based on the received focal distance FD1 and depth map DM1, or based on the focal distance FD1, the depth map DM1, and the blur area information BR1. The information LOD1 on the rendering LoD may include a rendering LoD for each of the plurality of pixels of the second image frame, or a rendering LoD for each of the plurality of tiles of the second image frame.

Referring to FIG. 10, when the graphic data GD2 of the second image frame IM2 is input to the graphic processing apparatus 200A, the scene change detection module 270A may compare the stored graphic data GD1 of the first image frame (or the stored information related to the scene of the first image frame) with the graphic data GD2 of the second image frame (or information related to the scene of the second image frame).

If, as a result of the comparison, the scene has not changed (i.e., the scenes are the same or the degree of change is lower than the reference level), the scene change detection module 270A may output to the rendering module 210 a signal or command CS instructing that the information LOD1 on the rendering LoD generated by the rendering LoD generation module 250 be applied when rendering the second image frame.

Referring to FIG. 11, the rendering module 210 may render the graphic data GD2 based on the rendering LoD included in the information LOD1 on the rendering LoD, and may display the rendered second image frame IM2 on the display unit 151.

As in FIG. 9, the focal distance extraction module 230 and the depth map extraction module 240 may extract the focal distance FD2 and the depth map DM2, respectively, from the second image frame IM2 being rendered. If, according to an embodiment, the graphic processing apparatus 200A includes the representative depth value extraction module 242 and/or the blur area detection module 260, the representative depth value extraction module 242 may divide the depth map DM2 into a plurality of tiles and extract a representative depth value for each of the divided tiles, and the blur area detection module 260 may detect the blur area from the rendered second image frame IM2 and transmit the information BR2 on the detected blur area to the rendering LoD generation module 250.

The rendering LoD generation module 250 may generate information LOD2 on the rendering LoD to be set for the next frame of the second image frame (e.g., the third image frame), based on the received focal distance FD2 and depth map DM2, or based on the focal distance FD2, the depth map DM2, and the blur area information BR2. The information LOD2 on the rendering LoD may include a rendering LoD for each of the plurality of pixels of the third image frame, or a rendering LoD for each of the plurality of tiles of the third image frame.

FIG. 12 is a flowchart for explaining the operation of the graphic processing apparatus shown in FIG. 2B.

Referring to FIG. 12, the graphic processing apparatus 200 may detect whether a scene change has occurred between the first image frame and the second image frame (S300). Specifically, the graphic processing apparatus 200 may receive, simultaneously or sequentially, the graphic data of the first image frame, which is the image frame currently to be rendered, and the graphic data of the second image frame, which is the next frame of the first image frame, and may detect whether a scene change has occurred using the received graphic data.

If, as a result of the detection, the scene of the first image frame and the scene of the second image frame are the same (or the degree of scene change is lower than the reference level) (YES in S320), the graphics processing apparatus 200 may extract the focal distance of the first image frame (S340) and extract the depth map (S360).

Based on the extracted focal distance and depth map, the graphic processing apparatus 200 may generate a rendering LoD to be applied when rendering the second image frame (S380), and may perform rendering of the second image frame based on the generated rendering LoD (S400).

If it is determined that the scenes of the first image frame and the second image frame are not the same (NO in S320), the graphics processing apparatus 200 may not perform the rendering LoD generation operations of steps S340 to S380 (S420).

FIGS. 13 and 14 show the operation of the graphic processing apparatus shown in FIG. 2B in more detail.

Referring to FIG. 13, when the graphic data GD1 of the first image frame IM1 is input to the graphic processing apparatus 200B, the rendering module 210 may render the graphic data GD1 to generate the first image frame IM1.

In order to detect a scene change between the first image frame IM1 and the second image frame IM2, the scene change detection module 270B may obtain not only the graphic data GD1 of the first image frame (or information related to the scene of the first image frame) but also the graphic data GD2 of the second image frame, which is the next frame of the first image frame (or information related to the scene of the second image frame).

The scene change detection module 270B may detect whether a scene change has occurred based on the obtained graphic data GD1 and GD2, or based on the obtained information. If the scene has not changed, the scene change detection module 270B may output, to each of the modules or to a device that controls them, a signal or command CS1 that activates the operation of the modules 230, 240, 242, 250, and 260 associated with the rendering LoD generation operation.

The focal distance extraction module 230 and the depth map extraction module 240 may extract the focal distance and the depth map, respectively, from the first image frame being rendered, and may transmit the extracted focal distance FD1 and depth map DM1 to the rendering LoD generation module 250. The representative depth value extraction module 242 may divide the depth map DM1 into a plurality of tiles, extract a representative depth value for each of the divided tiles, and transmit the depth map DM1 including the representative depth values to the rendering LoD generation module 250.

According to an embodiment, when the graphic processing apparatus 200B further includes the blur area detection module 260, the blur area detection module 260 may detect the blur area from the rendered first image frame IM1 and transmit the information BR1 on the detected blur area to the rendering LoD generation module 250.

The rendering LoD generation module 250 may generate information LOD1 on the rendering LoD to be set for the second image frame, based on the received focal distance FD1 and depth map DM1, or based on the focal distance FD1, the depth map DM1, and the blur area information BR1. The information LOD1 on the rendering LoD may include a rendering LoD for each of the plurality of pixels of the second image frame, or a rendering LoD for each of the plurality of tiles of the second image frame.

Referring to FIG. 14, the rendering module 210 may generate the second image frame IM2 using the information LOD1 on the rendering LoD generated by the rendering LoD generation module 250 and the graphic data GD2 of the second image frame.

In order to detect a scene change between the second image frame IM2 and the third image frame IM3, the scene change detection module 270B may obtain not only the graphic data GD2 of the second image frame (or information related to the scene of the second image frame) but also the graphic data GD3 of the third image frame, which is the next frame of the second image frame (or information related to the scene of the third image frame).

The scene change detection module 270B may detect whether a scene change has occurred based on the obtained graphic data GD2 and GD3, or based on the obtained information. When the scene has changed, the scene change detection module 270B may output, to each of the modules or to a device that controls them, a signal or command CS2 that deactivates the operation of the modules 230, 240, 242, 250, and 260 associated with the rendering LoD generation operation.

In response to the signal or command CS2, the modules 230, 240, 242, 250, and 260 may not perform the operations for generating a rendering LoD. The rendering module 210 may then perform conventional rendering of the third image frame (applying the same rendering LoD to all pixels of the third image frame).

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. Accordingly, the true scope of the present invention should be determined by the technical idea of the appended claims.

Claims (18)

1. A graphics processing apparatus comprising:
a focal length extraction module for extracting a focal length of a first image frame;
a depth map extraction module for extracting a depth map of the first image frame;
a rendering LoD generation module for generating a rendering level of detail (LoD) for a second image frame, which is the next frame of the first image frame, based on the extracted focal length and depth map; and
a rendering module that performs rendering on the second image frame by applying the rendering LoD based on a scene change between the first image frame and the second image frame.
2. The graphics processing apparatus of claim 1,
wherein the focal length extraction module and the depth map extraction module
extract the focal length and the depth map from the first image frame being rendered by the rendering module.
3. The graphics processing apparatus of claim 1,
further comprising a blur area detection module for detecting a blur area from the rendered first image frame,
wherein the rendering LoD generation module
generates the rendering LoD based on the focal length, the depth map, and the blur area.
4. The graphics processing apparatus of claim 1,
further comprising a representative depth value extraction module for dividing the depth map into a plurality of tiles and extracting a representative depth value of each of the plurality of divided tiles,
wherein the rendering LoD generation module
generates the rendering LoD based on the focal length and the representative depth value of each of the plurality of tiles.
5. The graphics processing apparatus of claim 4, wherein the representative depth value of each of the plurality of tiles is
an average value, a middle value, or a mode value of the depth values included in each of the plurality of tiles.
6. The graphics processing apparatus of claim 1,
further comprising a scene change detection module for comparing scene identity between the first image frame and the second image frame,
wherein the rendering module
performs rendering of the second image frame by applying the generated rendering LoD if the scene of the first image frame and the scene of the second image frame are the same.
7. The graphics processing apparatus of claim 6,
wherein the case where the scene of the first image frame and the scene of the second image frame are the same includes a case where the degree of change between the scene of the first image frame and the scene of the second image frame is lower than a reference degree.
8. A method of operating a graphics processing device, the method comprising:
extracting a focal length of a first image frame;
extracting a depth map of the first image frame;
generating a rendering level of detail (LoD) for a second image frame, which is the next frame of the first image frame, based on the extracted focal length and depth map; and
performing rendering on the second image frame by applying the rendering LoD based on a scene change between the first image frame and the second image frame.
9. The method of claim 8,
Wherein the extracting of the focal length and the extracting of the depth map comprise:
extracting the focal length and the depth map from the first image frame being rendered.
10. The method of claim 8,
Further comprising detecting a blur area from the rendered first image frame,
Wherein the generating the rendering LoD comprises:
generating the rendering LoD based on the focal length, the depth map, and the blur area.
11. The method of claim 8, wherein extracting the depth map comprises:
Dividing the extracted depth map into a plurality of tiles;
Further comprising extracting a representative depth value of each of a plurality of divided tiles,
Wherein the generating the rendering LoD comprises:
generating the rendering LoD based on the focal length and the representative depth value of each of the plurality of tiles.
12. The method of claim 11, wherein the representative depth value of each of the plurality of tiles is
an average value, a middle value, or a mode value of the depth values included in each of the plurality of tiles.
13. The method of claim 8, further comprising:
comparing scene identity between the first image frame and the second image frame; and
performing rendering on the second image frame by applying the generated rendering LoD if, as a result of the comparison, the scene of the first image frame is identical to the scene of the second image frame.
14. The method of claim 13,
wherein the case where the scene of the first image frame and the scene of the second image frame are the same includes a case where the degree of change between the scene of the first image frame and the scene of the second image frame is lower than a reference degree.
15. A terminal comprising:
a graphics processing device for rendering a first image frame;
a display unit for displaying the rendered first image frame; and
a control unit for controlling the graphics processing device and the display unit,
The graphic processing apparatus includes:
A focal distance extracting module for extracting a focal distance of the first image frame;
A depth map extraction module for extracting a depth map of the first image frame;
A rendering LoD generation module for generating a rendering LoD for a second image frame, which is the next frame of the first image frame, based on the extracted focal length and depth map; And
and a rendering module for performing rendering on the second image frame by applying the rendering LoD based on a scene change between the first image frame and the second image frame.
16. The terminal of claim 15,
The graphic processing apparatus includes:
Further comprising a blur area detection module for detecting a blur area from the rendered first image frame,
Wherein the rendering LoD generation module comprises:
generates the rendering LoD based on the focal length, the depth map, and the blur area.
17. The terminal of claim 15,
The graphic processing apparatus includes:
Further comprising a representative depth value extracting module for dividing the depth map into a plurality of tiles and extracting a representative depth value of each of the plurality of divided tiles,
Wherein the rendering LoD generation module comprises:
generates the rendering LoD based on the focal length and the representative depth value of each of the plurality of tiles.
18. The terminal of claim 15,
The graphic processing apparatus includes:
Further comprising a scene change detection module for comparing a scene identical between the first image frame and the second image frame,
The rendering module includes:
performs rendering on the second image frame by applying the generated rendering LoD when the scene of the first image frame and the scene of the second image frame are the same.