KR20130032482A - Mobile terminal and encoding method of mobile terminal for near field communication - Google Patents

Mobile terminal and encoding method of mobile terminal for near field communication Download PDF

Info

Publication number
KR20130032482A
Authority
KR
South Korea
Prior art keywords
encoding
sound source
source data
mobile terminal
cpu
Prior art date
Application number
KR1020110096092A
Other languages
Korean (ko)
Inventor
정지항
Original Assignee
엘지전자 주식회사
Priority date
Filing date
Publication date
Application filed by 엘지전자 주식회사 filed Critical 엘지전자 주식회사
Priority to KR1020110096092A priority Critical patent/KR20130032482A/en
Publication of KR20130032482A publication Critical patent/KR20130032482A/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/02 Terminal devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)

Abstract

The present invention provides a mobile terminal comprising a wireless communication unit connected to an external terminal through short-range communication, and a control unit that encodes sound source data to suit the short-range communication and transmits the encoded sound source data through the wireless communication unit. The control unit generates a plurality of encoding buffers for performing the encoding, allocates the sound source data as a plurality of sub sound source data corresponding to the plurality of encoding buffers, encodes each of the plurality of sub sound source data through its corresponding encoding buffer, and combines the encoded sub sound source data. The control unit includes at least one of a central processing unit (CPU), a digital signal processor (DSP), and a graphics processing unit (GPU), and each of the CPU, the DSP, and the GPU can perform the encoding operation.

Description

Mobile terminal and encoding method for mobile terminal for near field communication {MOBILE TERMINAL AND ENCODING METHOD OF MOBILE TERMINAL FOR NEAR FIELD COMMUNICATION}

The present invention relates to a mobile terminal and a method of performing encoding in a mobile terminal for near field communication, including Bluetooth communication.

A terminal can be divided into a mobile/portable terminal and a stationary terminal depending on whether it can be moved. A mobile terminal can further be divided into a handheld terminal and a vehicle-mounted terminal depending on whether the user can carry it directly.

Such a terminal has taken the form of a multimedia device with a variety of functions, for example still-image and video capture, playback of music or video files, gaming, and broadcast reception.

In addition, the terminal may transmit and receive video, games, document files, and the like with an external server or an external terminal through various communication methods. Among these, the Bluetooth communication method is preferred between devices at short range.

Bluetooth communication began with research into wireless solutions that would replace the cables connecting a mobile terminal and its peripherals, motivated by the need for a low-cost, low-power wireless interface between them. The Bluetooth Special Interest Group (SIG), initiated by Ericsson, was founded together with leading IT companies such as Nokia, IBM, Toshiba, and Intel. Bluetooth has since become a global standard, with companies including Motorola, Microsoft, Lucent Technologies, and 3Com participating in the SIG.

With Bluetooth, a user can make and receive calls while the mobile terminal stays in a pocket or bag. A stereo headset allows the user to listen to MP3 music as well as take phone calls wirelessly, and files such as photos and ringtones can be transferred between mobile devices or between a mobile device and a PC.

Meanwhile, when MP3 audio is transmitted over Bluetooth, the MP3 data is decoded and then re-encoded with the SBC (SubBand Codec). In other words, every Bluetooth stereo headset must support the SBC codec by default, and most Bluetooth headsets play music by decoding data that was encoded with the SBC codec.

Therefore, MP3 playback through Bluetooth fundamentally involves both MP3 decoding and SBC encoding, which consumes a large share of the mobile terminal's CPU (Central Processing Unit); as a result, sound dropouts can occur.

An object of the present invention is to provide a mobile terminal that solves the shortage of central processing unit (CPU) capacity when playing MP3 audio through Bluetooth by using a graphics processing unit (GPU) or a digital signal processor (DSP).

Another object of the present invention is to provide an encoding method that configures a plurality of encoding buffers in the sound source data encoding process, so that it can be applied not only to a GPU but also to a DSP or a CPU.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and that further embodiments are possible without departing from the spirit and scope of the invention as defined by the appended claims.

A mobile terminal according to an embodiment of the present invention for achieving the above objects includes a wireless communication unit connected to an external terminal through short-range communication, and a control unit that encodes sound source data to suit the short-range communication and controls the encoded sound source data to be transmitted through the wireless communication unit, wherein the control unit generates a plurality of encoding buffers for performing the encoding, allocates the sound source data as a plurality of sub sound source data corresponding to the plurality of encoding buffers, encodes the plurality of sub sound source data through the encoding buffer corresponding to each of them, and combines the encoded sub sound source data.

The short range communication includes Bluetooth communication, and the encoding operation includes SBC (Subband Codec) coding for the Bluetooth communication.

The controller includes at least one of a central processing unit (CPU), a digital signal processor (DSP), and a graphics processing unit (GPU), each of the CPU, the DSP, and the GPU performing the encoding operation.

The GPU simultaneously performs each encoding operation on the plurality of sub sound source data. The GPU is configured in a multi-threaded structure including a plurality of threads, each of which is set to correspond to the respective encoding buffer.

The CPU or the DSP sequentially performs each encoding operation on the plurality of sub sound source data.

The mobile terminal further includes a memory for storing the encoded sound source data.

According to another aspect of the present invention, a method for performing encoding in a mobile terminal for short-range communication includes encoding sound source data to suit the short-range communication and transmitting the encoded sound source data through the short-range communication, wherein performing the encoding includes generating a plurality of encoding buffers for the encoding, allocating the sound source data as a plurality of sub sound source data corresponding to the plurality of encoding buffers, encoding the plurality of sub sound source data through the encoding buffer corresponding to each of them, and combining the encoded sub sound source data.

The short range communication includes Bluetooth communication, and the encoding operation includes SBC (Subband Codec) coding for the Bluetooth communication.

The mobile terminal according to at least one embodiment of the present invention, configured as described above, can provide an encoding method that is applicable regardless of whether the encoding operation is performed by a CPU, a DSP, or a GPU.

In the encoding process, the data can be distributed across the encoding buffers, which makes memory allocation substantially more efficient.

The effects obtainable by the present invention are not limited to the above-mentioned effects, and other effects not mentioned will be clearly understood by those skilled in the art from the following description.

FIG. 1 is a flowchart of an SBC encoding process performed through a CPU in accordance with the present invention.
FIG. 2 is a flowchart of an SBC encoding process performed through a CPU and a DSP in accordance with the present invention.
FIG. 3 is a flowchart of an SBC encoding process performed through a GPU in accordance with the present invention.
FIG. 4 is a diagram schematically illustrating an operation of encoding target data by configuring a plurality of frame buffers in accordance with the present invention.
FIG. 5 is a block diagram of a mobile terminal according to one embodiment of the present invention.

Hereinafter, a mobile terminal related to the present invention will be described in detail with reference to the drawings. The suffixes "module" and "unit" for the components used in the following description are given or used interchangeably only for ease of drafting the specification, and do not themselves carry distinct meanings or roles.

The mobile terminal described in this specification may include a mobile phone, a smart phone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), and a navigation device. However, it will be readily apparent to those skilled in the art that the configuration according to the embodiments described herein may also be applied to fixed terminals such as digital TVs and desktop computers, except where it is applicable only to a mobile terminal.

As described above, the mobile terminal requires subband codec (SBC) encoding in order to play music through Bluetooth. Subband coding divides a signal, such as audio or an image, into several frequency bands and transforms each band; because redundancy exists in the image or voice, compression efficiency can be improved by exploiting that redundancy.
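The band-splitting idea behind subband coding can be illustrated with a minimal two-band (Haar-style) split: a low band of averages and a high band of differences. This is only a sketch of the principle; real SBC uses 4 or 8 bands derived by polyphase filtering, followed by quantization.

```python
def subband_split(samples):
    """Split an even-length sample list into a low band (averages)
    and a high band (differences)."""
    low = [(samples[i] + samples[i + 1]) / 2 for i in range(0, len(samples), 2)]
    high = [(samples[i] - samples[i + 1]) / 2 for i in range(0, len(samples), 2)]
    return low, high

def subband_merge(low, high):
    """Perfectly reconstruct the original samples from the two bands."""
    out = []
    for l, h in zip(low, high):
        out.extend([l + h, l - h])  # a = l + h, b = l - h
    return out
```

Compression comes from the fact that, for typical audio, the high band holds small values that can be coded with fewer bits.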

On the other hand, in the mobile environment, performing the Bluetooth subband codec (SBC) encoding process on the central processing unit (CPU) of the mobile terminal causes the problem that the CPU load becomes excessive. Thus, in general, a mobile terminal or computer may execute the encoding process on a digital signal processor (DSP) to distribute the load on the CPU.

Here, digital signal processor (DSP) refers broadly to the overall technology for processing digital signals. Because DSPs are primarily implemented in semiconductors, the term can also denote the microcontroller-like semiconductor chip itself.

A DSP converts analog signals into digital signals and processes them at high speed, so it can be widely applied to multimedia or digital communication devices that require complex signal processing. Everything humans see and hear reaches the eyes or ears in the form of analog signals; with a DSP chip, analog information such as the human voice can be understood by a computer, enabling applications such as answering machines, fax machines, and karaoke systems. Furthermore, since everything a computer processes, including pictures, images, and audio, can be turned into digital signals, DSP technology plays a key role in the multimedia era.

In other words, the DSP has an architecture optimized for the fast computational requirements of digital signal processing, providing a solution with better performance, lower latency, and lower power consumption.

Hereinafter, first, the SBC encoding process through the CPU and the SBC encoding process through the CPU and the DSP will be described.

FIG. 1 is a diagram illustrating the flow of performing an SBC encoding process through a CPU.

The CPU can read encoding target data from a sound source file such as an MP3 file (S110). Here, the sound source file can be regarded as an original file that has not undergone a process such as compression. The CPU reads the encoding target data in units of codesize × number of frames.

The CPU may then perform the encoding process on the target data while repeating a loop once per frame (S120). The data encoding process here refers to the SBC encoding process; the CPU therefore repeats the loop for the number of frames until the entire SBC encoding process is complete.

In detail, within the loop the SBC encoding process consists of analyzing the audio (S121), calculating the scale factor (S122), and packing the frame (S123).
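The CPU-only flow of FIG. 1 (S110 through S130) can be sketched as the skeleton below. The three step functions are hypothetical stand-ins for the real SBC routines, not an actual SBC implementation, and the frame size is an assumed value.

```python
FRAME_SIZE = 128  # samples per frame; assumed for illustration

def analyze(frame):            # S121: subband analysis (stub)
    return frame

def scale_factor(subbands):    # S122: per-band scale factor (stub)
    return max((abs(s) for s in subbands), default=0)

def pack_frame(subbands, sf):  # S123: bit-pack one encoded frame (stub)
    return (sf, tuple(subbands))

def sbc_encode(pcm):
    """Encode PCM data frame by frame, as in the S120 loop."""
    encoded = []
    for i in range(0, len(pcm), FRAME_SIZE):      # one loop pass per frame
        frame = pcm[i:i + FRAME_SIZE]             # slice of the data read in S110
        subbands = analyze(frame)                 # S121
        sf = scale_factor(subbands)               # S122
        encoded.append(pack_frame(subbands, sf))  # S123
    return encoded                                # result stored or sent in S130
```

The point of the sketch is the control flow: every frame passes through analysis, scale-factor calculation, and packing on the same core, which is why this variant loads the CPU heavily.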

The CPU may store the data that has undergone the SBC encoding process in a data file or transmit it to an external device (S130).

In the case of FIG. 1, the data reading and writing processes as well as the SBC encoding process are performed on the CPU. In the case of FIG. 2 below, by contrast, the SBC encoding process is performed on the DSP.

FIG. 2 is a diagram illustrating the flow of performing an SBC encoding process through a CPU and a DSP, where the DSP assists the CPU.

Similar to FIG. 1, the CPU may read a music file in PCM form (S210), and the CPU may store or transmit the SBC-encoded data (S230).

Comparing FIG. 2 with the process of FIG. 1, the difference is that the SBC encoding process is performed on the DSP.

The DSP performs an SBC encoding process by looping the number of frames (S220).

As described above, the SBC encoding process is performed by looping through the file analysis (S221), the scale factor calculation (S222), and the frame packing (S223).

Through this process, the DSP can relieve the CPU load in the SBC encoding process. Since the CPU performs the mobile terminal's various functions in addition to the SBC encoding process, distributing the CPU's load can greatly improve the terminal's performance.

However, while a DSP can be used alongside the CPU to ease the CPU's load, a DSP is fundamentally an expensive component. Using a DSP therefore increases the manufacturing cost, which can be a significant burden. In other words, there is a clear limit to mitigating CPU load with a DSP.

The present invention describes a method of using a graphics processing unit (GPU) in place of a DSP in an SBC encoding process.

A graphics processing unit (GPU) refers to a graphics device that processes image information and drives the display of a computer or mobile terminal. The GPU was originally created as a device or processor to assist the CPU with graphics processing; that is, it is designed to resolve the bottleneck that occurs when the CPU performs graphics tasks, and is also referred to as a graphics accelerator.

The GPU is an improvement on the 3D graphics acceleration chips originally used to implement three-dimensional (3D) graphics. It is a semiconductor chip that performs graphics arithmetic, also called a core; a faster GPU means a faster core.

In the present invention, rather than simply assisting the CPU with graphics processing, the GPU can replace the DSP in the SBC encoding process for music reproduction. Such use of a GPU is referred to as general-purpose computing on GPUs (GPGPU). In the present application, the GPU, acting as a GPGPU, may take over the role of the DSP in performing the SBC encoding process.

The GPU may perform SBC encoding for music reproduction over Bluetooth through various technologies, one example being CUDA (Compute Unified Device Architecture).

CUDA is a technology developed by NVIDIA Corporation that provides a language environment enabling programmers and developers to write software applications for complex computational problems such as video and audio encoding, modeling for oil and gas exploration, and medical imaging. Applications are configured for parallel execution on the GPU and typically rely on specific features of the GPU.

CUDA has a single-instruction, multiple-thread (SIMT) structure and can support as many parallel processing units as there are compute units in the GPU. In a SIMT structure, a single instruction can be executed by multiple threads on different data at the same time.

In the context of the present invention, when performing the SBC encoding process on the GPU through CUDA, a file can be divided into several sub-files that are processed in parallel, one per thread.

FIG. 3 is a diagram schematically illustrating a method of using a GPU in an SBC encoding process.

First, similar to the processes of FIGS. 1 and 2, the CPU reads encoding target data from a sound source (S310). This is done on the CPU, and the read data is transferred to the GPU.

The GPU generates a plurality of encoding buffers to process the read encoding target data (S320), and distributes the target data across them: the data to be encoded is divided into a plurality of sub data, one portion per encoding buffer. In addition, for the encoding process, each encoding buffer corresponds to one thread in the GPU's SIMT (multi-threaded) structure.

The GPU then performs the SBC encoding process (S330). Here, the SBC encoding is carried out not in a single buffer and its corresponding thread, but in multiple threads at the same time; that is, it runs simultaneously in each of thread 1, thread 2, ..., thread X. The sub data distributed to each encoding buffer undergoes SBC encoding concurrently in the thread corresponding to that buffer. Within each thread, the encoding consists of audio analysis (S331), scale factor calculation (S332), and frame packing (S333).
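The distribute-encode-combine flow of S320 through S340 can be sketched with Python threads standing in for CUDA threads. The `encode_buffer` function below is a placeholder for the real per-thread SBC work (S331 through S333), not an actual SBC codec; only the buffering and parallel structure follows the figure.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_buffer(sub_data: bytes) -> bytes:
    """Stand-in for analysis + scale factor + frame packing on one buffer."""
    return bytes(b ^ 0xFF for b in sub_data)   # placeholder transform

def parallel_encode(data: bytes, num_buffers: int) -> bytes:
    # S320: distribute the target data across num_buffers encoding buffers
    chunk = -(-len(data) // num_buffers)       # ceiling division
    buffers = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # S330: one worker per buffer, all encoding at the same time
    with ThreadPoolExecutor(max_workers=len(buffers)) as pool:
        results = list(pool.map(encode_buffer, buffers))
    # S340: synthesize the per-thread results, in order, into one stream
    return b"".join(results)
```

Because `pool.map` preserves input order, the combined output is identical to encoding the whole stream sequentially, which is what lets the per-buffer results be concatenated in S340 without reordering.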

The GPU synthesizes the encoded frames produced by the individual threads into one block of encoded data (S340), which is then delivered to the CPU.

The CPU may store the encoded data in a data file or transmit it to an external device (S350).

Because the GPU supports multi-threaded operation through CUDA, the SBC encoding process runs in every thread at the same time, so the encoding can complete faster than when it is performed by the CPU alone or by the CPU together with a DSP.

Meanwhile, the scheme of generating a plurality of encoding buffers and encoding the data in each buffer can be implemented not only on a GPU that supports multi-threaded operation, but also on a CPU or a DSP. Since the GPU supports multi-threaded operation, the SBC encoding for all buffers can proceed at the same time, whereas on a CPU or DSP the SBC encoding for the next buffer can be performed only after the SBC encoding for the current buffer is complete.

FIG. 4 is a diagram schematically illustrating an operation of encoding target data by configuring a plurality of frame buffers in accordance with the present invention.

In FIG. 4, the multiple encoding buffers 420a through 420x may be generated by a CPU, DSP, or GPU. An encoding buffer can be regarded as a space where data is stored and an encoding operation is performed. If an encoding operation is viewed as having an input buffer for the encoding input and an output buffer for the encoded output, the encoding buffer corresponds to the input buffer and the frame buffer to the output buffer.

The sound source data 410 may be divided into a plurality of sub data, and each sub data may be allocated to one of the encoding buffers 420a through 420x.

Each encoding buffer may include header information H1 and sub data D1 through DX. The header information carries parameters needed for encoding, such as size, frame rate, and channel information. The sub data may be data in pulse code modulation (PCM) form.
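A hypothetical layout for these encoding buffers pairs the header information with a slice of PCM sub data. The field names below are assumptions for illustration; the text only says the header carries size, frame rate, and channel information.

```python
from dataclasses import dataclass

@dataclass
class EncodingBuffer:
    header: dict   # H1: size / frame rate / channel information
    sub_data: list # D1..DX: the PCM samples assigned to this buffer

def make_buffers(pcm, num_buffers, base_header):
    """Divide sound source data 410 into sub data and build
    encoding buffers 420a..420x, each with its own header copy."""
    chunk = -(-len(pcm) // num_buffers)   # ceiling division
    return [EncodingBuffer(header=dict(base_header, size=len(pcm[i:i + chunk])),
                           sub_data=pcm[i:i + chunk])
            for i in range(0, len(pcm), chunk)]
```

Each buffer gets its own header copy with the actual slice size filled in, since the last buffer may hold fewer samples than the others.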

Each encoding buffer may perform its own SBC encoding process, and the result is passed to the frame buffers 430a through 430x. The frame buffers are buffers that receive the encoded results of the sub data held in the encoding buffers. Excluding the header information H1 contained in each frame buffer, the encoded data D1' through DX' are synthesized into one block of encoded data 440.

As described above, in the case of the GPU, since multi-threading is supported, each encoding process may be performed in a separate thread.

In the case of a CPU or a DSP, on the other hand, the encoding may be performed buffer by buffer: after encoding is performed in the first encoding buffer 420a, encoding may be performed in the second encoding buffer 420b, and so on through the last encoding buffer 420x. Even when many encoding buffers are configured this way, a speed increase can be expected compared with processing everything in a single loop.
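The buffer-by-buffer variant for a CPU or DSP can be sketched as a plain loop over the same buffer list, followed by the same combine step. As before, `encode_buffer` is a placeholder for the real SBC work, not an actual codec.

```python
def encode_buffer(sub_data):
    """Stand-in for the SBC encoding of one buffer's sub data."""
    return [s * 2 for s in sub_data]   # placeholder transform

def sequential_encode(buffers):
    """Encode each buffer in order (420a, 420b, ..., 420x),
    then combine the per-buffer results into one stream."""
    encoded = []
    for buf in buffers:                # next buffer starts only after the
        encoded.append(encode_buffer(buf))  # previous one has finished
    return [x for part in encoded for x in part]
```

The buffer structure is identical to the GPU case; only the scheduling differs, which is why the same data layout can serve a CPU, a DSP, or a GPU.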

The number of encoding buffers may be set differently depending on the architecture, that is, whether a CPU, DSP, or GPU performs the encoding. For example, the number of encoding buffers may be set to 380 when the GPU performs the encoding operation, and this number may be changed.

Meanwhile, the present invention may also create pre-encoded frame buffers. For example, if the next sound source can be predicted, the encoding buffers may be generated in advance using the method described above, and the data encoded and stored. Encoded data can then be sent from the pre-stored frame buffers rather than created in real time.
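The pre-frame-buffer idea amounts to an encode-ahead cache: a predicted track is encoded in advance, and playback later serves the stored result instead of encoding in real time. The cache shape and the encode callback below are assumptions for illustration.

```python
class PreEncodeCache:
    """Encode predicted sound sources ahead of time and serve the
    stored results, falling back to real-time encoding on a miss."""

    def __init__(self, encode_fn):
        self.encode_fn = encode_fn   # the SBC encode step (any callable)
        self.cache = {}              # track id -> pre-encoded data

    def prefetch(self, track_id, pcm):
        """Encode a predicted next track in advance and store it."""
        self.cache[track_id] = self.encode_fn(pcm)

    def get_encoded(self, track_id, pcm):
        """Serve pre-encoded data if available; otherwise encode now."""
        if track_id in self.cache:
            return self.cache.pop(track_id)  # hit: no real-time encoding
        return self.encode_fn(pcm)           # miss: encode on the spot
```

On a hit, no encoding work happens at playback time, which is exactly the saving the pre-frame-buffer scheme targets.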

FIG. 5 is a block diagram of a mobile terminal according to an embodiment of the present invention.

The mobile terminal 100 may include a wireless communication unit 110, a power supply unit 120, a control unit 130, a memory 140, and the like. The components shown in FIG. 5 are not essential, and a mobile terminal having more or fewer components may be implemented.

Hereinafter, the components will be described in order.

The wireless communication unit 110 may include one or more modules that enable wireless communication between the mobile terminal 100 and a wireless communication system, or between the mobile terminal 100 and a network in which the mobile terminal 100 is located. For example, the wireless communication unit 110 may include a broadcast receiving module, a mobile communication module, a wireless internet module, a short-range communication module, and a location information module 115. In connection with the present invention, the wireless communication unit 110 supports Bluetooth communication, making it possible to transmit and receive sound sources with an external device capable of Bluetooth communication.

The short-range communication module is a module for short-range communication. Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, and the like can be used as short-range communication technologies.

The power supply unit 120 receives external and internal power under the control of the controller 130 and supplies the power required for the operation of each component.

The controller 130 typically controls the overall operation of the mobile terminal, performing, for example, control and processing related to voice calls, data communication, and video calls.

The controller 130 may include a central processing unit (CPU) 132, a graphics processing unit (GPU) 134, and a digital signal processor (DSP) 136. The CPU 132 can control the overall operation of the mobile terminal 100 and, in connection with the present invention, can read and store sound source data or transmit it to an external device. It can also perform basic control operations over the DSP 136 and the GPU 134. The GPU 134 may perform graphics control functions associated with display operations and, in connection with the present invention, may perform an encoding process based on its multi-threading capability. The DSP 136 converts analog signals into digital signals and processes data at high speed, and may perform an encoding process in connection with the present invention.

The memory unit 140 may store programs for the processing and control performed by the controller 130, and may also temporarily store input/output data (for example, encoded data and sound source data).

The memory 140 may include at least one storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card-type memory (e.g., SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), a magnetic disk, and/or an optical disk.

The various embodiments described herein may be embodied in a recording medium readable by a computer or similar device using, for example, software, hardware, or a combination thereof.

According to a hardware implementation, the embodiments described herein include application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), and the like. It may be implemented using at least one of processors, controllers, micro-controllers, microprocessors, and electrical units for performing other functions. The described embodiments may be implemented by the controller 130 itself.

According to the software implementation, embodiments such as the procedures and functions described herein may be implemented as separate software modules. Each of the software modules may perform one or more of the functions and operations described herein. Software code can be implemented in a software application written in a suitable programming language. The software code may be stored in the memory 140 and executed by the controller 130.

Further, according to an embodiment of the present invention, the above-described method can be implemented as a code that can be read by a processor on a medium on which the program is recorded. Examples of the medium that can be read by the processor include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage, etc., and may be implemented in the form of a carrier wave (e.g., transmission over the Internet) .

The mobile terminal described above is not limited to the configurations and methods of the embodiments described above; all or some of the embodiments may be selectively combined so that various modifications can be made.

100 mobile terminal 110 wireless communication unit
120 Power Supply 130 Control Unit
132 CPU 134 GPU
136 DSP

Claims (12)

A wireless communication unit connected to an external terminal through short-range communication; And
A control unit which encodes sound source data to suit the short-range communication, and controls the encoded sound source data to be transmitted through the wireless communication unit,
The control unit,
Create a plurality of encoding buffers for performing the encoding,
Assigning the sound source data to a plurality of sub sound source data to correspond to the plurality of encoding buffers,
Encoding the plurality of sub sound source data through the encoding buffer corresponding to each of the plurality of sub sound source data,
Combining the encoded respective sub sound source data,
Mobile terminal.
The method of claim 1,
The short range communication includes Bluetooth communication,
The encoding operation includes SBC (Subband Codec) coding for the Bluetooth communication.
Mobile terminal.
The method of claim 1,
The control unit includes at least one of a central processing unit (CPU), a digital signal processor (DSP), and a graphics processing unit (GPU),
The CPU, the DSP and the GPU each performing the encoding operation,
Mobile terminal.
The method of claim 3, wherein
The GPU simultaneously performing respective encoding operations on the plurality of sub-sound source data,
Mobile terminal.
The method of claim 4, wherein
The GPU is composed of a multi-threaded structure including a plurality of threads,
Each of the plurality of threads is set to correspond to the respective encoding buffer,
Mobile terminal.
The method of claim 3, wherein
Wherein the CPU or the DSP sequentially performs respective encoding operations on the plurality of sub sound source data,
Mobile terminal.
The method of claim 1,
Further comprising a memory for storing the encoded sound source data,
Mobile terminal.
Encoding the sound source data to be suitable for near field communication; And
Transmitting the encoded sound source data via the short range communication;
Performing the encoding,
Generating a plurality of encoding buffers for performing the encoding;
Allocating the sound source data into a plurality of sub sound source data to correspond to the plurality of encoding buffers;
Performing encoding on the plurality of sub sound source data through the encoding buffer corresponding to each of the plurality of sub sound source data; And
Combining the encoded sub sound source data with each other;
A method for performing encoding of a mobile terminal for near field communication.
The method of claim 8,
The short range communication includes Bluetooth communication,
The encoding operation includes SBC (Subband Codec) coding for the Bluetooth communication.
A method for performing encoding of a mobile terminal for near field communication.
The method of claim 8,
The encoding step is performed by at least one of a central processing unit (CPU), a digital signal processor (DSP), and a graphics processing unit (GPU),
A method for performing encoding of a mobile terminal for near field communication.
The method of claim 10,
wherein the GPU simultaneously performs the respective encoding operations on the plurality of sub sound source data,
A method for performing encoding of a mobile terminal for near field communication.
The method of claim 10,
wherein the CPU or the DSP sequentially performs the respective encoding operations on the plurality of sub sound source data,
A method for performing encoding of a mobile terminal for near field communication.
KR1020110096092A 2011-09-23 2011-09-23 Mobile terminal and encoding method of mobile terminal for near field communication KR20130032482A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110096092A KR20130032482A (en) 2011-09-23 2011-09-23 Mobile terminal and encoding method of mobile terminal for near field communication

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110096092A KR20130032482A (en) 2011-09-23 2011-09-23 Mobile terminal and encoding method of mobile terminal for near field communication

Publications (1)

Publication Number Publication Date
KR20130032482A true KR20130032482A (en) 2013-04-02

Family

ID=48435257

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110096092A KR20130032482A (en) 2011-09-23 2011-09-23 Mobile terminal and encoding method of mobile terminal for near field communication

Country Status (1)

Country Link
KR (1) KR20130032482A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103260034A (en) * 2013-05-14 2013-08-21 李小林 OpenCL J2K compression method


Similar Documents

Publication Publication Date Title
US10924876B2 (en) Interpolating audio streams
CN105378646B Method and apparatus for multiple simultaneous audio modes
US11395083B2 (en) Scalable unified audio renderer
CN109076304A (en) The Application Programming Interface presented for adaptive audio
US11356796B2 (en) Priority-based soundfield coding for virtual reality audio
JP2014072894A (en) Camera driven audio spatialization
US11429340B2 (en) Audio capture and rendering for extended reality experiences
US11089428B2 (en) Selecting audio streams based on motion
CN106688015B (en) Processing parameters for operations on blocks when decoding images
CN111355978B (en) Video file processing method and device, mobile terminal and storage medium
RU2656727C1 (en) Compression control surfaces supported by virtual memory
US9538208B2 (en) Hardware accelerated distributed transcoding of video clips
WO2022143258A1 (en) Voice interaction processing method and related apparatus
US10027994B2 (en) Interactive audio metadata handling
CN116569255A (en) Vector field interpolation of multiple distributed streams for six degree of freedom applications
CN111355997B (en) Video file generation method and device, mobile terminal and storage medium
CN111355960B (en) Method and device for synthesizing video file, mobile terminal and storage medium
KR20130032482A (en) Mobile terminal and encoding method of mobile terminal for near field communication
US10727858B2 (en) Error resiliency for entropy coded audio data
TW202107451A (en) Performing psychoacoustic audio coding based on operating conditions
WO2022135105A1 (en) Video dubbing method and apparatus for functional machine, terminal device and storage medium
CN112799858A (en) Heterogeneous simulation model data processing method and system under heterogeneous joint simulation environment
KR20120077504A (en) Multimedia contents processing method and system
CN117435532B (en) Copying method, device and storage medium based on video hardware acceleration interface
US20240114312A1 (en) Rendering interface for audio data in extended reality systems

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination