CN105245795A - Method of composing multimedia data and video player for playing moving pictures in an android operating system - Google Patents


Info

Publication number
CN105245795A
CN105245795A (application CN201410345468.4A)
Authority
CN
China
Prior art keywords
data
video
pixel data
audio
android
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410345468.4A
Other languages
Chinese (zh)
Inventor
金垠希
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Edufeelmeida
Original Assignee
Edufeelmeida
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Edufeelmeida filed Critical Edufeelmeida
Publication of CN105245795A


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2365Multiplexing of several video streams
    • H04N21/2368Multiplexing of audio and video streams

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Provided are a method of composing multimedia data and a video player for playing moving pictures in an Android operating system (OS). The video player comprises: a control unit that is accessed through open-source FFmpeg programs; a data storage unit that extracts audio source data and video source data from a moving picture; a decoding unit that converts the audio source data and the video source data into audio pulse-code modulation (PCM) data and video pixel data; a sound reproduction unit that reproduces sound from the audio PCM data by using the AudioTrack class; and an image generation unit that generates display images from the video pixel data by using a graphics application programming interface (API). By means of the method and the player, multimedia data are easily composed with one another, and the composite multimedia data are easily displayed on an Android mobile device.

Description

Method of composing multimedia data and video player for playing moving pictures in an Android system
Technical field
Exemplary embodiments relate to a method of composing multimedia data and to a video player for playing moving pictures, and more particularly, to a method of composing multimedia data, such as a moving picture and image data, in a mobile system, and to an Android video player for playing a moving picture including the composite multimedia data in an Android mobile system.
Background Art
Recently, owing to advances in transportation and communication technology, cultural products that are produced and consumed locally in particular regions increasingly spread and are consumed throughout the world. In particular, the remarkable development of information and communication technology (ICT) allows personal mobile devices with high audio/video performance, such as smartphones and tablet computers, to be used all over the world regardless of country or continent, so that cultural products spread rapidly and are consumed worldwide regardless of time and place. As a result, the market for cultural products grows exponentially beyond national borders.
Cultural products having audio and video content, such as films, TV dramas and music, are often consumed through the emotions and impressions of consumers, and thus many consumers can share the same impression of a cultural product regardless of their geographical location. Devoted consumers of cultural products often form their own groups, such as fan clubs, and participate in various online and offline activities related to the cultural products they share, which can create associated markets. For example, markets for fashion, tourism, accessories and characters can be derived from films, music and TV dramas.
From the viewpoint of consumers, however, the derived products of fashion, tourism, accessories and characters (referred to as secondary cultural content) are usually only weakly related to the films, TV dramas and music (referred to as primary cultural content).
For example, the theme song of a popular film or drama can easily be consumed by downloading the music in digital multimedia form from an Internet site onto the consumer's own mobile device. In contrast, the shooting location of the film or drama can be experienced actively and directly only by expensive travel, or passively and indirectly only through images and moving pictures of the location, most of which are provided by the supplier of the primary cultural content. That is, from the consumer's personal viewpoint, the shooting location and the theme song are not bound to each other and are consumed separately.
The separation between the personal trip to the shooting location and the individually downloaded theme song is expected to deepen as cultural products are distributed and consumed worldwide. The theme song can be consumed easily and cheaply on a personal mobile device merely by downloading it over the Internet, regardless of the consumer's country. Travel to the shooting location, however, is subject to severe constraints such as the high cost and long duration of international travel. For these reasons, consumers usually buy only the primary cultural product, or are offered a simple combination of primary and secondary cultural products by the supplier, and cannot enjoy a personal, emotional combination of the two. For example, when a consumer of a Korean TV drama travels to a shooting location in Korea, it is difficult to combine the personal impressions gained at the Korean shooting site with the drama stored on his or her own mobile device.
Accordingly, various schemes have been proposed for composing the multimedia data of the primary and secondary cultural products in a virtual space such as a personal computer or a mobile device. For example, an individual may replace the background image of the music video of a theme song with an image of the shooting location taken by the individual, by using a conventional graphics program on a PC. As another example, personal images of the consumer may be composed with the singer of a music video on the consumer's own PC.
However, most cultural products are provided for personal enjoyment as such and therefore do not permit any copying (such as modification) or composition. As a result, personally composed moving pictures are rather coarse and of poor quality, which is insufficient to promote the consumption of cultural products.
In addition, owing to the limits of the hardware resources and codec programs accessible to the Android operating system, it is difficult to display various composite multimedia data on an Android mobile device. In particular, since codec programs depend on the operating system of each mobile device, the composite multimedia data cannot always be displayed by the built-in video player of the mobile device.
For these reasons, the personal composition of multimedia data is restricted by the operating system and the hardware resources of the mobile device. In particular, even the Android operating system, whose programming sources are relatively open and freely accessible compared with the closed operating system iOS whose programming sources are under the strong control of its supplier APPLE, cannot sufficiently allow the composition of multimedia data and the display of the composite multimedia data on an Android mobile device, owing to software issues of the codec programs. For example, a composite moving picture of derived cultural content (such as a chroma-key moving picture) or of derived cultural content combined with secondary cultural content cannot be displayed on a conventional Android mobile display.
Accordingly, there is now a demand for an improved Android mobile device that can easily compose multimedia data and sufficiently display the composite multimedia data.
Summary of the invention
Exemplary embodiments of the present invention provide a method of composing multimedia data on an Android mobile device.
Other exemplary embodiments of the present invention provide a video player for playing a moving picture including the composite multimedia data on an Android mobile device.
According to some exemplary embodiments, a method of composing multimedia data on an Android mobile device is provided. First data and second data are selected from a group of multimedia digital data stored in the Android mobile device operated by the Android operating system. First video data, second video data, first audio data and second audio data are generated by processing the first and second data, and the first video data and the second video data are composed to form composite video data. Composite audio data is selected from one of the first audio data, the second audio data, and blended data comprising at least parts of the first and second audio data. The composite video data and the composite audio data are synchronized, thereby forming composite multimedia data.
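As a rough illustration of the composite-audio selection described above, the following plain-Java sketch models the three options (first audio, second audio, or a blend of both PCM tracks). The class name, the averaging blend, and the 16-bit PCM representation are illustrative assumptions; the patent itself gives no code.

```java
/** Illustrative sketch of the composite-audio selection step: the composite
 *  audio is either the first PCM track, the second PCM track, or a blend of
 *  both. All names here are hypothetical, not taken from the patent. */
public class AudioSelect {
    public enum Choice { FIRST, SECOND, BLEND }

    /** Blend two 16-bit PCM tracks by averaging samples (a simple mix). */
    public static short[] blend(short[] a, short[] b) {
        int n = Math.min(a.length, b.length);
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            out[i] = (short) ((a[i] + b[i]) / 2);
        }
        return out;
    }

    /** Select the composite audio according to the user's choice. */
    public static short[] select(Choice c, short[] first, short[] second) {
        switch (c) {
            case FIRST:  return first;
            case SECOND: return second;
            default:     return blend(first, second);
        }
    }
}
```

Averaging is only the simplest possible blend; a real implementation would also need to resample and time-align the two tracks before mixing.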
In an exemplary embodiment, the first data and the second data are selected by activating a control icon on the display of the Android mobile device and touching a display screen on which the multimedia digital data are listed.
In an exemplary embodiment, the first video data, the second video data, the first audio data and the second audio data are generated as follows. Video data and audio data are separated from each of the first data and the second data, thereby generating first and second separated video data and first and second separated audio data according to the respective format types of the first and second data. First and second video pixel data are then generated from the first and second separated video data. The first and second video pixel data are processed by a chroma-key technique, so that the first video pixel data is converted into the first video data as background-removed video data and the second video pixel data is converted into the second video data as background video data. First and second audio pulse-code modulation (PCM) data are generated from the first and second separated audio data.
In an exemplary embodiment, the first and second video pixel data are processed by the chroma-key technique as follows. A first chroma-key value is set in the mobile device. The color channels of the first video pixel data are then associated with a first alpha channel corresponding to the first chroma-key value, so that a portion of the first video pixel data becomes transparent and the remaining first video pixel data forms the background-removed video data. A second chroma-key value is set in the mobile device. The color channels of the second video pixel data are then associated with a second alpha channel corresponding to the second chroma-key value, so that a portion of the second video pixel data becomes transparent and the second video pixel data forms the background video data.
In an exemplary embodiment, the first video pixel data and the second video pixel data have the same pixel size and frame size, and the first alpha channel and the second alpha channel are complementary to each other.
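The chroma-key processing of the two embodiments above can be sketched in plain Java for ARGB pixel arrays. The threshold-based color match, the method names, and the per-pixel compositing are assumptions made for illustration; they are not taken from the patent.

```java
/** Illustrative chroma-key step: pixels whose RGB color is near the chroma-key
 *  value get alpha 0 (transparent); all other pixels keep alpha 255.
 *  The tolerance-based match and all names are assumptions for illustration. */
public class ChromaKey {
    /** Returns a new ARGB pixel array with the alpha channel set from the key. */
    public static int[] applyKey(int[] argb, int keyRgb, int tolerance) {
        int[] out = new int[argb.length];
        for (int i = 0; i < argb.length; i++) {
            int p = argb[i];
            int dr = Math.abs(((p >> 16) & 0xFF) - ((keyRgb >> 16) & 0xFF));
            int dg = Math.abs(((p >> 8) & 0xFF) - ((keyRgb >> 8) & 0xFF));
            int db = Math.abs((p & 0xFF) - (keyRgb & 0xFF));
            boolean match = dr <= tolerance && dg <= tolerance && db <= tolerance;
            int alpha = match ? 0x00 : 0xFF;   // transparent where the key matches
            out[i] = (alpha << 24) | (p & 0x00FFFFFF);
        }
        return out;
    }

    /** Composite a keyed foreground over a same-size background frame. */
    public static int[] compose(int[] fgKeyed, int[] bg) {
        int[] out = new int[fgKeyed.length];
        for (int i = 0; i < out.length; i++) {
            out[i] = ((fgKeyed[i] >>> 24) != 0) ? fgKeyed[i] : bg[i];
        }
        return out;
    }
}
```

Because the first alpha channel marks the foreground and the second the background, a pixel that is transparent in the keyed foreground falls through to the background frame, which reflects the complementary relationship between the two alpha channels described above.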
In an exemplary embodiment, the first video data and the second video data are composed by the imagemux program in the library of FFmpeg.
In an exemplary embodiment, the first data comprises digital files of films, TV dramas and music generated by a cultural-product supplier, and the second data comprises digital files of moving pictures and images generated personally by the mobile device user.
In an exemplary embodiment, the first data comprises digital files of films, TV dramas and music generated by a cultural-product supplier, and the second data comprises digital files of films, TV dramas and music generated by another cultural-product supplier.
According to some exemplary embodiments, a video player for playing a moving picture on a mobile device operated by the Android operating system (OS) is provided. The video player comprises: a control unit that is accessed through the Android Native Development Kit (NDK) by the open-source multimedia control program FFmpeg, and that controls the hardware resources of the Android mobile device so as to play the moving picture; a data storage unit that separates the moving picture into audio source data and video source data and stores the audio source data and the video source data individually; a decoding unit that converts the audio source data and the video source data into audio pulse-code modulation (PCM) data and video pixel data; a sound generation unit that is connected to the decoding unit and generates sound from the audio PCM data in response to a sound generation signal of the control unit by using the AudioTrack class of the Java programming language; and an image generation unit that is connected to the decoding unit and generates images from the video pixel data in response to an image generation signal of the control unit by using a standard graphics application programming interface (API), the standard graphics API being connected to the Android operating system through a GLUE routine.
In an exemplary embodiment, the data storage unit comprises an audio packet queue and a video packet queue, wherein the audio source data is stored in the audio packet queue in chronological order as predetermined audio packets and the video source data is stored in the video packet queue in chronological order as predetermined video packets; and the decoding unit comprises an audio decoder that is connected to the audio packet queue and decodes the audio source data into the audio PCM data, and a video decoder that is connected to the video packet queue and decodes the video source data into the video pixel data.
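The packet-queue embodiment above can be modeled with ordinary Java collections. The Packet type, the boolean stream flag, and the FIFO queues are simplifying assumptions for illustration; a real player would hold demuxed FFmpeg packets here.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Illustrative model of the audio/video packet queues: demuxed packets are
 *  enqueued in chronological order and each decoder drains its own queue.
 *  The Packet record and queue layout are assumptions for illustration. */
public class PacketQueues {
    public static final class Packet {
        public final long pts;      // presentation timestamp
        public final byte[] data;
        public Packet(long pts, byte[] data) { this.pts = pts; this.data = data; }
    }

    private final Queue<Packet> audioQueue = new ArrayDeque<>();
    private final Queue<Packet> videoQueue = new ArrayDeque<>();

    /** The source reader pushes each demuxed packet to the matching queue. */
    public void push(boolean isAudio, Packet p) {
        (isAudio ? audioQueue : videoQueue).add(p);
    }

    /** Each decoder polls its queue in FIFO (chronological) order. */
    public Packet nextAudio() { return audioQueue.poll(); }
    public Packet nextVideo() { return videoQueue.poll(); }
}
```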
In an exemplary embodiment, the data storage unit further comprises a source reader that detects the format types of the sound and picture signals of the moving picture by using the libavformat program, which is one of the libraries of FFmpeg; and the decoding unit further comprises a codec library in which codec programs are stored as libraries of FFmpeg.
In an exemplary embodiment, the sound generation unit comprises: a PCM data transferor having a first function caller that calls the AudioTrack class in response to the sound generation signal, the PCM data transferor continuously and sequentially transferring the audio PCM data from the audio decoder by using the AudioTrack class; and a first hardware driver having a second function caller that calls the Java Native Interface (JNI) using the Android Native Development Kit (NDK) to access a hardware driving module of the mobile device, the first hardware driver driving the hardware resources for sound by using the hardware driving module, thereby generating sound according to the audio PCM data.
In an exemplary embodiment, the audio decoder is activated by the control unit in the form of a Java thread, so that the audio source data is continuously converted into the audio PCM data while the moving picture is played.
In an exemplary embodiment, the image generation unit comprises: a video picture queue that is connected to the video decoder and to which the video pixel data is sequentially transferred from the video decoder; a video updater that generates an API signal for activating the standard graphics API in response to the image generation signal; and a pixel data processor that receives the video pixel data from the video picture queue in response to the API signal and processes the video pixel data through the data processing steps of the standard graphics API, thereby generating images from the video pixel data.
In an exemplary embodiment, the pixel data processor comprises: a pixel data requester having a third function caller that calls the GLSurfaceView class of the OpenGL module, which is one of the library modules of the standard graphics API, in response to the API signal, the pixel data requester activating the GLSurfaceView class so that the video pixel data is continuously transferred from the video picture queue in chronological order; and a second hardware driver that processes the video pixel data transferred through the GLSurfaceView class and generates images on the display of the mobile device.
In an exemplary embodiment, the pixel data processor further comprises a frame control module for controlling the frame size of the video pixel data to equal the frame size of the video source data, so that an image having the frame size of the video source data is generated on the display.
In an exemplary embodiment, the standard graphics API comprises OpenGL ES, a modified version of OpenGL for embedded systems, and the frame control module has a fourth function caller for calling a texture-mapping class of OpenGL ES.
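The frame-size control described above amounts to fitting the decoded frame into a target rectangle while keeping the source aspect ratio, which is what a texture-mapping step would use as its destination quad. The following sketch is an assumption about how such a computation could look; it is not code from the patent.

```java
/** Illustrative frame-size control: compute the largest display rectangle that
 *  keeps the source frame's aspect ratio. Method name and the {width, height}
 *  return layout are assumptions for illustration. */
public class FrameControl {
    /** Returns {width, height} of the source frame scaled to fit the display. */
    public static int[] fit(int srcW, int srcH, int dispW, int dispH) {
        // Compare aspect ratios by cross-multiplication to avoid floating point.
        if ((long) srcW * dispH >= (long) srcH * dispW) {
            // Source is relatively wider: match the display width.
            return new int[] { dispW, (int) ((long) dispW * srcH / srcW) };
        } else {
            // Source is relatively taller: match the display height.
            return new int[] { (int) ((long) dispH * srcW / srcH), dispH };
        }
    }
}
```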
In an exemplary embodiment, the pixel data processor further comprises a color-space conversion module for changing the color space of the video pixel data into the color space of OpenGL.
In an exemplary embodiment, the color-space conversion module has a fifth function caller for calling a shader class of OpenGL ES.
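Decoded video pixel data is typically in a YUV color space, while OpenGL works with RGB, so the shader-based conversion mentioned above comes down to per-pixel math like the following. The integer BT.601 video-range coefficients used here are a common convention; treating them as the patent's exact conversion is an assumption.

```java
/** Illustrative color-space conversion: BT.601 video-range YUV to RGB, the
 *  kind of per-pixel math an OpenGL ES shader would perform on the decoder's
 *  YUV pixel data. Class and method names are assumptions. */
public class ColorConvert {
    private static int clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    /** Converts one YUV pixel (video range, Y 16..235) to packed 0xRRGGBB. */
    public static int yuvToRgb(int y, int u, int v) {
        int c = y - 16, d = u - 128, e = v - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return (r << 16) | (g << 8) | b;
    }
}
```

In a real player this arithmetic runs in the fragment shader so the GPU converts every pixel in parallel; the Java form above only makes the formula explicit.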
In an exemplary embodiment, the video decoder is activated by the control unit in the form of a Java thread so that the video source data is continuously converted into the video pixel data while the moving picture is played, and the video updater is activated by the control unit in the form of a Java thread so that the video updater detects the image generation signal and generates the API signal when the image generation signal is produced by the control unit.
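The decoder-as-Java-thread arrangement above can be modeled as a thread that continuously drains a packet queue and emits decoded frames until an end-of-stream marker arrives. The empty-packet sentinel and the pass-through "decode" stand-in are illustrative assumptions, not the patent's implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Illustrative model of a decoder running as a Java thread: it continuously
 *  takes source packets from its queue and emits decoded frames while the
 *  moving picture plays. All names are assumed for illustration. */
public class DecoderThread extends Thread {
    private final BlockingQueue<int[]> packets;
    public final List<int[]> frames = new ArrayList<>();

    public DecoderThread(BlockingQueue<int[]> packets) { this.packets = packets; }

    @Override public void run() {
        try {
            while (true) {
                int[] pkt = packets.take();
                if (pkt.length == 0) break;      // empty packet = end of stream
                frames.add(decode(pkt));
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    /** Stand-in for the libavcodec decoding step: here, just copies the packet. */
    private int[] decode(int[] pkt) { return pkt.clone(); }
}
```

Started with start(), the loop keeps converting source data as long as packets arrive, which mirrors the continuous conversion during playback described above.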
Brief Description of the Drawings
Exemplary embodiments will be understood more clearly from the following detailed description taken in conjunction with the accompanying drawings.
Fig. 1 is a structural view illustrating a data composing unit for composing multimedia data on an Android mobile device according to an exemplary embodiment of the present invention.
Fig. 2 is a flow chart showing a method of composing multimedia data in the data composing unit of the Android mobile device.
Fig. 3 is a flow chart showing the processing steps for selecting the first and second data in the composing method shown in Fig. 2.
Fig. 4 is a flow chart showing the processing steps for generating the first and second audio data and the first and second video data in the composing method shown in Fig. 2.
Fig. 5 is a flow chart showing the processing steps for converting the first and second video pixel data into the first and second video data in the composing method shown in Fig. 2.
Fig. 6 is a structural view illustrating a video player for playing a moving picture on a mobile device controlled by the Android operating system (OS) according to an exemplary embodiment of the present invention.
Fig. 7 is a structural view illustrating the sound generation unit of the video player shown in Fig. 6 according to an exemplary embodiment of the present invention.
Fig. 8 is a structural view illustrating the image generation unit of the video player shown in Fig. 6 according to an exemplary embodiment of the present invention.
Detailed Description of the Embodiments
Hereinafter, various exemplary embodiments will be described more fully with reference to the accompanying drawings, in which some exemplary embodiments are shown. The present invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the present invention to those skilled in the art. In the drawings, the sizes and relative sizes of layers and regions may be exaggerated for clarity.
It will be understood that when an element or layer is referred to as being "on", "connected to" or "coupled to" another element or layer, it can be directly on, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being "directly on", "directly connected to" or "directly coupled to" another element or layer, there are no intervening elements or layers present. Like numerals refer to like elements throughout. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the present invention.
Spatially relative terms, such as "beneath", "below", "lower", "above", "upper" and the like, may be used herein for ease of description to describe one element's or feature's relationship to another element or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, the exemplary term "below" can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
The terminology used herein is for the purpose of describing particular exemplary embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
Exemplary embodiments are described herein with reference to cross-sectional illustrations that are schematic illustrations of idealized exemplary embodiments (and intermediate structures). As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances are to be expected. Thus, exemplary embodiments should not be construed as limited to the particular shapes of regions illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. For example, an implanted region illustrated as a rectangle will typically have rounded or curved features and/or a gradient of implant concentration at its edges rather than a binary change from implanted to non-implanted region. Likewise, a buried region formed by implantation may result in some implantation in the region between the buried region and the surface through which the implantation takes place. Thus, the regions illustrated in the figures are schematic in nature, and their shapes are not intended to illustrate the actual shape of a region of a device and are not intended to limit the scope of the present invention.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hereinafter, exemplary embodiments will be explained in detail with reference to the accompanying drawings.
Fig. 1 is a structural view illustrating a data composing unit for composing multimedia data on an Android mobile device according to an exemplary embodiment of the present invention. Fig. 2 is a flow chart showing a method of composing multimedia data in the data composing unit of the Android mobile device. Fig. 3 is a flow chart showing the processing steps for selecting the first and second data in the composing method shown in Fig. 2. Fig. 4 is a flow chart showing the processing steps for generating the first and second audio data and the first and second video data in the composing method shown in Fig. 2. Fig. 5 is a flow chart showing the processing steps for converting the first and second video pixel data into the first and second video data in the composing method shown in Fig. 2.
Referring to Figs. 1 to 5, a mobile device controlled by the Android operating system (OS) (referred to as an Android mobile device) comprises a mobile application processor (AP) 10, a display 20 and a storage device 30. The mobile AP 10 includes various application programs and systematically controls the component modules of the mobile device. Under the control of the mobile AP 10, the display 20 shows various data on a screen and the storage device 30 stores data.
For example, the mobile AP 10 is provided in the form of a system on a chip (SoC), in which component modules such as a central processing unit (CPU), a graphics processing unit (GPU), communication chips, various sensor chips, a display driver chip and multimedia driver modules are mounted on a single chip. The component modules of the mobile device are systematically interrelated by the mobile AP 10, so that the mobile device operates as a single electronic system.
The display 20 visually shows the data processed by the mobile AP 10. The storage device 30 comprises a read-only memory (ROM) 31 in which various system programs (such as the Android operating system program) and other application programs (such as codec programs for multimedia data) are stored, an auxiliary memory such as a random access memory (RAM) 32 in which the data processed by the mobile AP 10 is temporarily stored, and a main memory 33 in which various external data is stored. The main memory 33 comprises a flash memory in the mobile device and an external memory such as a memory card.
In particular, the application programs for multimedia data include the open-source program FFmpeg, which is linked into the mobile device by using the Android Native Development Kit (NDK). Accordingly, the various open-source programs for multimedia data in FFmpeg can be used to process the multimedia data in the mobile device. For example, FFmpeg includes various libraries, such as libavformat (a video and audio format parser) for multiplexing and demultiplexing (mux/demux) data, libavcodec (a video encoder/decoder) for encoding and decoding data, and libswscale (a video scaler) for changing the format type of pixel data.
The mobile device is provided with a data composing unit 500 for composing multimedia data. For example, the data composing unit 500 comprises: a data separator 520 for separating multimedia data into audio data and video data; a composite data generator 530 that generates composite multimedia data by synchronizing composite video data and composite audio data; and a composition controller 510 that controls the data separator 520 and the composite data generator 530, thereby controlling the data separation of the multimedia data and the data composition for the composite multimedia data. The audio data and the video data processed in the data separator 520 are temporarily stored in a buffer 540, and the composite data generator 530 receives the processed audio data and the processed video data from the buffer 540 for generating the composite multimedia data.
The composition controller 510 runs a composing algorithm stored in the ROM 31 and selects a pair of multimedia data to be composed by the composing algorithm. Accordingly, the first and second data, which are to be partially composed with each other by the composing algorithm, are selected by the composition controller 510 from a group of multimedia digital data in the storage device 30 of the Android mobile device (step S100).
For example, a composing icon displayed on the screen of the display 20 is activated by the composition controller 510, and the composing algorithm is started by activating the composing icon. In the present exemplary embodiment, the composing icon is activated by touching the display screen (step S110). When the composing icon on the screen is activated, a composition control menu is displayed on the screen of the display 20 and the multimedia data in the storage device 30 is listed on the display 20. The user of the mobile device can check all of the multimedia data in the storage device 30 on the composition control menu. A pair of multimedia data is selected from the multimedia data list as the first and second data merely by touching the listed multimedia data files on the display 20 (step S120).
The first and second data include various multimedia digital data produced in the private life of the mobile device user. For example, the first and second data include cultural content (such as films, music and TV dramas that can be downloaded onto the user's mobile device) or daily-life content (such as moving pictures, digital images and digital sound files produced in the private life of the mobile device user).
In particular, the first data includes digital files of films, TV dramas and music generated by a cultural-product supplier, and the second data includes digital files of moving pictures and images generated personally by the mobile device user. Accordingly, the digital data originating from the supplier and the digital data produced by the user are simply composed with each other by the user in the mobile device, which can accelerate the consumption and distribution of cultural content.
In addition, the second data may also include digital files of films, TV dramas, music, moving pictures and images generated by a cultural-product supplier. That is, the digital data generated by one supplier and the digital data generated by another supplier are composed with each other by the user in the mobile device, so that derived cultural content can be generated on the basis of the cultural content generated by the suppliers.
The data extractor 520 processes the first and second data, thereby generating first and second video data and first and second audio data (step S200).
For example, the data extractor 520 includes a data analyzer 521 for separating each of the first and second data into audio data and video data, a video processor 522 for processing the first and second video data, and an audio processor 523 for processing the first and second audio data.
The data analyzer 521 analyzes the first and second data and separates the video data or images and the audio data or sounds from the first and second data according to the respective format types of the first and second data, thereby generating first and second separated video data and first and second separated audio data (step S210).
For example, the data analyzer 521 separates the first and second data into respective video data and audio data by using the libavformat library according to the format type of each of the audio data and the video data. As a result, the first data is separated into the first separated audio data and the first separated video data, and the second data is likewise separated into the second separated audio data and the second separated video data according to its respective format type.
The first and second separated video data are transferred to the video processor 522, and the first and second separated audio data are transferred to the audio processor 523.
In the video processor 522, first and second video pixel data are generated based on the first and second separated video data (step S220), and the first and second video pixel data are converted into the first video data serving as background-removed video data and the second video data serving as background video data, respectively (step S230).
The first and second separated video data are transferred to a decoder 522a of the video processor 522 and are decoded into the first and second video pixel data by using a video decoder of FFmpeg.
Then, the first and second video pixel data are transferred to a pixel processor 522b, and the first and second video pixel data are processed for the composition of the multimedia data. In this exemplary embodiment, the first video pixel data is processed into an image in which the background is removed from the first separated video data, which is referred to as the background-removed video data. In addition, the second video pixel data is processed into an image from which a portion is removed, and the remaining portion of the image is composed with the background-removed video data and serves as the background of the composite multimedia data, which is referred to as the background video data.
For example, the first and second video pixel data are processed in the pixel processor 522b by using a chroma key technique. The chroma key technique composes multimedia data by using an alpha channel that represents a transparent color. When the image of a portion of a motion picture is processed to have the transparent color, only the image of the remaining portion of the motion picture can be visually recognized, because the original colors of the remaining portion still exist. The motion picture whose remaining portion is not processed to be transparent, and thus retains its original colors, may be selected as either the background-removed video data or the background video data. The composition of the background-removed video data or the background video data with another motion picture facilitates the change of the background of a motion picture.
The first and second video pixel data have color channels in the red, green and blue (RGB) color space, and thus have a 3-channel pixel structure in which each channel has a size of 8 bits. Accordingly, each pixel of the first and second video pixel data has a color represented by a mixture of optical red (R), green (G) and blue (B) in a 24-bit data structure. The chroma key technique associates an alpha channel, having the same 8-bit size, with the color channels, thereby converting each of the first and second video pixel data into a 4-channel pixel structure.
The alpha channel represents a transparent color in addition to the red, green and blue colors, and thus the binary code of the alpha channel represents the transparency of each pixel. In this exemplary embodiment, the alpha channel represents 256 (= 2^8) different degrees of pixel transparency according to the combination of the 8 bits of the alpha channel. When every bit of the binary code of the alpha channel is 0, the pixel has an absolutely transparent alpha channel, and thus the color of the color channels of the pixel is fully represented. In contrast, when every bit of the binary code of the alpha channel is 1, the pixel has an absolutely opaque alpha channel, and the alpha channel functions as a veil covering the color of the pixel. In this case, the color of the pixel is not displayed on the display of the mobile device.
The pixel processor 522b merges the first and second video pixel data with the transparent alpha channel or the opaque alpha channel at each pixel. Thus, when the first and second video pixel data have a 3-channel data structure at each pixel, the alpha channel is added to the 3-channel data structure of the pixel. In contrast, when the first and second video pixel data already have a 4-channel data structure including an alpha channel at each pixel, the binary code of the alpha channel is changed in the pixel processor 522b so that the alpha channel of the first and second video pixel data represents the absolutely transparent color (every bit of the binary code is 0) or the absolutely opaque color (every bit of the binary code is 1) at each pixel.
The binary code corresponding to the alpha channel of each pixel of the first and second video pixel data is provided in the form of a chroma key value. For example, the portions of the first and second video pixel data that are to be removed are processed to have the absolutely opaque alpha channel, and the portions of the first and second video pixel data that are to be retained are processed to have the absolutely transparent alpha channel.
Each pixel of the first and second video pixel data has the absolutely transparent or the absolutely opaque alpha channel, and the value of each alpha channel is provided based on the chroma key value. For example, a chroma key setting section is provided in the composition control menu on the display of the mobile device, so that the chroma key values are supplied individually to the first and second data merely by touching the screen of the composition control menu.
In this exemplary embodiment, just after the first data is selected from the multimedia list on the display, a first chroma key value is set by touching the chroma key setting section on the display screen (step S231). Thus, the first alpha channel corresponding to the first chroma key value is associated with each color channel of the first video pixel data, so that a portion of the first video pixel data has the transparent color and the remaining portion of the first video pixel data is formed into the background-removed video data (step S232). The background-removed video data is transferred to the video buffer 541 as the first video data.
Thereafter, after the second data is selected from the multimedia list on the display, a second chroma key value is set by touching the chroma key setting section on the display screen (step S233). Thus, the second alpha channel corresponding to the second chroma key value is associated with each color channel of the second video pixel data, so that a portion of the second video pixel data has the transparent color and the remaining portion of the second video pixel data is formed into the background video data (step S234). The background video data is also transferred to the video buffer 541 as the second video data.
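The alpha-channel assignment of steps S231 to S234 can be sketched in plain Java as follows. The class and method names are illustrative, not from the patent; the sketch follows the convention described above, in which an all-ones alpha veils (removes) a pixel and an all-zeros alpha leaves its color fully visible.

```java
// Minimal sketch of the chroma-key step: pixels whose color matches the chroma
// key value receive the absolutely opaque "veil" alpha (0xFF, removed portion);
// all other pixels receive the absolutely transparent alpha (0x00, retained
// portion). Each pixel is packed as 0xAARRGGBB (4-channel, 8 bits per channel).
public class ChromaKeySketch {
    static int[] applyChromaKey(int[] rgbPixels, int chromaKeyRgb) {
        int[] out = new int[rgbPixels.length];
        for (int i = 0; i < rgbPixels.length; i++) {
            int rgb = rgbPixels[i] & 0x00FFFFFF;             // keep the three color channels
            int alpha = (rgb == chromaKeyRgb) ? 0xFF : 0x00; // veil matching pixels
            out[i] = (alpha << 24) | rgb;                    // add the alpha channel
        }
        return out;
    }
}
```

In a real implementation the comparison would tolerate a range around the key color rather than require an exact match; the exact comparison is used here only to keep the 3-channel to 4-channel conversion visible.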
A pixel association algorithm for associating the color channels with the alpha channel may be provided in the ROM 31 or the mobile AP 10 together with the mobile device, or may be obtained separately in the form of an association program.
Thereafter, the background-removed video data and the background video data are composed with each other by the composite data generator 530, thereby generating a single composite multimedia data, for example, a motion picture in which the first and second data are partially composed with each other.
The audio processor 523 processes the first and second separated audio data, thereby generating first and second audio pulse code modulation (PCM) data based on the first and second separated audio data (step S240).
For example, the first and second separated audio data are decoded into the first and second audio PCM data, which can be directly played by a hardware resource (such as a speaker of the mobile device). For example, the first and second separated audio data are decoded by using an audio decoder of FFmpeg.
The first and second separated audio data are provided selectively according to the first and second data. For example, when one of the first and second data is provided in the form of still image data, the corresponding one of the first and second separated audio data may not be provided in the mobile device.
The first and second audio PCM data are transferred to the audio buffer 542 in the form of the first and second audio data, respectively.
The buffer 540, including the video buffer 541 and the audio buffer 542, is connected to the data extractor 520, so that the video data and the audio data are stored in the buffer 540 independently of each other.
The first and second video data are transferred to the composite data generator 530 and composed with each other. In addition, the first and second audio data are also transferred to the composite data generator 530 and composed with each other.
For example, the composite data generator 530 includes a video composer 531 for composing the first video data and the second video data transferred from the video buffer 541 to thereby form the composite video data, an audio composer 532 for composing the first audio data and the second audio data transferred from the audio buffer 542 to thereby form the composite audio data, and a synchronizer 533 for synchronizing the composite video data and the composite audio data to thereby form the composite multimedia data.
In the video composer 531, the first video data and the second video data are composed with each other, thereby forming the composite video data (step S300). For example, the first video data and the second video data are composed by using an image multiplexing (imagemux) program in a library of FFmpeg.
Because the first video data is provided as the background-removed image and the second video data is provided as the background image, the composite multimedia data is formed in such a way that the background of the first data is replaced by the background of the second data.
Particularly, when the first video pixel data and the second video pixel data have the same pixel size and the same frame size, the first alpha channel and the second alpha channel are complementary to each other. That is, when the first alpha channel of a pixel of the first video pixel data has the absolutely transparent color, the second alpha channel of the corresponding pixel of the second video pixel data has the absolutely opaque color. Therefore, the first data and the second data can easily be processed into the background-removed video data or the background video data merely by exchanging the complementary binary codes of the first alpha channel and the second alpha channel.
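For the special complementary-alpha case described above, the video composition of step S300 reduces to a per-pixel selection, which can be sketched as follows. This is an illustration under the stated assumptions (equal pixel and frame sizes, strictly complementary alphas), not FFmpeg's actual multiplexing code.

```java
// Sketch of composing background-removed video data (foreground) with
// background video data when their alpha channels are complementary: each
// output pixel is taken from whichever frame is not veiled. Per the patent's
// convention, alpha 0x00 means "retained/visible" and 0xFF means "veiled".
public class ComplementaryComposer {
    static int[] compose(int[] foreground, int[] background) {
        int[] out = new int[foreground.length];
        for (int i = 0; i < foreground.length; i++) {
            int fgAlpha = (foreground[i] >>> 24) & 0xFF;
            // A transparent foreground alpha shows the foreground pixel;
            // otherwise the complementary background pixel shows through.
            out[i] = (fgAlpha == 0x00) ? foreground[i] : background[i];
        }
        return out;
    }
}
```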
The audio composer 532 selects the composite audio data from among the first audio data, the second audio data, and at least one mixed data including the first and second audio data (step S400).
Thus, the composite audio data includes one of the first audio data and the second audio data, or includes mixed audio data of the first audio data and the second audio data. In addition, the composite audio data may include mixed data of the first audio data, the second audio data and a third audio data. In this case, the third audio data may be provided to the audio buffer 542 independently of the first and second audio data.
The synchronizer 533 synchronizes the composite video data and the composite audio data with each other, thereby forming the composite multimedia data (step S500). For example, time tags may be added to the composite video data and the composite audio data, and the composite video data and the composite audio data may be synchronized with each other in chronological order based on the time tags. A synchronization program in a library of FFmpeg or an external synchronization program may be used for the synchronization of the composite video data and the composite audio data.
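The chronological ordering part of step S500 can be sketched as a merge of two time-tagged packet streams. A real synchronizer would also pace playback against a clock; the names below are illustrative and only the ordering step is shown.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of time-tag based synchronization: video and audio packets, each
// carrying a time tag, are merged into a single chronologically ordered
// stream. Both input lists are assumed to be already sorted by time tag.
public class TimeTagSync {
    static class Packet {
        final long timeTag;
        final String kind;
        Packet(long timeTag, String kind) { this.timeTag = timeTag; this.kind = kind; }
    }

    static List<Packet> interleave(List<Packet> video, List<Packet> audio) {
        List<Packet> out = new ArrayList<>();
        int v = 0, a = 0;
        while (v < video.size() || a < audio.size()) {
            // Take the earliest remaining packet; ties favor video arbitrarily.
            boolean takeVideo = a >= audio.size()
                || (v < video.size() && video.get(v).timeTag <= audio.get(a).timeTag);
            out.add(takeVideo ? video.get(v++) : audio.get(a++));
        }
        return out;
    }
}
```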
According to the above method of composing multimedia data, a pair of multimedia data can easily be composed with each other in an Android mobile device. Particularly, the composition of the multimedia data can be performed by using FFmpeg, an open-source and free multimedia control program, and its various libraries, which are accessed in the mobile device via the Android NDK, so that no additional composition process is needed in the mobile device.
In addition, a first motion picture (such as a movie, music or a television drama) downloaded into the Android mobile device can easily be composed with a second motion picture (such as personal multimedia data of the mobile device user, for example digital images of a photographed place), thereby promoting the active consumption of cultural products. In addition, a cultural product generated by a supplier (such as a movie, a television drama or music) can easily be composed with digital content originating from the user, thereby accelerating the consumption of both a first cultural product and a second cultural product derived from the first cultural product.
Hereinafter, a video player for playing the composite multimedia data in an Android mobile device will be described in detail. The ffplay program, which is one of the programs in the library of FFmpeg for playing motion pictures, requires the Simple DirectMedia Layer (SDL), which is a cross-platform library for multimedia data and is used for operating ffplay. However, because the Android Native Development Kit (NDK) does not include the Simple DirectMedia Layer (SDL), the conventional ffplay cannot run in an Android mobile device.
Fig. 6 is a structural view illustrating a video player for playing motion pictures in a mobile device controlled by the Android operating system (OS) according to an exemplary embodiment of the present invention.
Referring to Fig. 6, the video player 2000 (hereinafter referred to as the Android video player) for playing motion pictures in a mobile device controlled by the Android operating system includes a motion picture operator D controlled by a control unit 1100, and a hardware resource H connected to the motion picture operator D and physically playing the motion picture. The motion picture operator D includes a data storage unit 1200, a decoding unit 1300, a sound generation unit 1400 and an image generation unit 1500.
The control unit 1100 controls the hardware resource H and the motion picture operator D, thereby playing a motion picture stored in the memory device of the mobile device. The control unit 1100 includes a motion picture driving engine, and the FFmpeg program, an open-source multimedia control program, is linked into the control unit 1100 by using the Android Native Development Kit (NDK).
The control unit 1100 includes a selection window displayed on the display of the mobile device, and the motion picture to be played is selected on the selection window. By selecting the motion picture from the selection window, the control unit is automatically driven to operate the hardware resource H and the motion picture operator D, so that the motion picture is played in the mobile device.
The data storage unit 1200 separates the motion picture into audio source data and video source data, and stores the audio source data and the video source data separately.
For example, the data storage unit 1200 includes a source library 1210 in which the source motion picture is stored, a source reader 1220 for detecting the format types of the sound and image signals of the motion picture and separating the motion picture into the audio source data and the video source data, an audio packet queue 1230 in which the audio source data is stored in chronological order in predetermined audio packets, and a video packet queue 1240 in which the video source data is stored in chronological order in predetermined video packets.
The source library 1210 includes the memory device of the mobile device, and thus an auxiliary storage of the mobile device or an external memory device (such as a memory card) may be used as the source library 1210. The source motion picture includes the composite multimedia data formed in the data composing unit shown in Fig. 1. Particularly, although the motion picture includes a moving graphic character (such as an avatar) and thus the data capacity of the motion picture, which includes an alpha channel for the transparent color, is relatively large, the video player 2000 can play the large-capacity motion picture sufficiently well.
The source reader 1220 detects the format types of the sound and image signals of the motion picture by using the libavformat program, which is one of the programs in the library of FFmpeg, and separates the sound signal and the image signal into the audio source data and the video source data.
Then, the audio source data is stored into the audio packet queue 1230 in chronological order in predetermined audio packets, and the video source data is stored into the video packet queue 1240 in chronological order in predetermined video packets.
The decoding unit 1300 converts the audio source data and the video source data into audio pulse code modulation (PCM) data and video pixel data. For example, the decoding unit 1300 includes an audio decoder 1310, which is connected to the audio packet queue 1230 and decodes the audio source data into the audio PCM data, and a video decoder 1320, which is connected to the video packet queue 1240 and decodes the video source data into the video pixel data. In addition, the decoding unit 1300 includes a codec library 1330 in which codec programs of the library of FFmpeg are stored.
The audio PCM data is digital data representing a sampled analog sound signal in binary code. The audio PCM data is easily converted into analog data by a digital-to-analog converter (DAC), and the analog data is then played by a hardware resource (such as a speaker).
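The sampling-and-quantization relationship behind "audio PCM data" can be sketched as follows. The 16-bit signed depth is a common choice, used here only for illustration; the patent does not fix a bit depth for the PCM data.

```java
// Sketch of PCM encoding: each analog sample (here a double in [-1.0, 1.0])
// is quantized to a signed 16-bit binary code, which is the form a hardware
// resource such as a speaker path can play directly after a DAC.
public class PcmSketch {
    static short[] toPcm16(double[] samples) {
        short[] pcm = new short[samples.length];
        for (int i = 0; i < samples.length; i++) {
            // Clamp to the valid analog range, then scale to 16-bit signed.
            double s = Math.max(-1.0, Math.min(1.0, samples[i]));
            pcm[i] = (short) Math.round(s * 32767.0);
        }
        return pcm;
    }
}
```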
The video pixel data is digital data representing, in binary code, the analog motion picture sampled in pixel units, and the video pixel data is injected into each pixel of the mobile device display. Then, the video pixel data is displayed on the display screen in the form of a motion picture. For example, the video pixel data includes color channels, in which three 8-bit channels represent the color of each pixel in the RGB color space, and an alpha channel, in which an 8-bit channel represents the transparent color of each pixel, so that the video pixel data has a 4-channel structure and represents a 32-bit image.
The sound generation unit 1400 is connected to the audio decoder 1310 of the decoding unit 1300, and generates sound based on the audio PCM data by using the AudioTrack class of the Java programming language in response to a sound generation signal of the control unit 1100.
Fig. 7 is a structural view illustrating the sound generation unit of the video player shown in Fig. 6 according to an exemplary embodiment of the present invention.
Referring to Fig. 7, the sound generation unit 1400 includes a PCM data transmitter 1410 having a first function caller 1411 for calling the AudioTrack class in response to the sound generation signal, and a first hardware driver 1420 having a second function caller 1421 for calling a hardware driving module linked into the mobile device through the Java Native Interface (JNI) of the Android NDK. The PCM data transmitter 1410 continuously and sequentially transmits the audio PCM data from the audio decoder by using the AudioTrack class. The first hardware driver 1420 drives the hardware resource for sound by using the hardware driving module, thereby generating sound according to the audio PCM data.
When the control unit 1100 generates the sound generation signal, the AudioTrack class is called from a Java class group C, and the AudioTrack class requests the audio PCM data from the audio decoder 1310. When a data packet of the audio PCM data is transferred to the AudioTrack class, the AudioTrack class transmits the data packet of the audio PCM data to AudioFlinger, which is the hardware driving module linked into a native code group (NCG) of the control unit through the JNI (Java Native Interface) of the Android NDK. AudioFlinger drives the hardware resources for sound (such as a speaker or an earphone) according to the transmitted audio PCM data.
The conventional ffplay program uses a callback interface of the SDL library for playing motion pictures to transmit the audio PCM data. The sound generation signal calls the callback interface, and it is sufficient for playing the sound that the callback interface requests the audio PCM data from the audio decoder. In this case, the audio decoder 1310 decodes the audio source data into as much audio PCM data as the requested amount, and transmits the requested amount of the audio PCM data at a time by using the callback interface.
However, the ffplay of the present invention uses the AudioTrack class instead of the callback interface for transmitting the audio PCM data, and thus the transmission of the audio PCM data must be continuous and consistent. For this reason, the audio decoder is activated by the control unit 1100 in the form of a Java thread, so that the audio source data is continuously converted into the audio PCM data while the motion picture is played. That is, whereas in the conventional ffplay the decoding of the audio PCM data from the audio source data is performed discontinuously, only when the callback interface requests the audio PCM data, in the mobile player of the present invention using ffplay the decoding from the audio source data to the audio PCM data is performed continuously and consistently.
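The decoder-as-Java-thread arrangement described above can be sketched with a producer thread feeding a bounded queue that the playback side drains. The "decoding" below is a stand-in that only emits packet indices; a real decoder thread would call into FFmpeg, and the Android AudioTrack API is deliberately not used here so the sketch stays self-contained.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of continuous decoding: instead of producing PCM only when a
// callback asks for it, a decoder thread keeps filling a queue, and the
// playback side consumes packets continuously in order. The bounded queue
// paces the decoder so it cannot run arbitrarily far ahead of playback.
public class ContinuousDecoderSketch {
    static int[] decodeAll(int packetCount) {
        BlockingQueue<Integer> pcmQueue = new ArrayBlockingQueue<>(4);
        Thread decoder = new Thread(() -> {
            for (int i = 0; i < packetCount; i++) {
                try {
                    pcmQueue.put(i); // blocks while the queue is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        decoder.start();
        int[] played = new int[packetCount];
        try {
            for (int i = 0; i < packetCount; i++) {
                played[i] = pcmQueue.take(); // playback side drains continuously
            }
            decoder.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return played;
    }
}
```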
The image generation unit 1500 is connected to the video decoder 1320 of the decoding unit 1300, and generates an image based on the video pixel data in response to an image generation signal of the control unit 1100 by using a graphics standard application programming interface (API) connected to the Android operating system through a GLUE routine.
Fig. 8 is a structural view illustrating the image generation unit of the video player shown in Fig. 6 according to an exemplary embodiment of the present invention.
Referring to Fig. 8, the image generation unit 1500 includes a video picture queue 1510 into which the video pixel data is sequentially transferred from the video decoder 1320, a video updater 1520 for generating an API signal for activating the graphics standard API in response to the image generation signal, and a pixel data processor 1530 for receiving the video pixel data from the video picture queue 1510 in response to the API signal and generating an image according to the video pixel data by processing the video pixel data through the data processing steps of the graphics standard API.
The video picture queue 1510 includes a buffer memory in which the video pixel data is temporarily stacked. For example, an auxiliary memory device of the mobile device is used for the video picture queue 1510. A video packet size and a time tag are assigned to the video pixel data according to the format type of the video source data and the video pixel data, and the video pixel data is temporarily stacked in the auxiliary memory device according to the video packet size and the time tag.
The video updater 1520 generates the API signal for activating the graphics standard API in response to the image generation signal. The graphics standard API is linked into the mobile device by using the Android Native Development Kit (NDK).
The video updater 1520 is activated by the control unit 1100 in the form of a Java thread, so that when the image generation signal is generated from the control unit, the video updater 1520 detects the image generation signal and generates the API signal.
The pixel data processor 1530 includes a pixel data requester 1532 having a third function caller 1531 for calling the GLSurfaceView class of an OpenGL module (a library module in the graphics standard API) in response to the API signal, and a second hardware driver 1534 for processing the video pixel data transmitted by the GLSurfaceView class and generating an image on the display of the mobile device. The pixel data requester activates the GLSurfaceView class so as to continuously transmit the video pixel data from the video picture queue 1510 in chronological order.
For example, the video pixel data is displayed on a hardware resource (such as the display of the mobile device) by using OpenGL (Open Graphics Library) of the API and OpenGL ES (OpenGL for Embedded Systems). OpenGL ES is a modified version of OpenGL for embedded systems. According to the conventional ffplay, the video pixel data is transferred to the SDL library, and the hardware driving module is controlled by the SDL library according to the transferred video pixel data. In contrast, the video player 2000 of the present invention displays the image of the video pixel data by using OpenGL or OpenGL ES of the graphics standard API.
For example, the pixel data requester 1532 calls the GLSurfaceView class of OpenGL through the third function caller 1531 in response to the API signal. Particularly, the function GLSurfaceView.RequestRender() requests the video pixel data from the video picture queue 1510, and thus the video pixel data from the video picture queue 1510 is transferred to the second hardware driver 1534 in the form of GLSurfaceView.RequestRender(pixel data).
Then, the video pixel data is transferred to the function GLSurfaceView.Render.onDraw() in the second hardware driver 1534. The function GLSurfaceView.Render.onDraw() directly accesses the hardware resource H (such as the display of the Android mobile device) and displays the video pixel data on the display of the mobile device. That is, the image of the video pixel data is displayed by using the GLSurfaceView class of OpenGL or OpenGL ES of the API instead of the SDL library.
In Fig. 8, reference numeral 1533 designates a function caller for calling a class from the API in the second hardware driver 1534. In this exemplary embodiment, the function caller 1533 is substantially the same as the third function caller 1531, because the GLSurfaceView class is called from the API in both function callers.
The pixel data processor 1530 further includes a frame control module 1536 for controlling the frame size of the video pixel data to be equal to the frame size of the video source data, so that the image is generated on the display with the frame size of the video source data. In addition, the pixel data processor 1530 further includes a color space conversion module 1538 for converting the color space of the video pixel data, thereby converting the color space of the video pixel data into the color space of OpenGL.
The function GLSurfaceView.Render.onDraw() displays the image of the video pixel data on the screen in a frame larger than that of the video source data, so that the frame of the video pixel data needs to be reduced to the frame of the video source data before the image of the video pixel data is displayed on the screen.
For example, the frame control module 1536 has a fourth function caller 1537 for calling a texture mapping class of OpenGL ES, so that the frame size of the video pixel data is adjusted by the texture mapping class.
When the video pixel data is transferred to the texture mapping class, the texture mapping class prepares a background texture larger than the data size of the video pixel data and copies the video pixel data into the background texture. Thereafter, the size of the background texture combined with the video pixel data is resized to the frame size of the video source data, and the resized texture is displayed on the display screen.
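The frame-size control performed by the frame control module 1536 can be illustrated with a simple nearest-neighbor resampling in place of the GPU texture-mapping path. This sketch shows only the resizing idea, not OpenGL ES texture mapping itself; all names are illustrative.

```java
// Sketch of frame-size control: a source pixel array of one frame size is
// resampled to the target frame size, here with nearest-neighbor selection.
// Pixels are addressed in row-major order, width * y + x.
public class FrameResizeSketch {
    static int[] resize(int[] src, int srcW, int srcH, int dstW, int dstH) {
        int[] dst = new int[dstW * dstH];
        for (int y = 0; y < dstH; y++) {
            int sy = y * srcH / dstH;           // nearest source row
            for (int x = 0; x < dstW; x++) {
                int sx = x * srcW / dstW;       // nearest source column
                dst[y * dstW + x] = src[sy * srcW + sx];
            }
        }
        return dst;
    }
}
```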
When the video source data is decoded into video pixel data in the YUV color space, there is a problem that OpenGL or OpenGL ES cannot access the YUV pixel data. In this case, the YUV pixel data needs to be converted into RGB pixel data represented in the RGB color space.
However, when the YUV pixel data is converted into RGB pixel data by using the fast video scaler (libswscale), which is the color space conversion library of FFmpeg, the mobile application processor AP is excessively overloaded by the color space conversion, which causes an operation failure of the mobile device.
For example, the color space conversion module 1538 has a fifth function caller 1539 for calling a shader class of OpenGL ES. Thus, the YUV pixel data is sufficiently converted into the RGB pixel data without the shader class of OpenGL ES imposing any overload on the mobile application processor of the mobile device.
Therefore, the GLSurfaceView class can directly access the video pixel data in the RGB color space and display the image of the pixel data on the display of the mobile device. Particularly, the fifth function caller 1539 calls a fragment shader class for converting the YUV pixel data into RGB data at each pixel.
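The per-pixel arithmetic that such a fragment shader performs on the GPU can be sketched in Java as follows. The BT.601 full-range coefficients used here are an assumption, since the patent does not specify which YUV variant the decoder produces.

```java
// Sketch of per-pixel YUV-to-RGB conversion (BT.601 full-range, assumed):
// this is the arithmetic a fragment shader would run on the GPU instead of
// burdening the application processor with libswscale.
public class YuvToRgbSketch {
    static int[] yuvToRgb(int y, int u, int v) {
        double r = y + 1.402 * (v - 128);
        double g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128);
        double b = y + 1.772 * (u - 128);
        return new int[] { clamp(r), clamp(g), clamp(b) };
    }
    static int clamp(double c) {
        // Constrain each 8-bit channel to [0, 255].
        return (int) Math.max(0, Math.min(255, Math.round(c)));
    }
}
```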
In this case, the video decoder 1320 is activated by the control unit 1100 in the form of a Java thread, so that the video source data is continuously converted into the video pixel data while the motion picture is played.
According to the video player of this exemplary embodiment, the motion picture is played not by using the SDL library but by using the graphics standard API and the Java standard classes in the ffplay linked into the Android mobile device.
According to the exemplary embodiments of the method of composing multimedia data and the video player for playing the composite multimedia data in an Android mobile device, multimedia data can easily be composed in an Android mobile device in which the FFmpeg program, an open-source and free program for controlling multimedia data, is accessed via the Android NDK, and the composite multimedia data can be displayed on the screen of the Android mobile device by using the graphics standard API.
Particularly, first cultural products (such as movies, music and television dramas) and various derivative cultural products derived from the first cultural products (such as chroma-key motion pictures and moving picture characters, e.g., avatars) can easily be composed with personal multimedia data generated by each individual user of the mobile device by using the above-described method of composing multimedia data. In addition, the composite multimedia data is displayed sufficiently well in the same mobile device. Therefore, cultural products can be actively composed and modified by individual consumers, thereby accelerating the active consumption of the cultural products. In addition, the individual composition of the first and second cultural products with personal multimedia data can promote the consumption and distribution of cultural products as a whole.
The foregoing is illustrative of exemplary embodiments and is not to be construed as limiting thereof. Although a few exemplary embodiments have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the present invention. Accordingly, all such modifications are intended to be included within the scope of the present invention as defined in the claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function, and not only structural equivalents but also equivalent structures. Therefore, it is to be understood that the foregoing is illustrative of various exemplary embodiments and is not to be construed as limited to the specific exemplary embodiments disclosed, and that modifications to the disclosed exemplary embodiments, as well as other exemplary embodiments, are intended to be included within the scope of the appended claims.

Claims (20)

1. A method of compositing multimedia data in an Android mobile device, the method comprising:
selecting first data and second data from a group of multimedia digital data stored in the Android mobile device operated by an Android operating system;
generating first video data, second video data, first audio data and second audio data by processing the first and second data;
compositing the first video data with the second video data, thereby forming composite video data;
selecting composite audio data from one of the first audio data, the second audio data, and mixed data including at least one of the first and second audio data; and
synchronizing the composite video data with the composite audio data, thereby forming the composite multimedia data.
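Claim 1 leaves open how the "mixed data including at least one of the first and second audio data" is produced. A minimal sketch of one common approach, assuming 16-bit PCM samples and summing with clipping; the class and method names are illustrative and not taken from the patent:

```java
// Illustrative PCM mixer for the "mixed data" option in claim 1.
// Assumes interleaved 16-bit PCM; the patent does not specify a mixing algorithm.
public class PcmMixer {
    /** Mixes two 16-bit PCM buffers by summing and clipping to the signed 16-bit range. */
    public static short[] mix(short[] first, short[] second) {
        int n = Math.max(first.length, second.length);
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            // A missing sample (shorter stream) contributes silence.
            int a = i < first.length ? first[i] : 0;
            int b = i < second.length ? second[i] : 0;
            int sum = a + b;
            // Clip instead of letting the sum wrap around, which would distort badly.
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (short) sum;
        }
        return out;
    }
}
```

Summing with clipping is the simplest choice; a production mixer would more likely attenuate each stream before summing to avoid constant clipping on loud material.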
2. The method of claim 1, wherein selecting the first data and the second data comprises:
activating a control icon on a display of the Android mobile device; and
touching an indicator screen on which the multimedia digital data are listed.
3. The method of claim 1, wherein generating the first video data, the second video data, the first audio data and the second audio data comprises:
separating an image signal and a sound signal from each of the first data and the second data according to the respective format types of the first and second data, thereby generating first and second separated video data and first and second separated audio data;
generating first and second video pixel data from the first and second separated video data;
processing the first and second video pixel data by a chroma key technique, thereby converting the first video pixel data into the first video data as background-removed video data and converting the second video pixel data into the second video data as background video data; and
generating first and second audio pulse code modulation (PCM) data from the first and second separated audio data.
4. The method of claim 3, wherein processing the first and second video pixel data by the chroma key technique comprises:
setting a first chroma key value;
relating a color channel of the first video pixel data to a first alpha channel corresponding to the first chroma key value, such that a portion of the first video pixel data has a transparent color and the remaining first video pixel data is formed into the background-removed video data;
setting a second chroma key value; and
relating a color channel of the second video pixel data to a second alpha channel corresponding to the second chroma key value, such that a portion of the second video pixel data has a transparent color and the remaining second video pixel data is formed into the background video data.
5. The method of claim 4, wherein the first video pixel data and the second video pixel data have the same pixel size and frame size, and the first alpha channel and the second alpha channel are complementary to each other.
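Claims 4 and 5 describe deriving an alpha channel from a chroma key value and compositing two equal-size frames whose alpha channels are complementary. A minimal sketch under the simplifying assumptions of 32-bit ARGB pixels and an exact-match key (real chroma keying compares color distance against a threshold); all names here are illustrative, not from the patent:

```java
// Illustrative chroma-key processing and compositing for 32-bit ARGB pixel arrays.
// Exact-match keying is a simplification; production keying uses a color-distance threshold.
public class ChromaKeyCompositor {
    /** Sets alpha to 0x00 where a pixel's RGB equals the chroma key value, 0xFF elsewhere. */
    public static int[] applyKey(int[] argb, int keyRgb) {
        int[] out = new int[argb.length];
        for (int i = 0; i < argb.length; i++) {
            int rgb = argb[i] & 0x00FFFFFF;
            int alpha = (rgb == keyRgb) ? 0x00 : 0xFF;
            out[i] = (alpha << 24) | rgb;
        }
        return out;
    }

    /** Composites foreground over background; both frames must share pixel and frame size (claim 5). */
    public static int[] composite(int[] foreground, int[] background) {
        int[] out = new int[foreground.length];
        for (int i = 0; i < foreground.length; i++) {
            // Opaque foreground pixels win; transparent ones expose the background,
            // which is what the complementary alpha channels of claim 5 guarantee.
            out[i] = ((foreground[i] >>> 24) != 0) ? foreground[i] : background[i];
        }
        return out;
    }
}
```

With hard (0 or 255) complementary alphas this reduces to a per-pixel selection; soft-edged keys would instead blend the two frames by the alpha value.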
6. The method of claim 4, wherein the compositing of the first video data with the second video data is performed by an imagemux program, which is one of the libraries of FFmpeg.
7. The method of claim 1, wherein the first data includes digital files of films, TV dramas and music generated by a cultural product provider, and the second data includes digital files of moving pictures and images generated by an individual user of the mobile device.
8. The method of claim 1, wherein the first data includes digital files of films, TV dramas and music generated by a cultural product provider, and the second data includes digital files of films, TV dramas, music, moving pictures and images generated by the cultural product provider.
9. An Android video player for playing a moving picture in a mobile device controlled by an Android operating system (OS), the video player comprising:
a control unit, into which an FFmpeg program, as an open-source multimedia control program, is accessed by using a native development kit (NDK) of Android, the control unit controlling hardware resources of the Android mobile device to play the moving picture;
a data storage unit separating the moving picture into audio source data and video source data and storing the audio source data and the video source data separately;
a decoding unit converting the audio source data and the video source data into audio pulse code modulation (PCM) data and video pixel data, respectively;
a sound generating unit connected to the decoding unit and generating a sound from the audio PCM data by using an AudioTrack class of the Java programming language, in response to a sound generating signal of the control unit; and
an image generating unit connected to the decoding unit and generating an image from the video pixel data by using a standard graphics application programming interface (API) that is connected to the Android operating system through a GLUE routine, in response to an image generating signal of the control unit.
10. The Android video player of claim 9, wherein the data storage unit comprises an audio packet queue to which the audio source data is stored in chronological order by predetermined audio packets, and a video packet queue to which the video source data is stored in chronological order by predetermined video packets; and
the decoding unit comprises an audio decoder connected to the audio packet queue and decoding the audio source data into the audio PCM data, and a video decoder connected to the video packet queue and decoding the video source data into the video pixel data.
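The per-stream packet queues of claim 10 can be modeled with ordinary FIFO queues that preserve chronological (demuxing) order until a decoder consumes the packets. A minimal sketch, assuming each demuxed packet carries a presentation timestamp and an encoded payload; the class and field names are illustrative, not from the patent:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative audio/video packet queues for claim 10: the demuxer pushes
// packets per stream in chronological order; decoders pop them in FIFO order.
public class PacketQueues {
    /** A demuxed packet: presentation timestamp plus encoded payload (names hypothetical). */
    public static class Packet {
        public final long ptsMillis;
        public final byte[] payload;
        public Packet(long ptsMillis, byte[] payload) {
            this.ptsMillis = ptsMillis;
            this.payload = payload;
        }
    }

    private final Queue<Packet> audioQueue = new ArrayDeque<>();
    private final Queue<Packet> videoQueue = new ArrayDeque<>();

    public void pushAudio(Packet p) { audioQueue.add(p); }
    public void pushVideo(Packet p) { videoQueue.add(p); }

    /** Returns the next audio packet in chronological order, or null if the queue is empty. */
    public Packet nextAudio() { return audioQueue.poll(); }

    /** Returns the next video packet in chronological order, or null if the queue is empty. */
    public Packet nextVideo() { return videoQueue.poll(); }
}
```

In the player described by claims 13 and 20, the decoders run as Java threads, so the real queues would need to be thread-safe (e.g., blocking queues) rather than plain ArrayDeques.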
11. The Android video player of claim 10, wherein the data storage unit further comprises a source reader detecting the format types of the audio signal and the image signal of the moving picture by using a libavformat program, which is one of the libraries of FFmpeg, and the decoding unit further comprises a codec library in which codec programs of the libraries of FFmpeg are stored.
12. The Android video player of claim 10, wherein the sound generating unit comprises:
a PCM data transmitter having a first function call section for calling the AudioTrack class in response to the sound generating signal, the audio PCM data being transferred continuously and sequentially from the audio decoder by the PCM data transmitter using the AudioTrack class; and
a first hardware driver having a second function call section for calling a hardware driving module that is accessed to the mobile device through a Java native interface (JNI) of the native development kit (NDK) of Android, the first hardware driver driving the hardware resources for sound by using the hardware driving module, thereby generating the sound according to the audio PCM data.
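Claim 12's PCM data transmitter continuously and sequentially hands decoded PCM buffers from the audio decoder to the audio output path. Outside Android, that hand-off can be sketched with a bounded buffer standing in for AudioTrack.write(); the queue-based design and all names here are illustrative, not from the patent, and the "sink" merely counts samples:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative decoder-to-sink hand-off for claim 12: the decoder thread
// offers PCM buffers; the transmitter drains them in order, as successive
// AudioTrack.write() calls would on Android. The stand-in sink counts samples.
public class PcmTransmitter {
    // Bounded, so a slow sink eventually back-pressures the decoder.
    private final BlockingQueue<short[]> pcmBuffers = new ArrayBlockingQueue<>(8);
    private long samplesDelivered = 0;

    /** Queues one decoded PCM buffer; returns false if the buffer queue is full. */
    public boolean offer(short[] buffer) { return pcmBuffers.offer(buffer); }

    /** Drains the oldest buffer to the stand-in sink; returns samples written (0 if empty). */
    public int drainOne() {
        short[] buffer = pcmBuffers.poll();
        if (buffer == null) return 0;
        samplesDelivered += buffer.length;
        return buffer.length;
    }

    public long totalSamples() { return samplesDelivered; }
}
```

On Android the blocking happens inside AudioTrack.write() itself when its internal buffer is full, which is what keeps the decoder and the audio hardware in step.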
13. The Android video player of claim 12, wherein the audio decoder is activated by the control unit in the form of a Java thread, so that the audio source data is continuously converted into the audio PCM data while the moving picture is played.
14. The Android video player of claim 10, wherein the image generating unit comprises:
a video picture queue connected to the video decoder, the video pixel data being sequentially transferred from the video decoder to the video picture queue;
a video updater generating an API signal for activating the standard graphics API in response to the image generating signal; and
a pixel data processor receiving the video pixel data from the video picture queue in response to the API signal and processing the video pixel data through the data processing steps of the standard graphics API, thereby generating the image according to the video pixel data.
15. The Android video player of claim 14, wherein the pixel data processor comprises:
a pixel data requester having a third function call section for calling a GLSurfaceView class of an OpenGL module, which is one of the library modules of the standard graphics API, in response to the API signal, the pixel data requester activating the GLSurfaceView class so that the video pixel data is continuously transferred from the video picture queue in chronological order; and
a second hardware driver processing the video pixel data transferred by the GLSurfaceView class and generating the image on a display of the mobile device.
16. The Android video player of claim 15, wherein the pixel data processor further comprises a frame control module for controlling the frame size of the video pixel data to be equal to the frame size of the video source data, so that the image is generated on the display with the frame size of the video source data.
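The frame control module of claim 16 resamples the decoded frame to the target frame size. Per claim 17, the player does this on the GPU through OpenGL ES texture mapping; the CPU-side nearest-neighbor sketch below only illustrates the resampling itself, and its names are hypothetical:

```java
// Illustrative nearest-neighbor frame resize: each destination pixel maps back
// to the nearest source pixel. Claim 17 indicates the real player achieves the
// equivalent effect on the GPU via OpenGL ES texture mapping.
public class FrameScaler {
    public static int[] resize(int[] src, int srcW, int srcH, int dstW, int dstH) {
        int[] dst = new int[dstW * dstH];
        for (int y = 0; y < dstH; y++) {
            int sy = y * srcH / dstH;           // nearest source row
            for (int x = 0; x < dstW; x++) {
                int sx = x * srcW / dstW;       // nearest source column
                dst[y * dstW + x] = src[sy * srcW + sx];
            }
        }
        return dst;
    }
}
```

With texture mapping, the same mapping falls out of the texture coordinates assigned to the quad's corners, and the GPU's sampler (typically bilinear) does the interpolation.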
17. The Android video player of claim 16, wherein the standard graphics API includes OpenGL ES, which is a modified version of OpenGL for embedded systems, and the frame control module has a fourth function call section for calling a texture mapping class of the OpenGL ES.
18. The Android video player of claim 15, wherein the pixel data processor further comprises a color space converting module for converting the color space of the video pixel data into the color space of the OpenGL.
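The color space converting module of claim 18 typically maps decoder output (commonly a YUV format) into OpenGL's RGB color space, and claim 19 indicates this is done by a shader. A CPU sketch of the per-pixel arithmetic such a fragment shader would perform, assuming BT.601 full-range coefficients (the patent does not name a conversion matrix, so this choice is an assumption):

```java
// Illustrative YUV -> RGB conversion (BT.601 full-range): the per-pixel math a
// fragment shader would perform per claim 19. The coefficient set and range
// are assumptions; the patent does not specify them.
public class YuvToRgb {
    /** Converts one pixel; y, u, v in [0, 255], returns {r, g, b} in [0, 255]. */
    public static int[] convert(int y, int u, int v) {
        double yd = y, ud = u - 128.0, vd = v - 128.0;  // center chroma at zero
        int r = clamp((int) Math.round(yd + 1.402 * vd));
        int g = clamp((int) Math.round(yd - 0.344136 * ud - 0.714136 * vd));
        int b = clamp((int) Math.round(yd + 1.772 * ud));
        return new int[] {r, g, b};
    }

    private static int clamp(int c) { return Math.max(0, Math.min(255, c)); }
}
```

Doing this in a shader rather than on the CPU avoids a full-frame conversion pass per displayed frame, which is why the shader class call of claim 19 matters on a mobile device.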
19. The Android video player of claim 18, wherein the color space converting module has a fifth function call section for calling a shader class of the OpenGL ES.
20. The Android video player of claim 15, wherein the video decoder is activated by the control unit in the form of a Java thread, so that the video source data is continuously converted into the video pixel data while the moving picture is played, and the video updater is activated by the control unit in the form of a Java thread, so that the video updater detects the image generating signal from the control unit and generates the API signal in response to the image generating signal.
CN201410345468.4A 2014-06-24 2014-07-18 Method of composing multimedia data and video player for playing moving pictures in an android operating system Pending CN105245795A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2014-0077502 2014-06-24
KR1020140077502A KR101577012B1 (en) 2014-06-24 2014-06-24 Method of composing multimedia data and video player for playing moving pictures in an android operating system

Publications (1)

Publication Number Publication Date
CN105245795A true CN105245795A (en) 2016-01-13

Family

ID=55020785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410345468.4A Pending CN105245795A (en) 2014-06-24 2014-07-18 Method of composing multimedia data and video player for playing moving pictures in an android operating system

Country Status (2)

Country Link
KR (1) KR101577012B1 (en)
CN (1) CN105245795A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109729417A (en) * 2019-03-28 2019-05-07 深圳市酷开网络科技有限公司 A kind of video-see play handling method, smart television and storage medium
CN118013901A (en) * 2024-04-10 2024-05-10 芯动微电子科技(武汉)有限公司 Prototype verification system and method for image signal processor

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101898208B1 (en) * 2017-05-10 2018-09-12 주식회사 곰앤컴퍼니 Method of selecting intermediate advertisement positions in video contents
KR102340963B1 (en) * 2021-02-05 2021-12-20 주식회사 파인드커넥트 Method and Apparatus for Producing Video Based on Artificial Intelligence
KR20240062268A (en) * 2022-10-28 2024-05-09 주식회사 엘지유플러스 Graphic user interface providing method and apparatus for home menu on iptv or ott application

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102857833A (en) * 2012-10-15 2013-01-02 深圳市佳创软件有限公司 Audio decoding system and method adapted to android stagefright multimedia framework
CN103366780A (en) * 2012-03-31 2013-10-23 盛乐信息技术(上海)有限公司 Multimedia player engine system and use method thereof, and multimedia player
CN103713891A (en) * 2012-10-09 2014-04-09 阿里巴巴集团控股有限公司 Method and device for graphic rendering on mobile device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100810649B1 (en) 2007-06-25 2008-03-06 주식회사 모비더스 System and method for moving picture file and multimedia file synthesis
KR100950911B1 (en) * 2008-02-22 2010-04-01 주식회사 텔레칩스 Mobile terminal sharing decoded multimedia video and audio signal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366780A (en) * 2012-03-31 2013-10-23 盛乐信息技术(上海)有限公司 Multimedia player engine system and use method thereof, and multimedia player
CN103713891A (en) * 2012-10-09 2014-04-09 阿里巴巴集团控股有限公司 Method and device for graphic rendering on mobile device
CN102857833A (en) * 2012-10-15 2013-01-02 深圳市佳创软件有限公司 Audio decoding system and method adapted to android stagefright multimedia framework

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李向军 et al.: "An Extended Design of a Multimedia Player for Mobile Terminals Based on the Android Platform", Microelectronics & Computer *
马建设 et al.: "Development of a Video Player Based on the Android System", Computer Applications and Software *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109729417A (en) * 2019-03-28 2019-05-07 深圳市酷开网络科技有限公司 A kind of video-see play handling method, smart television and storage medium
CN109729417B (en) * 2019-03-28 2019-09-10 深圳市酷开网络科技有限公司 A kind of video-see play handling method, smart television and storage medium
CN118013901A (en) * 2024-04-10 2024-05-10 芯动微电子科技(武汉)有限公司 Prototype verification system and method for image signal processor

Also Published As

Publication number Publication date
KR101577012B1 (en) 2015-12-11

Similar Documents

Publication Publication Date Title
CN105245795A (en) Method of composing multimedia data and video player for playing moving pictures in an android operating system
CN106165403B (en) Data output device, data output method and data creation method
RU2387013C1 (en) System and method of generating interactive video images
CN114302196B (en) Display device, external device and play parameter adjusting method
US5566290A (en) Multi-media device
US7213228B2 (en) Methods and apparatus for implementing a remote application over a network
US6078328A (en) Compressed video graphics system and methodology
CN108881916A Video-optimized processing method and processing device for remote desktop
MXPA06009473A (en) Display processing device.
WO2006120821A1 (en) Image processing system
CN103947221A (en) User interface display method and device using same
CN111405221B (en) Display device and display method of recording file list
CN111899322A (en) Video processing method, animation rendering SDK, device and computer storage medium
US20210289263A1 (en) Data Transmission Method and Device
CN112118468A (en) Method for changing color of peripheral equipment along with color of picture and display equipment
CN106973320A (en) A kind of multi-path flash demo method, system and intelligent television
CN101523909B (en) Image display device, image data providing device, image display system, image display system control method, control program, and recording medium
CN110505511A Method, apparatus, system and computing device for playing video in a web page
CN113630655A (en) Method for changing color of peripheral equipment along with picture color and display equipment
CN112203154A (en) Display device
CN1585019A (en) Apparatus, systems and methods relating to an improved media player
CN107734388A Method and device for playing a file displayed at television startup
CN112162764A (en) Display device, server and camera software upgrading method
CN112399199A (en) Course video playing method, server and display equipment
US20220210520A1 (en) Online video data output method, system, and cloud platform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180208

Address after: Nine digital road, 31 street, 31 street, 53, Seoul, South Korea, 811

Applicant after: Le Ning card company

Address before: 43 new street Pu Lu Seocho South Korea Seoul special city 37,2 layer

Applicant before: EDUFEELMEIDA

TA01 Transfer of patent application right
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160113

WD01 Invention patent application deemed withdrawn after publication