CN101232598A - Equipment and method for displaying video image

Equipment and method for displaying video image

Info

Publication number
CN101232598A
Authority
CN
China
Prior art keywords
video
video window
image
background
frame
Prior art date
Legal status
Pending
Application number
CNA2008101011597A
Other languages
Chinese (zh)
Inventor
曹虹
曹玉弟
周翀
Current Assignee
Vimicro Corp
Original Assignee
Vimicro Corp
Priority date
Filing date
Publication date
Application filed by Vimicro Corp filed Critical Vimicro Corp
Priority to CNA2008101011597A priority Critical patent/CN101232598A/en
Publication of CN101232598A publication Critical patent/CN101232598A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention discloses a method for displaying a video image. The method is as follows: create a first video window, and display in the first video window background content selected or input by a user; composite the image captured by a physical camera with the content currently displayed in the first video window, and display the result in a second video window. An embodiment of the invention also discloses a device for displaying a video image. With this method, the user can control the display of a video image that carries background content.

Description

Method and device for displaying a video image
Technical field
The present invention relates to the field of image processing, and in particular to a method and device for displaying video images.
Background art
At present, when communication software is used for video chat or video conferencing, a background animation or picture is superimposed on the image captured by the physical camera and displayed in the video window, in order to enhance the user experience.
As shown in Figure 1, after the communication software is opened, it automatically calls a decoding and compositing module registered in the software. The decoding and compositing module first reads a background animation file or static picture according to the system default path or the user's selection; it then receives the video frames sent by the physical camera, cyclically obtains frames from the animation file or reads the static pictures in order, composites each obtained animation frame or picture with a video frame, and displays the superimposed image in the video window.
For example, suppose the playback time of the background animation file is 10 seconds and the sampling interval is 1/25 second; the frame rate is then 25 frames per second and the background animation file is decomposed into 250 frames of static image data. Suppose the frame rate of the video images delivered by the physical camera is also 25 frames per second. Under normal conditions, the decoding and compositing module composites the 250 decoded frames of the background animation file one-to-one with the first 250 frames transmitted by the physical camera driver; similarly, the 250 decoded still frames are composited one-to-one with the second 250 frames transmitted by the physical camera. And so on: each cycle produces a video image output with an animated background, which is displayed in the video window.
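As a minimal sketch of this fixed cyclic pairing (the frame counts and the helper below are illustrative assumptions, not code from the patent), the index of the background frame composited with each camera frame in the prior art could be computed as follows:
#include <cstdio>
// Hypothetical illustration of the prior-art fixed mapping: the decoded
// background animation repeats every kBackgroundFrames frames, so camera
// frame i is always paired with background frame i % kBackgroundFrames.
int main()
{
    const int kBackgroundFrames = 250;   // 10-second animation decoded at 25 frames/second
    for (int cameraFrame = 0; cameraFrame < 500; ++cameraFrame) {
        int backgroundFrame = cameraFrame % kBackgroundFrames;
        if (cameraFrame % 125 == 0)      // print a few sample pairings
            std::printf("camera frame %d -> background frame %d\n", cameraFrame, backgroundFrame);
    }
    return 0;
}
The user has no way to influence this pairing, which is exactly the limitation the invention addresses.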
The decoding and compositing module is registered in the communication software as, for example, a virtual device or a DirectShow filter; common communication software includes QQ, MSN, and so on.
In the course of making the present invention, the inventors found that the prior art has at least the following technical problem:
In the prior art, the decoding and compositing module obtains the data frames of the background animation file or the static pictures in a fixed manner and composites them with the video frames captured by the physical camera for display; it cannot respond to user messages, so the user cannot control the display of the video image that carries background information.
Summary of the invention
Embodiments of the invention provide a method and a device for displaying video images, in order to solve the prior-art problem that the user cannot control the display of a video image that carries background information.
An embodiment of the invention provides a method for displaying a video image, the method comprising:
creating a first video window, and displaying, in the first video window, background content selected or input by a user;
compositing the image captured by the physical camera with the content currently displayed in the first video window, and displaying the result in a second video window.
An embodiment of the invention provides a device for displaying a video image, the device comprising:
a background module, used to create the first video window and to display, in the first video window, the background content selected or input by the user;
a compositing module, used to composite the image captured by the physical camera with the content currently displayed in the first video window, and to send the composited image to the second video window for display.
In the present invention, when video images are played, the first video window is created first and is used to display the background content selected or input by the user; the content currently displayed in the first video window is then composited with the image captured by the physical camera and displayed in the second video window. The creation of the first video window provides a platform for interacting with the user, so that the user can control the display of the video image with background content in the second video window.
Description of drawings
Fig. 1 is a schematic diagram of displaying a video image with a background in the prior art;
Fig. 2 is a schematic flow chart of the method provided by an embodiment of the invention;
Fig. 3A is a schematic flow chart of an example of the invention;
Fig. 3B is a schematic flow chart of an example of the invention;
Fig. 3C is a schematic flow chart of an example of the invention;
Fig. 4A is a schematic structural diagram of the device provided by an embodiment of the invention;
Fig. 4B is a schematic example of the device provided by an embodiment of the invention.
Embodiment
To respond to user control when background content is composited with the image captured by the physical camera for display, an embodiment of the invention provides a method for displaying a video image. In this method, when video images are played, a first video window is created first, the background content selected or input by the user is displayed in the first video window, and the content of the first video window is then composited with the image captured by the physical camera for display.
Referring to Fig. 2, the method for displaying a video image provided by an embodiment of the invention specifically comprises the following steps:
Step 20: create the first video window;
Step 21: display, in the created first video window, the background content selected or input by the user;
In this step, as a first embodiment, displaying the background file selected by the user in the first video window specifically comprises:
Step S01: receive a playback request containing information on the background file selected by the user; the background file information may be, for example, the storage path of the background file;
Step S02: read the background file, decode it, and play the decoded file in the first video window.
The background file includes but is not limited to: video files, picture files, and so on. The video files include but are not limited to: Flash animation files, video files in other formats, and so on. When a video file is played in the first video window, it may be played at the frame rate of the video file itself; when a picture file is played, the pictures in the picture file may be read in sequence at a predefined rate and displayed frame by frame in the first video window. A picture file may contain one or more pictures.
Preferably, while the first video window is playing the background file, the playback of the background file in the first video window may be controlled according to input control information; specifically, the playback rate, the playback progress, and so on of the background file may be controlled. For example, the user enters a desired playback rate value by, say, selecting from a list, and on receiving this value the system plays the background file at that rate; as another example, the user determines a desired playback position by dragging a playback-progress scroll bar, and the system starts playing in the first video window from the data frame of the background file corresponding to that position, as illustrated by the sketch below.
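A minimal sketch of how such control input could be mapped to playback state (the function names, the linear scroll-bar mapping, and C++17 are assumptions made for illustration, not part of the patent text):
#include <algorithm>
// Hypothetical helper: translate a progress scroll-bar position in [0, scrollMax]
// into the index of the background data frame to start playing from.
int FrameIndexFromScrollBar(int scrollPos, int scrollMax, int totalFrames)
{
    if (scrollMax <= 0 || totalFrames <= 0)
        return 0;
    scrollPos = std::clamp(scrollPos, 0, scrollMax);
    return static_cast<int>(static_cast<long long>(scrollPos) * (totalFrames - 1) / scrollMax);
}
// Hypothetical helper: a playback rate chosen from a list (frames per second)
// becomes the delay between successive background frames, in milliseconds.
int FrameDelayMsFromRate(double framesPerSecond)
{
    return framesPerSecond > 0.0 ? static_cast<int>(1000.0 / framesPerSecond) : 0;
}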
As a second embodiment, displaying, in the first video window, the background information input by the user specifically comprises:
Step S11: receive the background information input by the user through the first video window; this background information includes but is not limited to: images, text, and so on;
Step S12: display the received background information in the first video window.
Step 22: composite the image captured by the physical camera with the content currently displayed in the first video window, and display the result in the second video window.
Corresponding to the first embodiment of step 21, this step specifically comprises:
Step S21: obtain a video frame of the image currently captured by the physical camera;
Here, the video frame of the image currently captured by the physical camera may be obtained actively, or the physical camera may actively send the video frame formed from the image it currently captures.
Step S22: obtain the data frame currently being played in the first video window;
Each time the first video window plays a data frame of the background file, that data frame is cached in a buffer area, so the buffer always holds the data frame currently being played; this step can therefore obtain the data frame currently played in the first video window directly from the buffer (see the sketch below).
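A minimal sketch of such a single-slot buffer for the currently played frame (a thread-safe illustration under assumed types; this is not the patent's own code):
#include <mutex>
#include <vector>
// Hypothetical buffer holding the background data frame currently being played.
// The playback side overwrites it on every frame; the compositing step reads
// whatever frame is current at the moment it runs.
class CurrentFrameBuffer
{
public:
    void Store(const std::vector<unsigned char>& frame)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        frame_ = frame;
    }
    std::vector<unsigned char> Load() const
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return frame_;
    }
private:
    mutable std::mutex mutex_;
    std::vector<unsigned char> frame_;   // raw pixel data of the current frame
};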
Step S23: composite the obtained data frame with the obtained video frame;
Step S24: output the video image obtained after compositing and display it in the second video window.
It should be noted that steps S21 and S22 may also be performed simultaneously. A sketch of the compositing itself follows.
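The patent does not fix a particular compositing operation; as one plausible sketch, a per-pixel blend of a background data frame and a camera video frame of the same size could look like the following (the packed 24-bit RGB layout and the blend factor are assumptions made for illustration):
#include <cstddef>
#include <cstdint>
#include <vector>
// Hypothetical per-pixel blend of a background frame over a camera frame.
// Both buffers are assumed to be packed 24-bit RGB of identical dimensions;
// alpha = 0 keeps only the camera image, alpha = 255 keeps only the background.
std::vector<std::uint8_t> CompositeFrames(const std::vector<std::uint8_t>& camera,
                                          const std::vector<std::uint8_t>& background,
                                          std::uint8_t alpha)
{
    std::vector<std::uint8_t> out(camera.size());
    for (std::size_t i = 0; i < camera.size() && i < background.size(); ++i) {
        out[i] = static_cast<std::uint8_t>((background[i] * alpha + camera[i] * (255 - alpha)) / 255);
    }
    return out;
}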
Corresponding to the second embodiment of step 21, this step specifically comprises:
Step S31: obtain a video frame of the image currently captured by the physical camera;
Here, the video frame of the image currently captured by the physical camera may be obtained actively, or the physical camera may actively send the video frame formed from the image it currently captures.
Step S32: obtain the background information currently displayed in the first video window;
When the first video window receives the background information input by the user, in addition to displaying it, the window may also cache the background information in a buffer area, so this step can obtain the background information currently displayed in the first video window directly from the buffer. Of course, the user may also delete or modify the content displayed in the window, in which case the buffer needs to be updated accordingly.
Step S33: composite the obtained background information with the obtained video frame;
Step S34: output the video image obtained after compositing and display it in the second video window.
It should be noted that steps S31 and S32 may also be performed simultaneously.
Preferably, when compositing for display, part or all of the content currently displayed in the first video window may be composited with the image captured by the physical camera, according to configured parameters. For example, the first video window may be configured so that only its upper half is shown; in that case, the content displayed in the upper half of the first video window is extracted and then composited with the image captured by the physical camera for display, as in the sketch below.
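A minimal sketch of extracting such a configured sub-region before compositing (the packed 24-bit RGB layout and the function name are assumptions made for illustration):
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>
// Hypothetical extraction of a rectangular region from a packed 24-bit RGB frame,
// e.g. the upper half of the first video window when the configured parameters
// say that only that half takes part in compositing.
std::vector<std::uint8_t> ExtractRegion(const std::vector<std::uint8_t>& frame, int frameWidth,
                                        int top, int left, int height, int width)
{
    const int kBytesPerPixel = 3;
    std::vector<std::uint8_t> region(static_cast<std::size_t>(height) * width * kBytesPerPixel);
    for (int row = 0; row < height; ++row) {
        const std::uint8_t* src = frame.data()
            + (static_cast<std::size_t>(top + row) * frameWidth + left) * kBytesPerPixel;
        std::uint8_t* dst = region.data() + static_cast<std::size_t>(row) * width * kBytesPerPixel;
        std::copy(src, src + static_cast<std::size_t>(width) * kBytesPerPixel, dst);
    }
    return region;
}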
Of course, when the first video window is not playing any background file and has not received any user input, the first video window creates a fully transparent bitmap, and the result of compositing it with the video frame captured by the physical camera is simply the video frame itself.
The method provided by the invention is described below with a specific example:
In this example, the ActiveX Flash control provided by Macromedia is used to create the first video window, and a Flash animation is played in the created first video window, as follows:
Step 1: create the first video window, add to this window the ActiveX Flash control provided by Macromedia (in effect a Flash player), and correspondingly create an HBITMAP object used to capture the animation frame currently being played in the first video window. This can be implemented with reference to the following code:
void PrepareFromProxyWindow()
{
    CComPtr<IShockwaveFlash> m_pFlashCtrl;   // Flash control
    CComQIPtr<IViewObject> m_pFlashDraw;     // Flash drawing interface
    HBITMAP m_hDIBFlash;                     // animation-frame bitmap object
    BYTE* m_pFlashBuf;                       // animation-frame buffer
    // Create the Flash object
    GetDlgControl(IDC_SHOCKWAVEFLASH, __uuidof(IShockwaveFlash),
        (void**)&m_pFlashCtrl);
    m_pFlashDraw = m_pFlashCtrl;
    IntialFlashBuffer();
    m_hDIBFlash = CreateDIBSection(... (void**)&m_pFlashBuf, ...);
}
Step 2: play the Flash animation as the background in the first video window. This can be implemented with reference to the following code:
PlayFlashInProxyWindow()
{
    HDC hdc = CreateCompatibleDC(NULL);
    SelectObject(hdc, m_hDIBFlash);
    m_pFlashDraw->Draw(... hdc, ...);
}
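It is worth noting (an inference from standard GDI behavior rather than text stated in the patent) that selecting the DIB section m_hDIBFlash into the memory device context and then calling m_pFlashDraw->Draw(...) makes the Flash control render its current animation frame into the DIB's pixel bits, which is why the later steps can read that frame directly from the buffer m_pFlashBuf.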
Step 3: as shown in Fig. 3A, receive a video frame from the physical camera, and use the created animation-frame object m_hDIBFlash to obtain, from the animation-frame buffer m_pFlashBuf, the animation frame currently being played in the first video window;
Step 4: composite the video frame from the physical camera with the obtained animation frame, and display the result in the video playback window provided by the communication software;
Step 5: as shown in Fig. 3B, receive the text information the user inputs into the first video window, and add this text information to the animation-frame buffer m_pFlashBuf;
Step 6: receive a video frame from the physical camera, and use the created animation-frame object m_hDIBFlash to obtain, from the animation-frame buffer m_pFlashBuf, the text information input by the user;
Step 7: composite the video frame from the physical camera with the obtained text information, and display the result in the video playback window provided by the communication software;
Step 8: as shown in Fig. 3C, receive the control information input by the user, including a playback rate, and play the animation file in the first video window at that rate;
Step 9: receive a video frame from the physical camera, and use the created animation-frame object m_hDIBFlash to obtain, from the animation-frame buffer m_pFlashBuf, the animation frame currently being played in the background video window;
Step 10: composite the video frame from the physical camera with the obtained animation frame, and display the result in the video playback window provided by the communication software.
It should be noted that the execution order of steps 3-4, steps 5-7, and steps 8-10 is not strict; they may be performed in any order. A compact sketch of the per-frame handling in steps 3-4 follows.
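The following sketch shows how the per-frame handling in steps 3-4 might be glued to the DIB buffer created above (the 32-bit pixel layout, the 50/50 blend, and the function name are assumptions made for illustration; this is not the patent's own code):
#include <windows.h>
// Hypothetical per-frame handler: flashBuf points at the pixels of the DIB section
// that m_pFlashDraw->Draw() renders into, i.e. the animation frame currently shown
// in the first video window.
void OnCameraFrame(const BYTE* cameraFrame, const BYTE* flashBuf, BYTE* outputFrame,
                   int width, int height)
{
    const int kBytesPerPixel = 4;                  // assumed 32-bit frame layout
    const int total = width * height * kBytesPerPixel;
    for (int i = 0; i < total; ++i) {
        // Simple 50/50 blend as a stand-in for whatever compositing rule is used.
        outputFrame[i] = static_cast<BYTE>((cameraFrame[i] + flashBuf[i]) / 2);
    }
    // outputFrame is then handed to the video playback window of the communication software.
}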
Referring to Fig. 4 A, the embodiment of the invention also provides a kind of display device of video image, and this equipment comprises background module 40 and synthesis module 41, wherein:
Background module 40 is used to create first video window, by the background content of described first video window explicit user selection or input;
Synthesis module 41, the image and the current content displayed of described first video window that are used for the physics camera head is captured are synthesized, and the image after will synthesizing sends second video window to and shows.
Concrete, background module 40 comprises creating unit 50 and response unit 51, wherein:
Creating unit 50 is used to create first video window;
Response unit 51 is used for the background content of selecting or importing by the described first video window explicit user;
Synthesis module 41 comprises processing unit 52 and interface unit 53, wherein:
Processing unit 52, the image and the current content displayed of described first video window that are used for the physics camera head is captured are synthesized processing; This processing unit can be according to the parameter that is provided with, and the image that the partial content of current demonstration in described first video window or full content and physics camera head are captured synthesizes;
Interface unit 53 is used for sending the image after the described synthetic processing to second video window and shows.
As a first embodiment, the response unit 51 comprises a reading unit 60 and a playback unit 61, wherein:
the reading unit 60 is used to read the background file selected by the user; the background file comprises a video file or a picture file, and the video file includes but is not limited to Flash animation files, video files in other formats, and so on;
the playback unit 61 is used to play the background file in the first video window;
the processing unit 52 comprises a first acquiring unit 62 and a first unit 63, wherein:
the first acquiring unit 62 is used to obtain the video frame of the image currently captured by the physical camera and the data frame currently displayed in the first video window;
the first unit 63 is used to composite the data frame with the video frame.
Preferably, the response unit 51 further comprises a control unit 64, used to control the playback of the background file in the first video window according to input control information; specifically, this control unit may control the playback rate, the playback progress, and so on of the background file.
As a second embodiment, the response unit 51 comprises a receiving unit 65 and a display unit 66, wherein:
the receiving unit 65 is used to receive the background information input by the user; this background information includes but is not limited to images, text, and so on;
the display unit 66 displays this background information in the first video window;
the processing unit 52 comprises a second acquiring unit 67 and a second unit 68, wherein:
the second acquiring unit 67 is used to obtain the video frame of the image currently captured by the physical camera and the background information currently displayed in the first video window;
the second unit 68 is used to composite the background information with the video frame.
The second video window mentioned herein may be the video playback window provided by communication software (such as QQ or MSN).
Shown in Fig. 4 B, the DirectShow interface of virtual picture pick-up device and virtual unit is equivalent to the processing unit and the interface unit of synthesis module among Fig. 4 A respectively among the figure, and agent window is equivalent to the background module among Fig. 4 A.When virtual picture pick-up device receives the frame of video that interface that the physics picture pick-up device drives sends, obtain content displayed in the current agent window, this content and the frame of video received are synthesized, and the video window that the DirectShow interface of the image after will synthesizing by virtual unit sends MSN to shows.
Suppose the user chooses to play the background file in the proxy window at 10 frames per second (one frame every 100 ms), and the frame rate of the physical camera device is 20 frames per second, i.e. the virtual camera device receives one video frame from the physical camera device every 50 ms. During the 1st and 2nd 50 ms intervals the proxy window has only played the 1st frame of the background file, so both the 1st and the 2nd received video frames are composited with the 1st frame of the proxy window and delivered to the video window of MSN. Likewise, the 3rd and 4th received video frames are composited with the 2nd frame currently played by the proxy window, and so on, which keeps the video frames synchronized with the data frames played by the proxy window; the pairing is sketched below.
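One way to express this pairing rule (a sketch under assumed names; the patent describes only the pairing itself, not code) is to index the background frame by elapsed time:
// Hypothetical pairing rule for the example above: camera frames arrive every
// cameraIntervalMs, background frames advance every backgroundIntervalMs, and each
// camera frame is composited with whichever background frame is current.
int BackgroundFrameForCameraFrame(int cameraFrameIndex,     // 0-based camera frame number
                                  int cameraIntervalMs,     // e.g. 50 ms (20 frames/second)
                                  int backgroundIntervalMs) // e.g. 100 ms (10 frames/second)
{
    if (backgroundIntervalMs <= 0)
        return 0;
    int elapsedMs = cameraFrameIndex * cameraIntervalMs;
    return elapsedMs / backgroundIntervalMs;                // 0-based background frame index
}
// With 50 ms and 100 ms this pairs camera frames 0 and 1 with background frame 0,
// camera frames 2 and 3 with background frame 1, and so on, matching the example.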
Take the user drawing a circle with the mouse as an example. Suppose the user starts drawing a circle in the proxy window at the 1st second and takes 2 seconds to finish it. Before the 1st second the user has input nothing, so the images delivered to the video window of MSN are just the images captured by the physical camera. At the 2nd second the virtual device receives the 20th video frame sent by the physical camera device; at this point the user has drawn half of the circle, so the 20th video frame is composited with this half circle and output to the video window of MSN. At the 3rd second the virtual device receives the 40th video frame; by then the user has input the whole circle, so the 40th video frame is composited with the whole circle and output to the video window of MSN. The 1st-19th and 21st-39th video frames received from the physical camera device are likewise composited with the circle as the user is drawing it and output, so that the video frames stay synchronized with the user's input.
In summary, the beneficial effects of the invention are as follows:
In the solution provided by the embodiments of the invention, when video images are played, the first video window is created first and is used to display the background content selected or input by the user, and the content currently displayed in the first video window is composited with the image captured by the physical camera and displayed in the second video window. The creation of the first video window provides a platform for interacting with the user, so that the user can control the display of the video image in the second video window.
At the same time, when background content is displayed in the first video window, the first video window may play a background file that the user selects and expects to be played as the background in the second video window; it may also receive background information input by the user that is expected to be displayed as the background in the second video window, providing the user with a more flexible interaction platform.
Further, when the first video window plays a background file that the user selects and expects to be played as the background in the second video window, it can respond to control information input by the user and control the playback of the background file in the first video window accordingly, for example its playback rate and playback position, thereby controlling the background effect in the second video window and greatly improving the user experience.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the invention. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (14)

1. A method for displaying a video image, characterized in that the method comprises:
creating a first video window, and displaying, in the first video window, background content selected or input by a user;
compositing an image captured by a physical camera with the content currently displayed in the first video window, and displaying the result in a second video window.
2. The method of claim 1, characterized in that displaying, in the first video window, the background content selected by the user comprises:
reading a background file selected by the user, and playing the background file in the first video window;
and that compositing the image captured by the physical camera with the content currently displayed in the first video window comprises:
obtaining a video frame of the image currently captured by the physical camera and the data frame currently played in the first video window; and compositing the data frame with the video frame.
3. The method of claim 2, characterized in that the background file comprises: a video file or a picture file.
4. The method of claim 2, characterized in that the method further comprises:
controlling the playback of the background file in the first video window according to input control information.
5. The method of claim 4, characterized in that the controlling comprises:
controlling the playback rate and the playback progress of the background file in the first video window.
6. The method of claim 1, characterized in that displaying, in the first video window, the background content input by the user comprises:
receiving background information input by the user, and displaying the background information in the first video window;
and that compositing the image captured by the physical camera with the content currently displayed in the first video window comprises:
obtaining a video frame of the image currently captured by the physical camera and the background information currently displayed in the first video window; and compositing the background information with the video frame.
7. The method of claim 6, characterized in that the background information comprises:
images and/or text.
8. The method of claim 1, characterized in that, according to configured parameters, part or all of the content currently displayed in the first video window is composited with the image captured by the physical camera.
9. A device for displaying a video image, characterized in that the device comprises:
a background module, used to create a first video window and to display, in the first video window, background content selected or input by a user;
a compositing module, used to composite an image captured by a physical camera with the content currently displayed in the first video window, and to send the composited image to a second video window for display.
10. The device of claim 9, characterized in that the background module comprises:
a creating unit, used to create the first video window;
a response unit, used to display, in the first video window, the background content selected or input by the user;
and that the compositing module comprises:
a processing unit, used to composite the image captured by the physical camera with the content currently displayed in the first video window;
an interface unit, used to send the composited image to the second video window for display.
11. The device of claim 10, characterized in that the response unit comprises:
a reading unit, used to read the background file selected by the user;
a playback unit, used to play the background file in the first video window;
and that the processing unit comprises:
a first acquiring unit, used to obtain the video frame of the image currently captured by the physical camera and the data frame currently displayed in the first video window;
a first unit, used to composite the data frame with the video frame.
12. The device of claim 11, characterized in that the response unit further comprises:
a control unit, used to control the playback of the background file in the first video window according to input control information.
13. The device of claim 10, characterized in that the response unit comprises:
a receiving unit, used to receive the background information input by the user;
a display unit, used to display the background information in the first video window;
and that the processing unit comprises:
a second acquiring unit, used to obtain the video frame of the image currently captured by the physical camera and the background information currently displayed in the first video window;
a second unit, used to composite the background information with the video frame.
14. The device of claim 10, characterized in that the processing unit is used to:
composite, according to configured parameters, part or all of the content currently displayed in the first video window with the image captured by the physical camera.
CNA2008101011597A 2008-02-28 2008-02-28 Equipment and method for displaying video image Pending CN101232598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008101011597A CN101232598A (en) 2008-02-28 2008-02-28 Equipment and method for displaying video image

Publications (1)

Publication Number Publication Date
CN101232598A true CN101232598A (en) 2008-07-30

Family

ID=39898734

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008101011597A Pending CN101232598A (en) 2008-02-28 2008-02-28 Equipment and method for displaying video image

Country Status (1)

Country Link
CN (1) CN101232598A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110083412A (en) * 2011-10-31 2019-08-02 三星电子株式会社 The non-transitory computer-readable medium of mobile communication terminal and store instruction
CN110083412B (en) * 2011-10-31 2023-06-02 三星电子株式会社 Mobile communication terminal and non-transitory computer readable medium storing instructions
CN103828350A (en) * 2011-12-01 2014-05-28 坦戈迈公司 Augmenting a video conference
CN103209312A (en) * 2012-01-12 2013-07-17 中兴通讯股份有限公司 Video player, mobile terminal and method for mobile terminal to play videos
WO2013104142A1 (en) * 2012-01-12 2013-07-18 中兴通讯股份有限公司 Video player, mobile terminal, and video playing method of mobile terminal
CN103209312B (en) * 2012-01-12 2018-03-23 中兴通讯股份有限公司 A kind of method of video player, mobile terminal and mobile terminal playing video
CN105120301A (en) * 2015-08-25 2015-12-02 小米科技有限责任公司 Video processing method and apparatus, and intelligent equipment
WO2019062571A1 (en) * 2017-09-30 2019-04-04 腾讯科技(深圳)有限公司 Dynamic image synthesis method and device, terminal and storage medium
CN109598775A (en) * 2017-09-30 2019-04-09 腾讯科技(深圳)有限公司 A kind of dynamic image synthetic method, device, terminal and storage medium
US11308674B2 (en) 2017-09-30 2022-04-19 Tencent Technology (Shenzhen) Company Limited Dynamic image compositing method and apparatus, terminal and storage medium
CN109168076A (en) * 2018-11-02 2019-01-08 北京字节跳动网络技术有限公司 Method for recording, device, server and the medium of online course

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20080730