CN101094407B - Video circuit, video system and video processing method - Google Patents
- Publication number: CN101094407B (application CN200710126493A)
- Authority: CN (China)
- Prior art keywords: video, sequence, video data, subframe, metadata
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Television Signal Processing For Recording (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
A video processing system applies sub-frame processing to video data to generate a first sequence and a second sequence of sub-frames of video data, each defined by metadata. A processing circuit combines the first sequence and the second sequence of sub-frames of video data to generate a third sequence of sub-frames of video data. An adaptive video processing circuit receives encoded source video data, raw source video data, similar-display metadata, target-display metadata and/or target display information. From this input the adaptive video processing circuit generates one or more outputs comprising tailored metadata, encoded target display video data, target display video data, and a digital rights management/billing signal.
Description
Technical field
The present invention relates to video processing devices and, more particularly, to preparing video information for display on a video player.
Background art
Films and other video content are usually shot on 35mm film with a 16:9 aspect ratio. When a film enters the primary market, the 35mm film is duplicated and distributed to theaters so that the film can be sold to audiences. For example, theaters typically project the film onto a "big screen" for paying viewers by passing a high-lumen beam through the 35mm film. Once a film leaves the "big screen", it enters the secondary market, where it is distributed to individual viewers through the sale of video discs or tapes containing the film (for example VHS tapes, DVD, high-definition (HD)-DVD, Blu-ray DVD and other recording media). Other methods of distributing a film in the secondary market include downloading over the Internet and broadcasting by television network providers.
For distribution through the secondary market, the content on the 35mm film is converted, frame by frame, into raw digital video. To achieve HD resolution, each film frame needs at least 1920 x 1080 pixels, and for a film one to two hours long this raw digital video requires roughly 25GB of storage. To avoid this storage requirement, an encoder is typically used to encode and compress the raw digital video, which significantly reduces the storage requirement. Coding standards include, but are not limited to, Motion Picture Experts Group (MPEG)-1, MPEG-2, MPEG-2 enhanced for HD, MPEG-4 AVC, H.261, H.263 and the Society of Motion Picture and Television Engineers (SMPTE) VC-1.
To meet the demand for viewing films on telephones, personal digital assistants (PDAs) and other handheld devices, compressed digital video data is usually downloaded or uploaded over the Internet or stored on the handheld device; the handheld device then decompresses and decodes the video data so that it can be presented to the user on the device's video display. However, the size of such a handheld device typically limits the size of its video display (screen). For example, a small screen on a handheld device is often only 2 inches (5.08 cm) measured diagonally. By contrast, a television screen is typically 30-60 inches (76.2 cm-152.4 cm) or more measured diagonally. This difference in screen size has a substantial effect on the image quality perceived by the viewer.
For example, typical conventional PDA and high-end telephone screens have the same width-to-height ratio as the human eye. On a small screen, the human eye often cannot perceive fine details such as text, facial features and distant objects. For example, in a theater, for a panorama containing an actor in the distance and a railway sign, the viewer can easily identify the actor's facial expression and read the text on the sign. On an HD television screen the viewer may also be able to do so. But when the picture is converted to the small screen of a handheld device, the limitations of the human eye make it impossible to identify the facial features or read the text.
No matter how large a screen is, its resolution is limited either by technology or by the human eye; on a small screen, however, these limits are more pronounced. For example, typical conventional PDA and high-end telephone screens have a 4:3 aspect ratio and can usually display QVGA video at a resolution of 320 x 240 pixels. By contrast, HD television screens typically have a 16:9 aspect ratio and can display resolutions of up to 1920 x 1080 pixels. Converting HD video to fit a small screen with far fewer pixels merges pixel data and loses a great deal of video detail. Increasing the pixel count of the small screen to the level of an HD television would avoid this conversion, but, as noted above, the human eye itself imposes limits and the video detail would still be lost.
Video transcoding and editing systems are typically used to convert video from one format and resolution to another for playback on a particular screen. For example, the input to such a system might be DVD video, and after the conversion process the output video is played back on a QVGA screen. Interactive editing functions may also be used during conversion to generate an edited and converted output video. To support multiple screen sizes, resolutions and coding standards, multiple output video streams or files must be generated.
Video is normally shot in a "big screen" format that works well for theater viewing. Because the video is later transcoded, the "big screen" format video may not adequately support conversion to small screen sizes. In such cases, no conversion process can produce video suitable for display on the small screen. Further limitations and disadvantages of existing and conventional approaches will become apparent to one of ordinary skill in the art through comparison of such approaches with the technical solution of the present invention, described herein with reference to the accompanying drawings.
Summary of the invention
The present invention is directed to apparatus and methods of operation that are described in detail in the description of the drawings, the embodiments and the claims.
According to an aspect of the present invention, a video circuit is provided that receives encoded video representing a sequence of full frames of video data, the video circuit comprising:
a processing circuit that applies decoding and sub-frame processing to the encoded video to generate a first sequence of sub-frames of video data and a second sequence of sub-frames of video data;
wherein the first sequence of sub-frames of video data corresponds to a region within the sequence of full frames of video data that is different from that of the second sequence of sub-frames of video data;
wherein the processing circuit combines the first sequence of sub-frames of video data and the second sequence of sub-frames of video data to generate a third sequence of sub-frames of video data.
In the video circuit of the present invention, the processing circuit encodes the third sequence of sub-frames of video data.
In the video circuit of the present invention, the processing circuit applies the decoding and the sub-frame processing sequentially.
In the video circuit of the present invention, the processing circuit applies the decoding and the sub-frame processing concurrently.
In the video circuit of the present invention, the processing circuit performs the sub-frame processing based on sub-frame metadata.
In the video circuit of the present invention, before performing the sub-frame processing, the processing circuit tailors the sub-frame metadata based on characteristics of a target display device.
In the video circuit of the present invention, the processing circuit tailors the third sequence of sub-frames of video data based on characteristics of a target display device.
In the video circuit of the present invention, the processing circuit comprises digital rights management.
In the video circuit of the present invention, the processing circuit comprises billing management.
According to an aspect of the present invention, a video system is provided that receives video representing a sequence of full frames of video data, the video system comprising:
a processing circuit that applies sub-frame processing to the video to generate a first sequence of sub-frames of video data and a second sequence of sub-frames of video data;
wherein the first sequence of sub-frames of video data is defined by at least a first parameter, the second sequence of sub-frames of video data is defined by at least a second parameter, and the at least first parameter and the at least second parameter together constitute metadata;
wherein the processing circuit receives the metadata in order to carry out the sub-frame processing;
wherein the processing circuit combines the first sequence of sub-frames of video data and the second sequence of sub-frames of video data to generate a third sequence of sub-frames of video data.
In the video system of the present invention, the processing circuit receives the metadata via a communication link.
In the video system of the present invention, the processing circuit receives the metadata from a removable storage device.
In the video system of the present invention, the metadata comprises a metadata file, and the metadata file comprises at least one video adjustment parameter associated with at least a portion of the first sequence of sub-frames of video data.
In the video system of the present invention, the third sequence of sub-frames of video data is delivered for display on a target display.
In the video system of the present invention, before carrying out the sub-frame processing, the processing circuit tailors the metadata.
In the video system of the present invention, the tailoring comprises adjusting the third sequence of sub-frames of video data for display on the target display.
In the video system of the present invention, the target display is located at a location other than that of the video system.
According to an aspect of the present invention, a method of video processing is provided, comprising:
receiving video data representing a sequence of full frames of video data;
applying sub-frame processing to the video data to generate a first sequence of sub-frames of video data and a second sequence of sub-frames of video data, the first sequence of sub-frames of video data being defined by at least a first parameter, the second sequence of sub-frames of video data being defined by at least a second parameter, and the at least first parameter and the at least second parameter together constituting metadata;
combining the first sequence of sub-frames of video data and the second sequence of sub-frames of video data to generate a third sequence of sub-frames of video data.
In the method of the present invention,
the first sequence of sub-frames of video data corresponds to a first region within the sequence of full frames of video data;
the second sequence of sub-frames of video data corresponds to a second region within the sequence of full frames of video data;
and the first region is different from the second region.
In the method of the present invention, the method further comprises decoding the video data.
In the method of the present invention, the decoding of the video data occurs before the sub-frame processing of the video data.
In the method of the present invention, the method further comprises encoding the third sequence of sub-frames of video data.
In the method of the present invention, the method further comprises, before applying the sub-frame processing to the video data, tailoring the metadata based on characteristics of a target video display device.
In the method of the present invention, the method further comprises tailoring the third sequence of sub-frames of video data based on characteristics of a target display device.
In the method of the present invention, the method further comprises applying digital rights management to at least one of the video data, the metadata and the third sequence of sub-frames of video data.
In the method of the present invention, the method further comprises applying a billing management operation to at least one of the video data, the metadata and the third sequence of sub-frames of video data.
In the method of the present invention, the method further comprises receiving the metadata via a communication link.
In the method of the present invention, the method further comprises receiving the metadata from a removable storage device.
In the method of the present invention, the metadata comprises a metadata file, and the metadata file comprises at least one video adjustment parameter associated with at least a portion of the first sequence of sub-frames of video data.
In the method of the present invention, the method further comprises sending the third sequence of sub-frames of video data to a target display for display.
Various aspects and advantages of the present invention will become more apparent from the following detailed description of the invention made with reference to the accompanying drawings.
Description of drawings
Fig. 1 is a structural diagram of an adaptive video processing system according to one embodiment of the present invention;
Fig. 2 is a structural diagram of several embodiments of an adaptive video processing system and a sub-frame metadata generation system according to embodiments of the present invention;
Fig. 3 is a structural diagram of a video capture/sub-frame metadata generation system according to one embodiment of the present invention;
Fig. 4 is a schematic diagram of exemplary original video frames and corresponding sub-frames;
Fig. 5 is a structural diagram of one embodiment of a video processing system providing a graphical user interface that includes video editing tools for creating sub-frames;
Fig. 6 is a schematic diagram of exemplary original video frames and corresponding sub-frames;
Fig. 7 is a table of exemplary sub-frame metadata for a sequence of sub-frames;
Fig. 8 is a table of exemplary sub-frame metadata including editing information for a sub-frame;
Fig. 9 is a schematic diagram of a video processing circuit according to one embodiment of the present invention;
Figure 10 is a schematic diagram of the structure and operation of an adaptive video processing circuit according to one embodiment of the present invention;
Figure 11 is a functional block diagram of a first specific embodiment of an adaptive video processing circuit according to the present invention;
Figure 12 is a functional block diagram of a second specific embodiment of an adaptive video processing circuit according to the present invention;
Figure 13 is a functional block diagram of a third specific embodiment of an adaptive video processing circuit according to the present invention;
Figure 14 is a functional block diagram of a fourth specific embodiment of an adaptive video processing circuit according to the present invention;
Figure 15 is a flow chart of a video processing procedure according to one embodiment of the present invention.
Embodiment
Fig. 1 is a structural diagram of an adaptive video processing system according to one embodiment of the present invention. The adaptive video processing system 10 comprises a decoder 22, an encoder 24, a metadata processing circuit 26, a target display tailoring circuit 28 and a management circuit 30. The management circuit 30 performs functions associated with video processing operations, digital rights management operations and billing operations. The adaptive video processing circuit 10 may be implemented in hardware, software, or a combination of hardware and software. In various embodiments, the adaptive video processing circuit 10 may be a general-purpose microprocessor, a special-purpose microprocessor, a digital signal processor, an application-specific integrated circuit, or other digital logic capable of executing software instructions and processing data to perform the functions described in Figs. 1-15.
The adaptive video processing circuit 10 receives one or more inputs and generates one or more outputs. Generally, the adaptive video processing circuit 10 receives a sequence of full frames of video data 11, metadata 15 and target display information 20. The sequence of full frames of video data 11 may be encoded source video 12 or raw source video 14. The sequence of full frames of video data may be captured by a camera or camera system, as will be described in detail with reference to Figs. 3-9. The sequence of full frames of video data 11 may be received directly from such a camera or from storage, for example a server.
The adaptive video processing circuit 10 may receive the sequence of full frames of video data 11 directly from a camera via a wired or wireless connection, or may receive the sequence of full frames of video data 11 from a storage device via a wired or wireless connection. This wired or wireless connection may be provided by a wireless local area network (WLAN), a wide area network (WAN), the Internet, a local area network (LAN), a satellite network, a cable network, or a combination of these networks. After receiving the sequence of full frames of video data 11, the adaptive video processing circuit 10 may store the sequence of full frames of video data in memory, or may operate directly on the sequence of full frames of video data 11 using temporary storage as needed.
A second input that the adaptive video processing circuit 10 may receive is the metadata 15. The metadata 15 comprises similar-display metadata 16 or target-display metadata 18. Generally, as will be described in detail with reference to Figs. 2-9, metadata is information that the adaptive video processing circuit 10 uses to adjust the sequence of full frames of video data and generate output for display on one or more target video devices. The manner in which the metadata 15 is used to adjust the sequence of full frames of video data 11 will be described in detail with reference to Figs. 6-15. As the names similar-display metadata 16 and target-display metadata 18 suggest, the specific metadata received by the adaptive video processing circuit 10 is directed to a specific target display or to a group of target displays. For example, the similar-display metadata 16 may comprise data intended for a group of similar displays that share the same screen resolution, the same aspect ratio and/or other features in common with the other displays in the group. The target-display metadata 18 corresponds to the specific target display of a target video player. The target-display metadata 18 is tailored specifically for adjusting the sequence of full frames of video data 11 to generate target display video.
Another input that the adaptive video processing circuit 10 may receive is the target display information 20. The target display information 20 may comprise the screen resolution of the target display of the target video player, the aspect ratio of the target display of the target video player, the format of the video data that the target display of the target video player will receive, or other information specific to the target display of the target video player. The adaptive video processing circuit 10 may use the target display information to further process the sequence of full frames of video data and/or the metadata 15 so as to tailor it to the specific target display of the target video player.
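The target display information can be pictured as a small record of the capabilities of the target display. The following is a minimal sketch under the assumption of Python-style dataclasses; the field names are purely illustrative and do not come from the patent itself:

```python
from dataclasses import dataclass

@dataclass
class TargetDisplayInfo:
    """Illustrative record of target-display characteristics (hypothetical fields)."""
    screen_width: int     # e.g. 320 for a QVGA handheld screen
    screen_height: int    # e.g. 240
    aspect_ratio: str     # e.g. "4:3" or "16:9"
    video_format: str     # e.g. "MPEG-4 AVC", the format the player accepts

# Example: a QVGA handheld target display
qvga_display = TargetDisplayInfo(320, 240, "4:3", "MPEG-4 AVC")
```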
In its various operations, the adaptive video processing circuit 10 generates two kinds of video output 31, 33 and a digital rights management (DRM)/billing signal 38. The first kind of output 31 comprises the encoded source video 12, the raw source video 14 and tailored metadata 32. The encoded source video 12 may simply pass through the adaptive video processing circuit 10 to form an output. Likewise, the raw source video 14 may simply pass through the adaptive video processing circuit 10 to form an output. The tailored metadata 32, however, is generated by the adaptive video processing circuit 10 by processing one or more of the similar-display metadata 16, the target-display metadata 18 and the target display information 20. The tailored display metadata 32 is used by a target video device having a target display to create video tailored to that target display. The target video player may use the tailored metadata 32, in combination with one or more of the encoded source video 12 and the raw source video 14, to create display information for the target display device.
The second kind of output generated by the adaptive video processing circuit 10 is target display video 33, comprising encoded target display video 34 and/or target display video 36. These outputs 34 and 36 are created by the adaptive video processing circuit 10 for display on the target display of a target video player. Each of the target video data outputs 34 and 36 is created based on the video input 11, the metadata 15 and the target display information 20. The manner in which the encoded target display video 34 and the target display video 36 are created is determined by the specific operations of the adaptive video processing circuit 10, which will be described in detail with reference to Figs. 11-15.
In one example of operation of the adaptive video processing circuit 10, the adaptive video processing circuit 10 receives encoded source video 12. The adaptive video processing circuit 10 then decodes the encoded source video 12 using the decoder 22. The adaptive video processing circuit 10 then uses the metadata 15 and/or the target display information 20 to operate on the decoded source video and generate target display video. The adaptive video processing circuit 10 then uses the encoder 24 to create the encoded target display video 34. The encoded target display video 34 is generated specifically for display on the target display. Thus, the target-display metadata 18 and/or the target display information 20 are used to process the unencoded source video, reducing it as needed for a specific target video device and its corresponding target display, to generate the target display video.
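This example amounts to a three-stage pipeline: decode, adjust per metadata and target display information, then re-encode. A rough sketch follows; the decode, apply_sub_frame_metadata and encode callables are hypothetical placeholders supplied by the caller, not functions defined by the patent or any real library:

```python
def produce_encoded_target_display_video(encoded_source, metadata, display_info,
                                          decode, apply_sub_frame_metadata, encode):
    """Sketch of the decode -> metadata processing -> encode path of circuit 10.

    decode, apply_sub_frame_metadata and encode stand in for the decoder 22,
    the metadata processing circuit 26, and the encoder 24 respectively.
    """
    # Decoder 22: recover full frames of video data from the encoded source video 12.
    full_frames = decode(encoded_source)

    # Metadata processing: carve each full frame into the sub-frame regions defined
    # by the metadata 15, scaled and ordered for the target display.
    target_frames = apply_sub_frame_metadata(full_frames, metadata, display_info)

    # Encoder 24: produce encoded target display video 34 in a format the player accepts.
    return encode(target_frames, display_info.video_format)
```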
In another example of operation of the adaptive video processing circuit 10, the adaptive video processing circuit 10 receives raw source video 14. The raw source video 14 comprises a sequence of full frames of video data. The adaptive video processing circuit 10 applies the metadata 15 and/or the target display information 20 to create the target display video 36. In contrast to the operation that creates the encoded target display video 34, the adaptive video processing circuit 10 does not encode the adjusted video when generating the target display video 36.
In another operation of the adaptive video processing circuit 10 of Fig. 1, the adaptive video processing circuit 10 receives similar-display metadata 16 and target display information 20. The similar-display metadata 16 received by the adaptive video processing circuit 10 is not generated specifically for the target display of the target video player. Therefore, the adaptive video processing circuit 10 uses its metadata processing circuit 26 to adjust the similar-display metadata 16 based on the target display information 20, generating the tailored metadata 32.
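One way to picture this tailoring step is as a rescaling of per-sub-frame geometry from the reference display of a class to the actual target screen. The sketch below is a hedged illustration only; it assumes the sub-frame position and size are stored as pixel coordinates relative to a class reference display (an assumption, not something the patent specifies), and it reuses the illustrative TargetDisplayInfo fields from the earlier sketch:

```python
def tailor_metadata(similar_metadata, class_width, class_height, target):
    """Rescale per-sub-frame geometry from a display class to one target display."""
    sx = target.screen_width / class_width
    sy = target.screen_height / class_height
    tailored = []
    for entry in similar_metadata:          # one dict per sub-frame
        tailored.append({
            **entry,
            "sf_position": (round(entry["sf_position"][0] * sx),
                            round(entry["sf_position"][1] * sy)),
            "sf_size": (round(entry["sf_size"][0] * sx),
                        round(entry["sf_size"][1] * sy)),
        })
    return tailored
```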
In another operation of the adaptive video processing circuit 10 of Fig. 1, the target display tailoring circuit 28 is used to create one or more of the encoded target display video 34 and the target display video 36. The target display tailoring circuit 28 uses the target display information 20 to further adjust the frames of data so that the output 33 is tailored specifically to the target display of the target video player.
The management circuit 30 of the adaptive video processing circuit 10 performs video processing management operations when creating the target display video 33 or the tailored metadata 32. The digital rights circuit within the management circuit 30 of the adaptive video processing circuit 10 performs digital rights management not only for the incoming source video 11 and the incoming metadata 15, but also for the outputs 31 and 33. The digital rights management circuit of the management circuit 30 may interact with a remote server or other devices to ensure that operations on the source video comprising full frames of video data are authorized.
When a user performs operations through the adaptive video processing circuit 10, the billing operations of the management circuit 30 are used to charge the user. For example, a user of a target video device requests that the adaptive video processing circuit 10 prepare target display video 36 from raw source video 14. The management circuit 30 first determines whether the user has the right to access the raw source video 14, the metadata 15 and the target display information 20 used to create the target display video 36. After a digital rights management operation determines that the user has the right to access the source video 14, the management circuit 30 initiates billing operations. These billing operations charge the user or notify the user of the fee to be deducted.
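The management flow described above amounts to an authorization gate followed by a billing action. A simplified sketch follows, with check_rights and record_charge as caller-supplied hooks standing in for the DRM lookup (possibly against a remote server) and the billing back end; both names are hypothetical:

```python
def manage_request(user, assets, check_rights, record_charge):
    """Sketch of management circuit 30: DRM check first, billing only if authorized."""
    for asset in assets:                       # e.g. source video 14, metadata 15
        if not check_rights(user, asset):      # hypothetical DRM hook
            raise PermissionError("user is not authorized for this asset")
    # Only reached when every asset is authorized.
    record_charge(user, "preparation of target display video 36")  # billing hook
```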
The adaptive video processing circuit 10 may be implemented in hardware, software, or a combination of hardware and software. The adaptive video processing circuit 10 may be implemented by a software application on a personal computer, a server computer, a set-top box, or other device. Other/additional operations of the adaptive video processing circuit 10 of Fig. 1 will be described with reference to Figs. 2-15.
Fig. 2 is a structural diagram of several embodiments of an adaptive video processing system and a sub-frame metadata generation system according to embodiments of the present invention. The framework described in Fig. 2 comprises an adaptive video processing (AVP) system and a sub-frame metadata generation (SMG) system. Generally, the SMG system and the AVP system may be distributed across one, two or more than two components of the communication infrastructure.
The sub-frame metadata generation system 100 comprises a camera 110 and/or a computing system 140. The camera 110 captures an original sequence of full frames of video data, as will be described in detail with reference to Figs. 3-9. Subsequently, the computing system 140 and/or the camera 110 generate sub-frame data based on user input. This user-specified sub-frame data indicates which sub-portions of the picture represented by the full-frame video data are to be used to create video specifically for target video players. These target video players may include the video players 144, 146, 148 and 150.
The AVP system shown in Fig. 2 is used to create sequences of sub-frames of video data from the sequence of full frames of video data and the metadata, where the metadata is generated by the SMG system. The AVP system and/or the SMG system of the camera system 100 may reside on the server 152, the digital computer 142, or the video display players 144, 146, 148 and/or 150. If the metadata and the source video are stored in the system of Fig. 2, the AVP may be carried out at a later time. Alternatively, the source video may be captured by the camera 110 and the AVP carried out immediately after the SMG application of the camera 110, the computing system 140 and/or the computing system 142 has generated the metadata.
The communication system 154 comprises one or more of a communication infrastructure 156 and/or physical media 158. The communication infrastructure 156 supports the exchange of the source video 11, the metadata 15, the target display information 20, the outputs 31, the display video 33 and the DRM/billing signal 38 described above with reference to Fig. 1. As shown, the communication infrastructure 156 may comprise the Internet and other data networks. Alternatively, the video data and other inputs and outputs may be written to physical media 158 and distributed via the physical media 158. The physical media 158 may be rented to users at a video rental store, and the user plays the media in a physical-media video player.
The adaptive video processing described in detail in the present invention uses the metadata and other inputs to operate on the full frames of video data and generate target video data for display on the video players 144, 146, 148 and/or 150. The video data 11, metadata 15 and target video display information 20 used to create target display video for the players 144, 146, 148 and 150 may be received from a single source or from multiple sources. For example, the metadata 15 may be stored on the server 152 while the source video 11 is stored elsewhere. Alternatively, the source video 11, the metadata 15 and the target display information 20 may be stored together on the server 152 or on another single device.
The adaptive video processing operations of the present invention may be performed by one or more of the computing system 142, the camera 110, the computing system 140, the players 144, 146, 148 and/or 150, and the server 152. As will be described in detail with reference to Figs. 10-15, these operations create target display video for a specific target video player.
Fig. 3 is a structural diagram of a video capture/sub-frame metadata generation system according to one embodiment of the present invention. The video capture/sub-frame metadata generation system 100 of Fig. 3 comprises a camera 110 and an SMG system 120. The camera 110 captures an original full-frame sequence of video data relating to a scene 102. The camera 110 may also capture audio via microphones 111a and 111b. The camera 110 may deliver the full frames of video data to the console 140 or pass them to the SMG system 120 for processing. The SMG system 120 of the camera 110 or the console 140 receives input from a user via a user input device 121 or 123. Based on the user input, the SMG system 120 displays one or more sub-frames on a video display that also displays the sequence of full frames of video data. From the sub-frames created from the user input and other information, the SMG system 120 creates the metadata 15. The video data output of the video capture/sub-frame metadata generation system 100 is one or more of encoded source video 12 or raw source video 14. The video capture/sub-frame metadata generation system 100 also outputs metadata 15, which may be similar-display metadata 16 and/or target-display metadata 18. The video capture/sub-frame metadata generation system 100 may also output target display information 20.
The sequence of original video frames captured by the camera 110 is of the scene 102. The scene 102 may be any scene captured by the camera 110. For example, the scene 102 may be a wide landscape containing a great deal of detail. The scene 102 may also be a head shot of actors in conversation. The scene 102 may also be an action scene of a dog chasing a ball. The type of scene 102 typically changes over time as the original video frames are captured.
In existing video capture systems, the user operates the camera 110 to capture original video frames of the scene 102 that are optimized for the "big screen" format. In the present invention, these original video frames are later converted so that they can ultimately be displayed by target video players each equipped with its own video display. Because the sub-frame metadata generation system 120 captures different types of scenes over time, the manner in which the captured video is converted to produce sub-frames for viewing on the target video players may also change over time. The "big screen" format does not always translate well to smaller screen types. Therefore, the sub-frame metadata generation system 120 of the present invention supports a manner of capturing original video frames that, when converted to a smaller format, provides high-quality video sub-frames for display on one or more video displays of target video players.
Fig. 4 is a schematic diagram of exemplary original video frames and corresponding sub-frames. As shown, the video display 400 has a viewing area that displays the sequence of original video frames representing the scene 102 of Fig. 3. According to the embodiment shown in Fig. 4, the SMG system 120 also responds to additional signals representing user input by displaying, in addition to the sub-frame 402, further sub-frames 404 and 406 associated with the sequence of original video frames on the video display 400. Each of these sub-frames has an aspect ratio and size corresponding to one of a plurality of target video displays. In addition, the SMG system 120 generates the metadata 15 relating to each of the sub-frames 402, 404 and 406. The metadata 15 that the sub-frame metadata generation system 120 generates in relation to the sub-frames 402, 404 and 406 enables the corresponding target video displays to produce corresponding displays on their video displays. In the embodiment shown in Fig. 4, the SMG system 120 includes a single video display 400 on which each of the sub-frames 402, 404 and 406 is displayed. In another embodiment, each of the plurality of sub-frames generated by the video processing system is independently displayed on a corresponding target video player.
In the embodiment shown in Fig. 4, at least two sub-frames 404 and 406 of the sub-frame group correspond to a single frame of the sequence of original video frames. Thus, for example, on a particular target video player, the sub-frames 404 and 406 and the associated video information they contain are displayed at different times on that single target video player. In the embodiment shown in Fig. 4, a first portion of the video displayed by the target video player shows the dog chasing the ball contained in the sub-frame 404, and a second portion of the video displayed by the target video player shows the bouncing ball depicted in the sub-frame 406. Thus, in the present embodiment, video sequences that are adjacent in time on the target video player are generated from a single sequence of original video frames.
In addition, in the embodiment shown in Fig. 4, at least two sub-frames of the sub-frame group contain an object whose spatial position varies over the sequence of original video frames. In these frames, the spatial position of the sub-frame 404 showing the dog may vary, over the sequence of original video frames, relative to the sub-frame 406 showing the bouncing ball. Further, in the embodiment shown in Fig. 4, two sub-frames of the sub-frame group may correspond to at least two different frames of the sequence of original video frames. In this embodiment, the sub-frames 404 and 406 may correspond to different frames of the sequence of original video frames displayed on the video display 400. In this embodiment, during a first time period the sub-frame 404 is selected to display an image of the dog over a period of time, while the sub-frame 406 corresponds to a different time period and is used to display the bouncing ball. In this embodiment, at least a portion of the sub-frame group 404 and 406 corresponds to a sub-scene of the scene depicted by the sequence of original video frames. The sequence may be displayed across the entire display 400 or within the sub-frame 402.
Fig. 5 is a structural diagram of one embodiment of a video processing system providing a graphical user interface (GUI) that includes video editing tools for creating sub-frames. Displayed on the video processing display 502 are a current frame 504 and its sub-frame 506. The sub-frame 506 contains the video data within a region of interest specified by the user. Once the sub-frame 506 has been specified, the user may edit the sub-frame 506 using one or more video editing tools offered to the user via the GUI 508. For example, as shown in Fig. 5, by clicking or selecting one of the editing tools in the GUI 508, the user may apply filters, color correction, overlays or other editing tools to the sub-frame 506. In addition, the GUI 508 may also allow the user to move between original frames and/or sub-frames in order to view and compare the original frame sequence with the sub-frame sequence.
Fig. 6 is a schematic diagram of exemplary original video frames and corresponding sub-frames. In Fig. 6, a first scene 602 is depicted by a first sequence 604 of original video frames 606 and a second scene 608 is depicted by a second sequence 610 of original video frames 606. Thus, each scene 602 and 608 comprises a respective sequence 604 and 610 of original video frames 606 and is viewed by sequentially displaying each of the original video frames 606 of the respective sequence 604 and 610 of original video frames 606.
To display each scene 602 and 608 on a small video display without reducing the video quality perceived by the viewer, however, each scene 602 and 608 may be divided into sub-scenes that are displayed separately. For example, as shown in Fig. 6, within the first scene 602 there are two sub-scenes 612 and 614, and within the second scene 608 there is one sub-scene 616. Just as each scene 602 and 608 may be viewed by sequentially displaying its respective sequence 604 and 610 of original video frames 606, each sub-scene 612, 614 and 616 may be viewed by displaying a respective sequence of sub-frames 618 (618a, 618b and 618c).
For example, for the first frame 606a of the first sequence 604 of original video frames, the user may specify two sub-frames 618a and 618b, each containing video data representing a different one of the sub-scenes 612 and 614. Assuming the sub-scenes 612 and 614 persist throughout the first sequence 604 of original video frames 606, the user may further specify the two sub-frames 618a and 618b, one for each of the sub-scenes 612 and 614, in each subsequent original video frame 606a of the first sequence 604 of original video frames 606. The result is a first sequence 620 of sub-frames 618a, in which each sub-frame 618a contains video content representing the sub-scene 612, and a second sequence 630 of sub-frames 618b, in which each sub-frame 618b contains video content representing the sub-scene 614. Each sequence 620 and 630 of sub-frames 618a and 618b may be displayed sequentially. For example, the sub-frames 618a corresponding to the first sub-scene 612 and the sub-frames 618b of the sequence 630 corresponding to the second sub-scene 614 may be displayed in an interleaved order. In this way, the film preserves the logical flow of the scene 602 while allowing the viewer to perceive the small details of the scene 602. A code sketch of this construction is given below.
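The sketch below illustrates how the two sub-frame sequences can be cut from the same original frames and interleaved into a display-order sequence. It is only an illustration under stated assumptions: the frames are treated as NumPy-style pixel arrays, and the region coordinates and frame count are invented for the example:

```python
import numpy as np

def build_sub_frame_sequence(original_frames, x, y, w, h):
    """Extract one sub-frame per original frame for a fixed region of interest."""
    return [frame[y:y + h, x:x + w] for frame in original_frames]

# Stand-in for the first sequence 604 of original frames (e.g. 1920x1080 RGB frames).
first_sequence_604 = [np.zeros((1080, 1920, 3), dtype=np.uint8) for _ in range(6)]

# Two sub-frame sequences (620 and 630), one for each sub-scene (612 and 614).
seq_620 = build_sub_frame_sequence(first_sequence_604, x=40,   y=100, w=320, h=240)
seq_630 = build_sub_frame_sequence(first_sequence_604, x=1400, y=600, w=320, h=240)

# Interleave the two sequences to form a third, display-order sequence.
display_sequence = [sf for pair in zip(seq_620, seq_630) for sf in pair]
```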
Similarly, for the first frame 606b of the second sequence 610 of original video frames 606, the user may specify a sub-frame 618c corresponding to the sub-scene 616. Again assuming that the sub-scene 616 persists throughout the second sequence 610 of original video frames 606, the user further specifies the sub-frame 618c containing the sub-scene 616 in each subsequent original video frame 606 of the second sequence 610 of original video frames 606. The result is a sequence 640 of sub-frames 618c, in which each sub-frame 618c contains video content representing the sub-scene 616.
Fig. 7 is a table of exemplary sub-frame metadata for a sequence of sub-frames. The sub-frame metadata 150 shown in Fig. 7 contains sequencing metadata 700 that indicates the order (i.e., the display order) of the sub-frames. For example, the sequencing metadata 700 may identify a sequence of sub-scenes and the sequence of sub-frames for each sub-scene. Using the example shown in Fig. 7, the sequencing metadata 700 may be divided into groups 720 of sub-frame metadata 150, each group 720 corresponding to a specific sub-scene.
For example, in the first group 720, the sequencing metadata 700 begins with the first sub-frame (e.g., sub-frame 618a) of the first sequence of sub-frames (e.g., sequence 620), followed by each of the other sub-frames of the first sequence 620. In Fig. 7, the first sub-frame of the first sequence is labeled sub-frame A of original video frame A, and the last sub-frame of the first sequence is labeled sub-frame F of original video frame F. After the last sub-frame of the first sequence 620, the sequencing metadata 700 continues with the second group 720, which begins with the first sub-frame (e.g., sub-frame 618b) of the second sequence of sub-frames (e.g., sequence 630) and ends with the last sub-frame of the second sequence 630. In Fig. 7, the first sub-frame of the second sequence is labeled sub-frame G of original video frame A, and the last sub-frame of the second sequence is labeled sub-frame L of original video frame F. The last group 720 begins with the first sub-frame (e.g., sub-frame 618c) of the third sequence of sub-frames (e.g., sequence 640) and ends with the last sub-frame of the third sequence 640. In Fig. 7, the first sub-frame of the third sequence is labeled sub-frame M of original video frame G, and the last sub-frame of the third sequence is labeled sub-frame P of original video frame I.
Each group 720 contains the sub-frame metadata for each individual sub-frame in that group 720. For example, the first group 720 contains the sub-frame metadata 150 for each sub-frame of the first sequence 620 of sub-frames. In an exemplary embodiment, the sub-frame metadata 150 may be organized as a metadata text file containing a number of entries 710. Each entry 710 in the metadata text file contains the sub-frame metadata 150 for that specific sub-frame. Thus, each entry 710 in the metadata text file includes a sub-frame identifier that identifies the specific sub-frame associated with the metadata and references one of the frames in the sequence of original video frames.
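The organization of Fig. 7 can be pictured as an ordered list of entries, each naming the sub-frame it describes and the original frame it references. The field names in this sketch are guesses chosen for readability, not the literal fields of the metadata file:

```python
# Sequencing metadata 700 for the first group 720 (sub-frame sequence 620).
# Each entry 710 ties a sub-frame identifier to the original frame it is cut from.
sequencing_metadata = [
    {"sf_id": "A", "of_id": "original_frame_A"},   # first sub-frame of sequence 620
    {"sf_id": "B", "of_id": "original_frame_B"},
    # ... entries C through E ...
    {"sf_id": "F", "of_id": "original_frame_F"},   # last sub-frame of sequence 620
]

# The display order of the group is simply the order of its entries.
display_order = [entry["sf_id"] for entry in sequencing_metadata]
```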
The editing information includes, but is not limited to, pan direction and pan rate, zoom rate, contrast adjustment, brightness adjustment, filter parameters and video effect parameters. More specifically, several types of editing information may be applied in association with a sub-frame, relating to: a) visual adjustments, for example brightness, filtering, video effects, contrast and color adjustments; b) motion information, for example panning, acceleration, velocity and the direction of movement of the sub-frame over the original frame sequence; c) resizing information, for example the zooming (including zooming in, zooming out and scaling) of the sub-frame over the original frame sequence; and d) supplemental media of any type (for example overlaid text or graphics, or additional audio) that is associated with, merged with, or overlaid on those portions of the original video data that fall within the sub-frame.
Fig. 8 is a table of exemplary sub-frame metadata including editing information for a sub-frame. The sub-frame metadata includes a metadata header 802. The metadata header 802 includes metadata (MD) parameters, digital rights management (DRM) parameters and billing management parameters. The metadata parameters include information about the metadata itself, such as the creation date, expiration date, creator identification, target video device category (or categories), target video device class (or classes), source video information and other information that relates generally to all of the metadata. The digital rights management portion of the metadata header 802 includes information used to determine whether the sub-frame metadata is available and to what extent the sub-frame metadata may be used. The billing management parameters of the metadata header 802 include information used to initiate billing operations when the metadata is put to use.
The sub-frame metadata is contained in sub-frame text entries 804. The sub-frame metadata 150 for each sub-frame includes general sub-frame information 806, for example the sub-frame identifier (SF ID) assigned to the sub-frame, information associated with the original video frame from which the sub-frame is extracted (OF ID, OF count, playback offset), the sub-frame position and size (SF position, SF size), and the aspect ratio (SF ratio) of the display on which the sub-frame is to be shown. Further, as shown in Fig. 8, the sub-frame information 804 for a specific sub-frame may include editing information 806 used to edit that sub-frame. Examples of the editing information 806 shown in Fig. 8 include pan direction and pan rate, zooming, color adjustment, filter parameters, supplemental images or video sequences, and other video effects and related parameters.
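Taken together, the header 802 and the per-sub-frame entries 804 suggest a simple two-level structure. The sketch below mirrors that structure with Python dataclasses; every field name is chosen for readability and is an assumption for illustration, not the literal naming of the metadata file:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MetadataHeader:                 # header 802
    creation_date: str
    expiration_date: str
    creator_id: str
    target_device_classes: List[str]
    drm_params: Dict[str, str]        # whether/how the sub-frame metadata may be used
    billing_params: Dict[str, str]    # information used to initiate billing

@dataclass
class SubFrameEntry:                  # one entry 804/806 per sub-frame
    sf_id: str                        # sub-frame identifier
    of_id: str                        # original frame the sub-frame is extracted from
    playback_offset: float            # position of that frame in the source video
    sf_position: tuple                # (x, y) of the sub-frame within the original frame
    sf_size: tuple                    # (width, height)
    sf_ratio: str                     # aspect ratio of the intended display
    edits: Dict[str, object] = field(default_factory=dict)  # pan, zoom, color, filters

@dataclass
class SubFrameMetadataFile:
    header: MetadataHeader
    entries: List[SubFrameEntry]
```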
Fig. 9 is a schematic diagram of a video processing circuit according to one embodiment of the present invention. The video processing circuit 900 supports the SMG and AVP systems of the present invention described above with reference to Figs. 1-8. The video processing circuit 900 comprises a processing circuit 910 and local storage 930, which together store and execute software instructions and process data. The processing circuit 910 may be a microprocessor, a digital signal processor, an application-specific integrated circuit, or another type of circuit capable of processing data and performing software operations. The local storage 930 is one or more of random access memory, read-only memory, a hard disk drive, an optical disc drive and/or other memory capable of storing data and software programs.
The video processing circuit 900 further comprises a display interface 920, one or more user interfaces 917, one or more output interfaces 980, and a camera interface 990. When implementing the SMG system, the video processing circuit 900 includes a camera and/or a camera interface. The video processing circuit 900 receives a sequence of full frames of video data. If the video processing circuit 900 includes a camera, the sequence of full frames of video data is captured by that camera. The sequence of full frames of video data is stored in the local storage 930 as original video frames 115. The display interface 920 couples to one or more displays that are served directly by the video processing circuit 900. The user input interface 917 couples to one or more user input devices, for example a keyboard, a mouse or another user input device. The communication interface 980 may couple to a data network, a DVD writer, or another communication link over which information can be sent to and read from the video processing circuit 900.
The local storage 930 stores an operating system 940 that is executable by the processing circuit 910. Likewise, the local storage 930 stores software instructions 950 for implementing the SMG functions and/or the AVP functions. After the processing circuit 910 has executed the SMG and/or AVP software instructions 950, the video processing circuit 900 can perform the SMG and/or AVP operations.
The video processing circuit 900 may also store the sub-frame metadata 150 during or after its generation. When the video processing circuit 900 implements the SMG system, the video processing circuit 900 creates the metadata 15 and stores it in local storage as sub-frame metadata 150. When the video processing circuit 900 implements the AVP system, the video processing circuit 900 may receive the sub-frame metadata 15 via the communication interface 980 and use it to process the source video 11, which is likewise received via the communication interface 980. The local storage 930 also stores software instructions 960 that, when executed, enable the video processing circuit 900 to perform encoder and/or decoder operations. The manner in which the video processing circuit 900 carries out the SMG and/or AVP systems is described with reference to Figs. 1-8 and Figs. 10-15.
Referring now to Figs. 1, 3, 4 and 9, in one particular operation the processing circuit 910 applies decoding and sub-frame processing operations to the encoded video 14, generating a first sequence of sub-frames of video data and a second sequence of sub-frames of video data. The first sequence of sub-frames of video data corresponds to a region within the sequence of full frames of video data that is different from that of the second sequence of sub-frames of video data. Further, the processing circuit 910 combines the first sequence of sub-frames of video data and the second sequence of sub-frames of video data to generate a third sequence of sub-frames of video data.
The processing circuit 910 may encode the third sequence of sub-frames of video data. The processing circuit 910 may apply the decoding and sub-frame processing operations sequentially and/or concurrently. The processing circuit may perform the sub-frame processing according to the sub-frame metadata 15. The processing circuit 910 may tailor the sub-frame metadata based on characteristics of a target display device before performing the sub-frame processing. The processing circuit 910 may also tailor the third sequence of sub-frames of video data based on characteristics of a target video device.
According to another operation, the processing circuit 910 applies sub-frame processing operations to the video, generating a first sequence of sub-frames of video data and a second sequence of sub-frames of video data. The first sequence of sub-frames of video data is defined by at least a first parameter, and the second sequence of sub-frames of video data is defined by at least a second parameter. The at least first parameter and the at least second parameter together constitute metadata. The processing circuit 910 receives the metadata, carries out the sub-frame processing, and combines the first sequence of sub-frames of video data with the second sequence of sub-frames of video data to generate a third sequence of sub-frames of video data. The third sequence of sub-frames of video data is delivered to a target display for display. Before carrying out the sub-frame processing, the processing circuit 910 may first tailor the metadata. The processing circuit 910 may also modify the third sequence of sub-frames of video data for display on the target display.
Figure 10 is a schematic diagram of the structure and operation of an adaptive video processing circuit according to one embodiment of the present invention. One particular implementation of the adaptive processing circuit 1000 is shown. The adaptive processing circuit 1000 comprises a decoder 1002, a metadata processing circuit 1004, a metadata tailoring circuit 1006 and a management circuit 1008. The adaptive processing circuit 1000 may also comprise a target display tailoring circuit 1010 and an encoder 1012. The adaptive processing circuit 1000 receives raw source video 16, encoded source video 14, similar-display metadata 16 and/or target display information 20.
The decoder 1002 of the adaptive processing circuit 1000 receives the encoded source video 14 and decodes it to produce raw video. Alternatively, the raw source video 16 received by the adaptive processing circuit is provided directly to the adaptive processing circuit 1000 as raw video. The metadata tailoring circuit 1006 receives the similar-display metadata 16, and the management circuit receives the target display information 20.
In operation, the metadata processing circuit 1004 operates on the raw video and the metadata 15 to generate output that is delivered to the target display tailoring circuit 1010. The metadata tailoring circuit 1006 receives the similar-display metadata 16 and, based on data received from the management circuit 1008, generates the tailored metadata 32. The management circuit 1008 receives the target display information 20 and generates outputs that are delivered to one or more of the metadata tailoring circuit 1006, the decoder 1002, the metadata processing circuit 1004 and the target display tailoring circuit 1010. The metadata processing circuit 1004 processes the raw video based on the tailored metadata 32 received from the metadata tailoring circuit 1006 and generates output that is further tailored by the target display tailoring circuit 1010 to generate the target display video 36. The target display video 36 may be encoded by the encoder 1012 to produce the encoded target display video 34.
Each component of the adaptive processing circuit 1000 of Figure 10 may perform its operations based on any or all of the inputs it receives. For example, the decoder 1002 may tailor its operation based on information received from the management circuit 1008 when decoding the encoded source video 14, and this operation may be based on the target display information 20. Further, the metadata tailoring circuit 1006 may modify the similar-display metadata 16 based on information received from the management circuit 1008 to generate the tailored metadata 32. The information that the metadata tailoring circuit 1006 receives from the management circuit 1008 is based on the target display information 20. The similar-display metadata 16 may correspond to a group or class of target displays having similar characteristics. The adaptive processing circuit 1000, however, generates tailored metadata 32 corresponding to a specific target display. Thus, the metadata tailoring circuit 1006 modifies the similar-display metadata 16, based on the target display information 20 and the related information generated by the management circuit 1008, to generate the tailored metadata 32.
The metadata processing circuit 1004 may modify the raw video based on the similar-display metadata 16 to generate display video. Alternatively, the metadata processing circuit 1004 may process the raw video based on the tailored metadata 32 to generate its output. However, the metadata processing circuit 1004 may not generate the display video in its final form. Accordingly, the target display tailoring circuit 1010 may use additional information provided to it by the management circuit 1008 (based on the target display information 20) to further tailor the display video and generate the target display video 36. The tailoring performed by the target display tailoring circuit 1010 is also reflected in the encoded target display video 34 generated by the encoder 1012.
Figure 11 is a functional block diagram of a first specific embodiment of an adaptive video processing circuit according to the present invention. In this embodiment, the decoder 1102 receives the encoded source video 12 and generates unencoded video 1104. The metadata processing circuit 1106 receives the unencoded video 1104 or the raw source video 14. Based on the target-display metadata 18, the metadata processing circuit 1106 processes the raw source video 14 and/or the unencoded video 1104 to generate output video data. The metadata processing circuit 1106 may also receive input from a target-display metadata tailoring circuit 1112. The target-display metadata tailoring circuit 1112 receives the similar-display metadata 16 and the target display information 20. Based on the similar-display metadata 16 and the target display information 20, the target-display metadata tailoring circuit 1112 generates the tailored metadata 32. Thus, the metadata processing circuit 1106 uses the target-display metadata 18 and/or the tailored metadata 32 to process its input video and generate its output.
For the target display screen of target video player, the processing that the output of metadata treatment circuit 1106 is accepted may be also insufficient.Therefore, the supplementary target display screen is cut out the output that circuit 1108 receives metadata treatment circuit 1106, and based target display information 20 is further handled input video, generates target display screen video 36.Target display screen video 36 is to cut out according to the target display screen of target video player specially.Encoder 1110 is also cut out circuit 1108 from the supplementary target display screen and is received output, and output is encoded, and generates coding target display screen video 34.Coding target display screen video 34 is that the form of following the receivable video data of target video player is encoded.Target video player received code target video 34 is based on this video 34 display video picture on its display screen.
Figure 12 is a functional block diagram of a second embodiment of the adaptive video processing circuit according to the present invention. Compared with the structure of Figure 11, the integrated decoding and metadata processing circuit 1202 receives the encoded source video 12, the original source video 14, the target display metadata 18, and the tailored metadata 32 from the target display metadata tailoring circuit 1208. The target display metadata tailoring circuit 1208 generates the tailored metadata 32 based on the similar display metadata 16 and the target display information 20.
The integrated decoding and metadata processing circuit 1202 processes its inputs and generates a display video as its output. At any given time, not all of the inputs to the integrated decoding and metadata processing circuit 1202 need be present. For example, if the encoded source video 12 is present, the integrated decoding and metadata processing circuit 1202 decodes the encoded source video 12, then processes the unencoded source video using the target display metadata 18 and/or the tailored metadata 32 to generate a video output. Naturally, when the integrated decoding and metadata processing circuit 1202 receives the original source video 14, it need not decode the original source video 14 before performing its metadata processing operations.
The output of the integrated decoding and metadata processing circuit 1202 is received by the supplemental target tailoring circuit 1204. The supplemental target tailoring circuit 1204 also receives the target display information 20. Based on the target display information 20, the supplemental target tailoring circuit 1204 processes the video data received from the integrated decoding and metadata processing circuit 1202 and generates the target display video 36. Alternatively, the supplemental target tailoring circuit 1204 generates an output that is sent to the encoder 1206, which encodes the input data and generates the encoded target display video 34. Both the target display video 36 and the encoded target display video 34 are tailored specifically to the particular target display of the target video player.
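The input-dependent behavior attributed to circuit 1202 above amounts to a conditional decode ahead of a common metadata-driven processing stage. A minimal sketch follows, assuming a toy decode() stand-in rather than a real video decoder; the parameter names are invented for this example.

```python
# Sketch of the conditional-decode behavior described for circuit 1202.
# decode() merely unwraps frames here; processor is whatever metadata-driven
# sub-frame processing is in use (for example, process_with_metadata above).

def decode(encoded_source):
    return encoded_source["frames"]  # assumption: the encoded input wraps its raw frames

def integrated_decode_and_process(encoded_source, raw_source, metadata, processor):
    if encoded_source is not None:
        frames = decode(encoded_source)   # encoded source present: decode first
    elif raw_source is not None:
        frames = raw_source               # raw source present: no decoding needed
    else:
        raise ValueError("no video input present")
    return processor(frames, metadata)    # same metadata processing in either case
```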
Figure 13 is a functional block diagram of a third embodiment of the adaptive video processing circuit according to the present invention. In the structure of Figure 13, the integrated decoding, target tailoring and metadata processing circuit 1302 receives the encoded source video 12, the original source video 14, the target display metadata 18, the similar display metadata 16 and the target display information 20. Based on which of the received signals are valid and present, the integrated decoding, target tailoring and metadata processing circuit 1302 performs one or more of a decoding operation, a target tailoring operation and a metadata processing operation, and generates video data sent to the supplemental target tailoring circuit 1304 and/or the tailored metadata 32.
The supplemental target tailoring circuit 1304 receives the output of the integrated decoding, target tailoring and metadata processing circuit 1302 as well as the target display information 20. Based on its input data, the supplemental target tailoring circuit 1304 generates the target display video 36 and/or an output sent to the encoder 1306. The encoder 1306 receives the input data from the supplemental target tailoring circuit 1304 and generates the encoded target display video 34.
Figure 14 is a functional block diagram of a fourth embodiment of the adaptive video processing circuit according to the present invention. In the embodiment of Figure 14, the encoded source video 12 is sent to the decoder 1402 for decoding, generating unencoded video 1104. The integrated target tailoring and metadata processing circuit 1404 receives the unencoded video 1104, the original source video 14, the target display metadata 18, the similar display metadata 16 and the target display information 20. Based on its inputs and its particular operating mode, the integrated target tailoring and metadata processing circuit 1404 generates an output sent to the supplemental target tailoring circuit 1406 and may also generate the tailored metadata 32.
The supplemental target tailoring circuit 1406 receives the output of the integrated target tailoring and metadata processing circuit 1404 as its input and also receives the target display information 20. As its output, the supplemental target tailoring circuit 1406 generates the target display video 36, which is sent to the encoder 1408. The encoder 1408 encodes its input and generates the encoded target display video 34. The target display video 36 and the encoded target display video 34 are tailored specifically to the selected target video player that includes the target video display.
Each of the structures of Figures 11-14 may be implemented by the adaptive video processing circuit 1000 of Figure 10. In addition, the structure and operation of the adaptive video processing circuit 1000 of Figure 10 and the embodiments of Figures 11-14 may also be carried out by one or more of the devices of Figure 2 that have adaptive video processing capability. Thus, the various operations of Figures 11-14 may be performed by one, two or more particular processing components/devices. The manner in which these processing operations are distributed across one, two or more processing units/devices may be selected based on processing efficiency, resource location, data location, user location, service provider location or other resource considerations.
Figure 15 is a flow chart of a video processing procedure according to one embodiment of the present invention. According to the present invention, the operation 1500 of the video processing circuit begins with receiving video data (step 1510). When the received video data is in an encoded format, the video processing circuit decodes the video data (step 1512). The video processing circuit then receives metadata (step 1514). This metadata may be the normal metadata described herein, similar metadata, or tailored metadata. When similar metadata or normal metadata is received, the operation of Figure 15 includes tailoring the metadata based on the target display information (step 1516). Step 1516 is optional.
Next, the operation of Figure 15 includes performing sub-frame processing on the video data based on the metadata (step 1518). Subsequent operation includes tailoring, based on the target display information 20, the output sequence of video data sub-frames generated in step 1518 (step 1520). The operation of step 1520 produces a tailored output sequence of video data sub-frames. This output sequence of video data sub-frames may then optionally be encoded (step 1522). Finally, the sequence of video data sub-frames is output for storage in memory, output to a target device via a network, or otherwise output to another location (step 1524).
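Read as a procedure, steps 1510-1524 order themselves naturally. The sketch below only maps that ordering: every helper is a trivial stand-in (not real decoding, tailoring or encoding), and the parameter and field names are invented for the example.

```python
# Illustrative mapping of Figure 15's steps onto a single procedure.
# Stand-in helpers only; just the step ordering is meant to be informative.

def decode(v):                      # step 1512 stand-in
    return v
def tailor_metadata(md, info):      # step 1516 stand-in
    return {**md, "tailored_for": info}
def subframe_process(v, md):        # step 1518 stand-in
    return [(frame, md) for frame in v]
def tailor_for_display(sf, info):   # step 1520 stand-in
    return sf
def encode(sf):                     # step 1522 stand-in
    return sf

def video_processing_flow(video_in, metadata, target_display_info,
                          is_encoded=False, encode_output=False):
    video = decode(video_in) if is_encoded else video_in             # steps 1510/1512
    if metadata.get("kind") in ("normal", "similar"):                # steps 1514/1516 (optional)
        metadata = tailor_metadata(metadata, target_display_info)
    subframes = subframe_process(video, metadata)                    # step 1518
    subframes = tailor_for_display(subframes, target_display_info)   # step 1520
    if encode_output:
        subframes = encode(subframes)                                # step 1522 (optional)
    return subframes                                                 # step 1524: output
```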
According to one specific embodiment illustrated in Figure 15, the video processing system receives video data representing a sequence of full frames of video data. The video processing system then performs sub-frame processing on the video data to generate a first sequence of sub-frames of video data and a second sequence of sub-frames of video data. The first sequence of sub-frames of video data is defined by at least a first parameter, the second sequence of sub-frames of video data is defined by at least a second parameter, and the at least first parameter and the at least second parameter together form the metadata. By merging the first sequence of sub-frames of video data and the second sequence of sub-frames of video data, the video processing system then generates a third sequence of sub-frames of video data.
In this embodiment, the first sequence of sub-frames of video data may correspond to a first region within the sequence of full frames of video data, and the second sequence of sub-frames of video data may correspond to a second region within the sequence of full frames of video data, where the first region is different from the second region.
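For illustration, a first and a second sub-frame sequence drawn from two different regions of the full-frame sequence can be merged into a third sequence as sketched below. The region coordinates, the crop helper and the frame-by-frame interleaving used for the merge are assumptions of this example; the embodiment does not prescribe a particular merge order.

```python
# Hypothetical illustration of generating the first, second and third
# sub-frame sequences from a full-frame sequence.
from typing import Dict, List

Frame = List[List[int]]

def crop(frame: Frame, region: Dict[str, int]) -> Frame:
    return [row[region["x"]:region["x"] + region["w"]]
            for row in frame[region["y"]:region["y"] + region["h"]]]

def subframe_sequences(full_frames: List[Frame],
                       first_region: Dict[str, int],
                       second_region: Dict[str, int]):
    first_seq = [crop(f, first_region) for f in full_frames]    # defined by the first parameter
    second_seq = [crop(f, second_region) for f in full_frames]  # defined by the second parameter
    # Merge policy assumed here: interleave the two sequences frame by frame.
    third_seq = [sf for pair in zip(first_seq, second_seq) for sf in pair]
    return first_seq, second_seq, third_seq
```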
Persons skilled in the art will appreciate that the term "communicatively coupled," as used herein, includes wireless and wired, direct coupling and indirect coupling of elements, components, circuits or modules via other elements, components, circuits or modules. Persons skilled in the art will also appreciate that inferred coupling (i.e., where one element is coupled to another element by inference) includes wireless and wired, direct and indirect coupling between two elements in the same manner as "communicatively coupled."
The present invention has been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been defined herein for convenience of description. Alternate boundaries and sequences may be defined, however, so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are within the scope of the claimed invention.
The present invention has also been described above with the aid of functional building blocks illustrating the performance of certain significant functions. The boundaries of these functional building blocks have been defined herein for convenience of description; alternate boundaries may be defined so long as the significant functions are appropriately performed. Similarly, flow chart blocks have been defined herein to illustrate certain significant functionality, and the boundaries and sequence of the flow chart blocks may be defined otherwise as long as the significant functionality is still performed. Such alternate definitions of functional building blocks, flow chart blocks and sequences are to be regarded as within the scope of the claims.
Those skilled in the art will also recognize that the functional building blocks and other illustrative blocks, modules and components described herein may be implemented as illustrated, or by discrete components, application-specific integrated circuits, processors executing appropriate software, and similar devices, or any combination thereof.
Moreover, although the foregoing embodiments have been described in detail for purposes of clarity and understanding, the present invention is not limited to these embodiments. Any changes, equivalent substitutions and alternative technical solutions to these features and embodiments that are apparent to those skilled in the art fall within the scope of protection of the present invention.
Claims (8)
1. A video circuit for receiving encoded video, the encoded video representing a sequence of full frames of video data, characterized in that the video circuit comprises:
a processing circuit for applying decoding and sub-frame processing to the encoded video to generate a first sequence of sub-frames of video data and a second sequence of sub-frames of video data, wherein metadata used to adapt the sequence of full frames of video data is tailored based on characteristics of a target display device, and the sub-frame processing is based on the tailored metadata;
wherein the first sequence of sub-frames of video data corresponds to a region within the sequence of full frames of video data that is different from that of the second sequence of sub-frames of video data;
wherein the processing circuit merges the first sequence of sub-frames of video data and the second sequence of sub-frames of video data to generate a third sequence of sub-frames of video data, and tailors the third sequence of sub-frames of video data according to the characteristics of the target display device.
2. The video circuit according to claim 1, characterized in that the processing circuit encodes the third sequence of sub-frames of video data.
3. The video circuit according to claim 1, characterized in that the processing circuit applies the decoding and the sub-frame processing sequentially.
4. The video circuit according to claim 1, characterized in that the processing circuit applies the decoding and the sub-frame processing simultaneously.
5. A video system for receiving video representing a sequence of full frames of video data, characterized in that the video system comprises:
a processing circuit for applying sub-frame processing to the video to generate a first sequence of sub-frames of video data and a second sequence of sub-frames of video data;
wherein the first sequence of sub-frames of video data is defined by at least a first parameter, the second sequence of sub-frames of video data is defined by at least a second parameter, and the at least first parameter and the at least second parameter together form metadata;
wherein the processing circuit receives the metadata, tailors the metadata based on characteristics of a target display device to adapt the sequence of full frames of video data, and performs the sub-frame processing on the video data based on the tailored metadata;
wherein the processing circuit merges the first sequence of sub-frames of video data and the second sequence of sub-frames of video data to generate a third sequence of sub-frames of video data.
6. The video system according to claim 5, characterized in that the processing circuit receives the metadata via a communication link.
7. A method for performing video processing, characterized in that it comprises:
receiving video data representing a sequence of full frames of video data;
tailoring metadata based on characteristics of a target display device to adapt the sequence of full frames of video data, and performing sub-frame processing on the video data based on the tailored metadata to generate a first sequence of sub-frames of video data and a second sequence of sub-frames of video data, the first sequence of sub-frames of video data being defined by at least a first parameter, the second sequence of sub-frames of video data being defined by at least a second parameter, and the at least first parameter and the at least second parameter together forming the metadata;
merging the first sequence of sub-frames of video data and the second sequence of sub-frames of video data to generate a third sequence of sub-frames of video data;
tailoring the third sequence of sub-frames of video data according to the characteristics of the target display device.
8. The method according to claim 7, characterized in that:
the first sequence of sub-frames of video data corresponds to a first region within the sequence of full frames of video data;
the second sequence of sub-frames of video data corresponds to a second region within the sequence of full frames of video data;
the first region is different from the second region.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/474,032 | 2006-06-23 | ||
US11/474,032 US20070268406A1 (en) | 2006-05-22 | 2006-06-23 | Video processing system that generates sub-frame metadata |
US11/491,051 US20080007649A1 (en) | 2006-06-23 | 2006-07-20 | Adaptive video processing using sub-frame metadata |
US11/491,051 | 2006-07-20 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101094407A CN101094407A (en) | 2007-12-26 |
CN101094407B true CN101094407B (en) | 2011-09-28 |
Family
ID=38992380
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200710126493 Expired - Fee Related CN101094407B (en) | 2006-06-23 | 2007-06-20 | Video circuit, video system and video processing method |
CN 200710128026 Active CN101106717B (en) | 2006-06-23 | 2007-06-21 | Video player circuit and video display method |
CN 200710128027 Pending CN101106704A (en) | 2006-06-23 | 2007-06-21 | Video camera, video processing system and method |
CN 200710128029 Pending CN101106684A (en) | 2006-06-23 | 2007-06-22 | Video processing device and method |
CN 200710128031 Expired - Fee Related CN101098479B (en) | 2006-06-23 | 2007-06-22 | Method and equipment for processing video data |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200710128026 Active CN101106717B (en) | 2006-06-23 | 2007-06-21 | Video player circuit and video display method |
CN 200710128027 Pending CN101106704A (en) | 2006-06-23 | 2007-06-21 | Video camera, video processing system and method |
CN 200710128029 Pending CN101106684A (en) | 2006-06-23 | 2007-06-22 | Video processing device and method |
CN 200710128031 Expired - Fee Related CN101098479B (en) | 2006-06-23 | 2007-06-22 | Method and equipment for processing video data |
Country Status (1)
Country | Link |
---|---|
CN (5) | CN101094407B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5332369B2 (en) * | 2008-07-18 | 2013-11-06 | ソニー株式会社 | Image processing apparatus, image processing method, and computer program |
JP5420381B2 (en) * | 2009-11-25 | 2014-02-19 | オリンパスイメージング株式会社 | Photography equipment and accessory equipment that can be attached to and detached from the photography equipment |
CN102891951B (en) * | 2011-07-22 | 2016-06-01 | 锋厚科技股份有限公司 | Signal of video signal transporter, reception device, transmission system and method thereof |
AU2019311295A1 (en) * | 2018-07-27 | 2021-01-21 | Appario Global Solutions (AGS) AG | Method and system for dynamic image content replacement in a video stream |
JP2022136571A (en) * | 2021-03-08 | 2022-09-21 | セイコーエプソン株式会社 | display system |
CN113990355A (en) * | 2021-09-18 | 2022-01-28 | 赛因芯微(北京)电子科技有限公司 | Audio program metadata and generation method, electronic device and storage medium |
CN113891105A (en) * | 2021-09-28 | 2022-01-04 | 广州繁星互娱信息科技有限公司 | Picture display method and device, storage medium and electronic equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010047517A1 (en) * | 2000-02-10 | 2001-11-29 | Charilaos Christopoulos | Method and apparatus for intelligent transcoding of multimedia data |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6647061B1 (en) * | 2000-06-09 | 2003-11-11 | General Instrument Corporation | Video size conversion and transcoding from MPEG-2 to MPEG-4 |
JP2005531971A (en) * | 2002-07-01 | 2005-10-20 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Video signal processing system |
CN100385956C (en) * | 2002-11-01 | 2008-04-30 | 诺基亚有限公司 | A method and device for transcoding images |
CN1595946A (en) * | 2004-06-23 | 2005-03-16 | 深圳市彩秀科技有限公司 | Method for transmitting website picture to handset |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010047517A1 (en) * | 2000-02-10 | 2001-11-29 | Charilaos Christopoulos | Method and apparatus for intelligent transcoding of multimedia data |
Non-Patent Citations (2)
Title |
---|
JP平3-135673A 1991.06.10 |
Same as above.
Also Published As
Publication number | Publication date |
---|---|
CN101106684A (en) | 2008-01-16 |
CN101106717B (en) | 2013-03-20 |
CN101106704A (en) | 2008-01-16 |
CN101098479B (en) | 2010-08-11 |
CN101098479A (en) | 2008-01-02 |
CN101106717A (en) | 2008-01-16 |
CN101094407A (en) | 2007-12-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR100909440B1 (en) | Sub-frame metadata distribution server | |
KR100904649B1 (en) | Adaptive video processing circuitry and player using sub-frame metadata | |
KR100906957B1 (en) | Adaptive video processing using sub-frame metadata | |
KR100912599B1 | Processing of removable media that stores full frame video & sub-frame metadata | |
TWI477143B (en) | Simultaneous video and sub-frame metadata capture system | |
CN101094407B (en) | Video circuit, video system and video processing method | |
KR100915367B1 (en) | Video processing system that generates sub-frame metadata | |
CN1372759A (en) | Transcoding for consumer set-top storage application | |
Van Tassel | Digital TV over broadband: Harvesting bandwidth | |
Alforova et al. | Impact of Digital Technologies on the Development of Modern Film Production and Television | |
CN100587793C (en) | Method for processing video frequency, circuit and system | |
Browne | High Definition Postproduction: Editing and Delivering HD Video | |
Ochiva | Entertainment technologies: past, present and future | |
Thompson | Investigations With Prototype Workflows and Specialist Cameras for Wider Target Platform Coverage Reduced Complexity and Universal Distribution | |
Thompson | Travails with My Camera: Investigations with Prototype Workflows and Specialist Cameras for Wider Target Platform Coverage, Reduced Complexity, and Universal Distribution | |
Dickson | Building an Ecosystem for 8K | |
Fair | The impact of digital technology upon the filmmaking production process | |
Danielsen | MPEG-4 for DTV |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 1115703; Country of ref document: HK
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: GR; Ref document number: 1115703; Country of ref document: HK
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20110928; Termination date: 20170620