CN106851168A - Video format recognition methods, device and player - Google Patents
Video format recognition methods, device and player
- Publication number
- CN106851168A CN106851168A CN201710101681.4A CN201710101681A CN106851168A CN 106851168 A CN106851168 A CN 106851168A CN 201710101681 A CN201710101681 A CN 201710101681A CN 106851168 A CN106851168 A CN 106851168A
- Authority
- CN
- China
- Prior art keywords
- sprite
- dividing mode
- video
- evaluation parameter
- mode
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0117—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/139—Format conversion, e.g. of frame-rate or size
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0127—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present invention provide a video format recognition method, device and player. The method includes: selecting one dividing mode from multiple groups of dividing modes as a first dividing mode, and dividing a to-be-detected picture of a to-be-identified video according to the first dividing mode to obtain a first sub-picture and a second sub-picture; calculating a first evaluation parameter corresponding to the first sub-picture and the second sub-picture; if the first evaluation parameter satisfies a first condition, the format of the to-be-identified video is the video format corresponding to the first dividing mode; otherwise, selecting a second dividing mode from the remaining modes, dividing according to the second dividing mode to obtain a third sub-picture and a fourth sub-picture, and calculating a second evaluation parameter; if the second evaluation parameter satisfies a second condition, the format of the to-be-identified video is the video format corresponding to the second dividing mode; otherwise, continuing to divide until the format of the to-be-identified video is determined. The method can distinguish and recognize all mainstream stereoscopic video formats.
Description
Technical field
The present invention relates to the field of video data processing, and in particular to a video format recognition method, device and player.
Background technology
Because stereoscopic videos differ in producer, content acquisition method, transmission and storage efficiency requirements, and supported stereoscopic display hardware, their frame pictures are spatially organized in a variety of ways. When an end user needs to play stereoscopic video sources of different formats, the arrangement cannot be predicted in advance, so the stereoscopic synthesis type must be switched manually during playback according to the picture features actually displayed, which interrupts the continuity of playback; alternatively, the files must be previewed one by one before playback and manually renamed in a way the player can distinguish, which adds considerable work when the number of videos is large. Therefore, automatically identifying the video format category from the stereoscopic video content, either by marking in advance or at playback time, can greatly improve the convenience of using a stereoscopic video player. However, existing methods that classify videos according to picture features have the following defect: the range of mainstream stereoscopic video formats they can distinguish and recognize is incomplete.
Summary of the invention
In view of this, the purpose of the embodiments of the present invention is to provide a video format recognition method, device and player to solve the above problems.
To achieve these goals, the technical solutions adopted by the embodiments of the present invention are as follows:
In a first aspect, the embodiments of the invention provide a video format recognition method. The method includes: selecting one dividing mode from multiple groups of dividing modes as a first dividing mode, and dividing the to-be-detected picture of the to-be-identified video according to the first dividing mode to obtain a first sub-picture and a second sub-picture, where each group of dividing modes corresponds to one video format; calculating a first evaluation parameter corresponding to the first sub-picture and the second sub-picture; if the first evaluation parameter satisfies a first condition, determining that the format of the to-be-identified video is the video format corresponding to the first dividing mode; if the first evaluation parameter does not satisfy the first condition, selecting one dividing mode from the remaining modes of the multiple groups as a second dividing mode, dividing the to-be-detected picture of the to-be-identified video according to the second dividing mode to obtain a third sub-picture and a fourth sub-picture, and calculating a second evaluation parameter corresponding to the third sub-picture and the fourth sub-picture; if the second evaluation parameter satisfies a second condition, determining that the format of the to-be-identified video is the video format corresponding to the second dividing mode; otherwise, continuing to select a dividing mode from the remaining modes and divide, until the format of the to-be-identified video is determined.
In a second aspect, the embodiments of the invention provide a video format identification device. The device includes: a first processing module, configured to select one dividing mode from multiple groups of dividing modes as a first dividing mode and divide the to-be-detected picture of the to-be-identified video according to the first dividing mode to obtain a first sub-picture and a second sub-picture, where each group of dividing modes corresponds to one video format; a computing module, configured to calculate a first evaluation parameter corresponding to the first sub-picture and the second sub-picture; a judging module, configured to determine, if the first evaluation parameter satisfies a first condition, that the format of the to-be-identified video is the video format corresponding to the first dividing mode; and a second processing module, configured to, if the first evaluation parameter does not satisfy the first condition, select one dividing mode from the remaining modes as a second dividing mode, divide the to-be-detected picture of the to-be-identified video according to the second dividing mode to obtain a third sub-picture and a fourth sub-picture, calculate a second evaluation parameter corresponding to the third sub-picture and the fourth sub-picture, determine, if the second evaluation parameter satisfies a second condition, that the format of the to-be-identified video is the video format corresponding to the second dividing mode, and otherwise continue to select a dividing mode from the remaining modes and divide, until the format of the to-be-identified video is determined.
In a third aspect, the embodiments of the invention provide a player. The player includes a memory and a processor, the memory being coupled to the processor and storing instructions which, when executed by the processor, cause the processor to perform the following operations: selecting one dividing mode from multiple groups of dividing modes as a first dividing mode, and dividing the to-be-detected picture of the to-be-identified video according to the first dividing mode to obtain a first sub-picture and a second sub-picture, where each group of dividing modes corresponds to one video format; calculating a first evaluation parameter corresponding to the first sub-picture and the second sub-picture; if the first evaluation parameter satisfies a first condition, determining that the format of the to-be-identified video is the video format corresponding to the first dividing mode; if the first evaluation parameter does not satisfy the first condition, selecting one dividing mode from the remaining modes as a second dividing mode, dividing the to-be-detected picture according to the second dividing mode to obtain a third sub-picture and a fourth sub-picture, and calculating a second evaluation parameter corresponding to the third sub-picture and the fourth sub-picture; if the second evaluation parameter satisfies a second condition, determining that the format of the to-be-identified video is the video format corresponding to the second dividing mode; otherwise, continuing to select a dividing mode from the remaining modes and divide, until the format of the to-be-identified video is determined.
Compared with the prior art, the video format recognition method, device and player provided by the embodiments of the present invention select one dividing mode from multiple groups of dividing modes as a first dividing mode, divide the to-be-detected picture of the to-be-identified video according to the first dividing mode to obtain a first sub-picture and a second sub-picture (each group of dividing modes corresponding to one video format), and calculate a first evaluation parameter corresponding to the first sub-picture and the second sub-picture. If the first evaluation parameter satisfies a first condition, the format of the to-be-identified video is the video format corresponding to the first dividing mode; if not, another dividing mode is selected from the remaining modes as a second dividing mode, the to-be-detected picture is divided according to the second dividing mode to obtain a third sub-picture and a fourth sub-picture, and a second evaluation parameter corresponding to the third sub-picture and the fourth sub-picture is calculated. If the second evaluation parameter satisfies a second condition, the format of the to-be-identified video is the video format corresponding to the second dividing mode; otherwise, dividing modes continue to be selected from the remaining modes until the format of the to-be-identified video is determined. By mapping each mainstream stereoscopic video format to a suitable dividing mode in advance, selecting one dividing mode at a time and identifying it with the corresponding evaluation parameter until the format of the to-be-identified video is determined, this approach can distinguish and recognize all mainstream stereoscopic video formats.
To make the above objects, features and advantages of the present invention more apparent and comprehensible, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed by the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the invention and should therefore not be regarded as limiting its scope. Those of ordinary skill in the art can also derive other related drawings from these drawings without creative effort.
Fig. 1 is a structural block diagram of the player provided by an embodiment of the present invention.
Fig. 2 is a flow chart of the video format recognition method provided by the first embodiment of the invention.
Fig. 3 is a schematic diagram of the video formats provided by the first embodiment of the invention.
Fig. 4 is a flow chart of one implementation of step S220 in the video format recognition method provided by the first embodiment of the invention.
Fig. 5 is a flow chart of step S330 in the video format recognition method provided by the first embodiment of the invention.
Fig. 6 is a flow chart of another implementation of step S220 in the video format recognition method provided by the first embodiment of the invention.
Fig. 7 is a flow chart of one part of step S420 in the video format recognition method provided by the first embodiment of the invention.
Fig. 8 is a flow chart of another part of step S420 in the video format recognition method provided by the first embodiment of the invention.
Fig. 9 is a structural block diagram of the video format identification device provided by the third embodiment of the invention.
Fig. 9 is a kind of structured flowchart of video format identifying device that third embodiment of the invention is provided.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of configurations. Therefore, the following detailed description of the embodiments of the invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. Meanwhile, in the description of the present invention, the terms "first", "second" and the like are used only to distinguish descriptions and shall not be understood as indicating or implying relative importance.
Fig. 1 shows a structural block diagram of a player 100 applicable to the embodiments of the present invention. As shown in Fig. 1, the player 100 includes a memory 102, a storage controller 104, one or more processors 106 (only one is shown in the figure), a peripheral interface 108, a radio-frequency module 110, an audio module 112, a touch screen 114, and so on. These components communicate with one another through one or more communication buses/signal lines 116.
The memory 102 can be used to store software programs and modules, such as the program instructions/modules corresponding to the video format recognition method and device in the embodiments of the present invention. The processor 106 executes various functional applications and data processing, such as the video format recognition method provided by the embodiments of the present invention, by running the software programs and modules stored in the memory 102. The memory 102 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory or other non-volatile solid-state memory. Access to the memory 102 by the processor 106 and other possible components can be carried out under the control of the storage controller 104.
The peripheral interface 108 couples various input/output devices to the processor 106 and the memory 102. In some embodiments, the peripheral interface 108, the processor 106 and the storage controller 104 can be implemented in a single chip. In some other embodiments, they can each be implemented by an independent chip.
The radio-frequency module 110 is used to receive and send electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices.
The audio module 112 provides an audio interface to the user and may include one or more microphones, one or more loudspeakers, and audio circuitry.
The touch screen 114 simultaneously provides an output and an input interface between the player 100 and the user. Specifically, the touch screen 114 displays video output to the user, and the content of this video output may include text, graphics, video, and any combination thereof.
It is understood that the structure shown in Fig. 1 is only illustrative; the player 100 may include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1. Each component shown in Fig. 1 can be implemented in hardware, software, or a combination thereof.
In the embodiments of the present invention, a client is installed in the player 100. The client can be third-party application software, such as a decoder or a player, that provides the user with the service of decoding or playing video.
Fig. 2 shows a flow chart of the video format recognition method provided by the first embodiment of the invention. Referring to Fig. 2, the method includes:
Step S210: selecting one dividing mode from multiple groups of dividing modes as a first dividing mode, and dividing the to-be-detected picture of the to-be-identified video according to the first dividing mode to obtain a first sub-picture and a second sub-picture, each group of dividing modes corresponding to one video format.
There are many ways to choose the to-be-detected picture. In one implementation, the first frame of the to-be-identified video can be taken as the to-be-detected picture. In another implementation, if the total number of frames of the to-be-identified video is nFrameNum, one or more of frames [0.1*nFrameNum], [0.3*nFrameNum], ..., [0.9*nFrameNum] can be taken as to-be-detected pictures for analysis, where the square brackets [] denote rounding. It is understood that one or more frames of the to-be-identified video can be taken as to-be-detected pictures, or all frames can be taken as to-be-detected pictures, with the final judgment determined from the judgments of all frames.
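The frame-sampling scheme above can be sketched as follows; the function name and the default set of fractions are illustrative choices, not taken from the patent:

```python
def candidate_frame_indices(n_frame_num, fractions=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Pick candidate to-be-detected frames at fixed fractions of the
    total frame count; int() implements the rounding-down denoted by
    the square brackets [] in the text. Duplicate indices that arise
    for very short videos are dropped while preserving order."""
    seen, indices = set(), []
    for fraction in fractions:
        idx = int(fraction * n_frame_num)
        if idx not in seen:
            seen.add(idx)
            indices.append(idx)
    return indices
```

For a 100-frame video this yields frames 10, 30, 50, 70 and 90; each can then be divided and evaluated, with the final judgment combined across frames.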
The first dividing mode can be implemented in various ways. Several groups of segmentation operators are described below; each group in turn includes different partition schemes, and it should be understood that any of these schemes can serve as the first dividing mode. In each scheme, the two sides of the slash are the segmentation rectangles producing the first sub-picture and the second sub-picture respectively (the correspondence of the first and second sub-pictures to the two sides of the slash is interchangeable). The numbers in the brackets are, in order, the normalized abscissa, ordinate, region width and region height (taking the width and height of the whole picture as units). Further, if the ordinates of the corresponding vertices of two segmentation sub-regions differ by α times the total height of the original picture while their abscissas, widths and heights are identical, the two sub-regions are said to have longitudinal translation symmetry with base α; if the abscissas of the corresponding vertices differ by α times the total width of the original picture while their ordinates, widths and heights are identical, the two sub-regions are said to have transverse translation symmetry with base α.
1. Segmentation operator 1:
(1) (0,0,0.5,1)/(0.5,0,0.5,1)
(2) (x, y, w, h)/(x+0.5, y, w, h)
2. Segmentation operator 2:
(1): (0,0,1/3,1/3)/(1/3,0,1/3,1/3)
(2): (0,0,1/3,1/3)/(2/3,0,1/3,1/3)
(3): (1/3,0,1/3,1/3)/(2/3,0,1/3,1/3)
(4): (0,3/8,1/3,7/24)/(1/3,3/8,1/3,7/24)
(5): (0,3/8,1/3,7/24)/(2/3,3/8,1/3,7/24)
(6): (1/3,3/8,1/3,7/24)/(2/3,3/8,1/3,7/24)
(7): (a random subset of the first sub-region in any of the above schemes)/(the corresponding subset of the second sub-region in the same segmentation operator, having transverse translation symmetry of base 1/3 or base 2/3 with it)
3. Segmentation operator 3:
(1): (0,1/3,1/3,1/3)/(0,2/3,1/3,1/3)
(2): other pairs of sub-regions with longitudinal translation symmetry of base 1/3 or base 2/3
4. Segmentation operator 4:
(1): (0,0,1,1/2)/(0,1/2,1,1/2)
(2): other pairs of sub-regions with longitudinal translation symmetry of base 1/2
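Under the normalized-coordinate convention above, each partition scheme is a pair of (x, y, w, h) rectangles expressed in whole-picture units. A minimal sketch of scaling such a pair to pixel bounds, shown only for schemes 1(1) and 4(1) (the constant and function names are illustrative):

```python
# Scheme 1(1): left and right halves; scheme 4(1): top and bottom halves.
OPERATOR_1_1 = ((0.0, 0.0, 0.5, 1.0), (0.5, 0.0, 0.5, 1.0))
OPERATOR_4_1 = ((0.0, 0.0, 1.0, 0.5), (0.0, 0.5, 1.0, 0.5))

def to_pixel_rect(norm_rect, frame_w, frame_h):
    """Scale a normalized (x, y, w, h) rectangle to pixel coordinates."""
    x, y, w, h = norm_rect
    return (int(x * frame_w), int(y * frame_h),
            int(w * frame_w), int(h * frame_h))

def split(frame_w, frame_h, operator):
    """Return the pixel rectangles of the first and second sub-pictures."""
    first, second = operator
    return (to_pixel_rect(first, frame_w, frame_h),
            to_pixel_rect(second, frame_w, frame_h))
```

For a 1920x1080 frame, scheme 1(1) yields the rectangles (0, 0, 960, 1080) and (960, 0, 960, 1080).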
Further, the video formats include the left-right format, the top-bottom format, multi-view formats (including the eight-view format and the nine-grid format), the RGB+depth format, and the 2D format.
For a 3D (three-dimensional) video, when the video frames are packed, the two pictures corresponding to the left and right eyes can be compressed into one 3D video frame picture. There are many ways to compress, for example, reducing the width of each eye's picture by half and then arranging the two pictures into one 3D video frame picture side by side. There are also many side-by-side arrangements, for example left-right or top-bottom. Referring to Fig. 3, for a 3D video frame with the left-eye and right-eye pictures placed side by side horizontally, the format of the 3D video is the left-right format (row 1, column 2 in Fig. 3); for a 3D video frame with the left-eye and right-eye pictures placed one above the other, the format is the top-bottom format (row 1, column 3 in Fig. 3). In particular, the RGB+depth format is a special left-right format in which one side is a color picture and the other is a black-and-white picture (row 2, column 1 in Fig. 3). Further, a multi-view video can be compressed and packed in the eight-view format (row 2, column 3 in Fig. 3) or the nine-grid format (row 2, column 2 in Fig. 3).
In one implementation, videos of the left-right format correspond to segmentation operator 1, multi-view formats correspond to segmentation operator 2, the nine-grid format corresponds to segmentation operator 3, the RGB+depth format corresponds to segmentation operator 1, and the 2D format corresponds to segmentation operator 4.
Step S220: calculating the first evaluation parameter corresponding to the first sub-picture and the second sub-picture.
In one implementation, referring to Fig. 4, step S220 includes:
Step S310: obtaining a first feature point set corresponding to the first sub-picture and a second feature point set corresponding to the second sub-picture.
A feature point of an image is both the location of a point and an indication that its local neighborhood has a certain pattern feature. Its definition may vary with the image matching algorithm; for example, it may be an extreme point, inflection point, edge point or intersection point of the image. Because the definition of a feature point may differ, the methods of extracting the feature points of the first sub-picture and the second sub-picture also differ, for example the SIFT algorithm, the SURF algorithm or the ORB algorithm, which are not described in detail here.
Step S320: obtaining, according to a feature point matching algorithm, the matched feature point pairs between the first feature point set and the second feature point set.
Further, in one implementation, after all matched feature point pairs between the first feature point set and the second feature point set are obtained, these pairs can be sorted and the subset with the highest matching degree taken, with the calculation then performed on the pairs in this subset. Of course, the pairs may also be left unsorted, or the subset with the second-highest matching degree (or other similar subsets) may be taken after sorting. The purpose of sorting and selecting a subset with higher matching degree (including the highest and second-highest) is to improve recognition accuracy (the judgment threshold can be raised correspondingly); the purpose of choosing the second-highest subset is to discard pairs whose elements are mere repetitions and to obtain samples that genuinely carry a parallax relationship. Further, the number of elements in the subset can be a fixed value, or a fixed proportion of the total number of matches obtained.
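The subset selection described above can be sketched as follows; representing each matched pair as a (distance, pair) tuple, with a smaller distance meaning a higher matching degree, is a hypothetical choice, not the patent's own data structure:

```python
def best_subset(matches, k=None, fraction=None):
    """Sort matched feature point pairs by ascending distance (best
    first) and keep either a fixed count k or a fixed proportion of
    the total, since the text allows both."""
    ranked = sorted(matches, key=lambda m: m[0])
    if k is not None:
        return ranked[:k]
    if fraction is not None:
        return ranked[:max(1, int(len(ranked) * fraction))]
    return ranked
```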
In another implementation, before feature point extraction and matching, if the resolution of the sub-regions obtained by segmentation is high, they can be interpolated and scaled to a suitable proportion to increase matching speed and reduce the influence of image noise, or the original resolution can be kept to enlarge the matched sample set.
Step S330: calculating, according to the matched feature point pairs, the first evaluation parameter of the first sub-picture and the second sub-picture.
Referring to Fig. 5, in one implementation each matched feature point pair includes a first feature point and a second feature point, and step S330 includes:
Step S331: calculating, for each matched feature point pair, the absolute value of the difference between the ordinates of the first feature point and the second feature point.
Step S332: adding the absolute values of the differences of all matched feature point pairs and dividing by the number of matched feature point pairs to obtain the first evaluation parameter.
Of course, the implementation of step S330 is not limited to this. It is also possible to calculate, for each matched feature point pair, the difference between the ordinates of the first feature point and the second feature point, add the squares of the differences of all matched feature point pairs, and divide by the number of matched feature point pairs to obtain the first evaluation parameter.
Further, if a fixed number of matched feature point pairs between the first feature point set and the second feature point set is used, normalization is unnecessary; if a fixed proportion is taken, normalization is required.
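Steps S331-S332 and the squared variant can be sketched as follows, assuming an upstream feature extractor and matcher (SIFT, SURF or ORB) has already produced the matched point pairs:

```python
def evaluation_parameter(pairs):
    """Steps S331-S332: mean absolute ordinate difference over all
    matched feature point pairs. Each pair is ((x1, y1), (x2, y2))."""
    if not pairs:
        raise ValueError("no matched feature point pairs")
    return sum(abs(p1[1] - p2[1]) for p1, p2 in pairs) / len(pairs)

def evaluation_parameter_squared(pairs):
    """The alternative form: mean squared ordinate difference."""
    if not pairs:
        raise ValueError("no matched feature point pairs")
    return sum((p1[1] - p2[1]) ** 2 for p1, p2 in pairs) / len(pairs)
```

A small value indicates that matched points sit at nearly the same height in the two sub-pictures, as expected for the two views of a correctly split stereoscopic pair.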
In another implementation, referring to Fig. 6, step S220 includes:
Step S410: obtaining multiple pixels of the first sub-picture as a first pixel point set, and obtaining multiple pixels of the second sub-picture as a second pixel point set.
Step S420: calculating a first average color difference corresponding to the first pixel point set and a second average color difference corresponding to the second pixel point set.
In one implementation, referring to Fig. 7, calculating the first average color difference corresponding to the first pixel point set includes:
Step S421: obtaining the gray values of the red, green and blue channels of each pixel in the first pixel point set.
Step S422: obtaining the grayscale difference degree of each pixel across the red, green and blue channels.
Step S423: summing the grayscale difference degrees of all pixels in the first pixel point set and dividing by the number of pixels to obtain the first average color difference corresponding to the first pixel point set.
In one implementation, the first average color difference can be calculated according to the following formula:
D1 = (1/n) * Σ(|R − G| + |G − B| + |B − R|)
where n is the number of sampled pixels and R, G, B are the gray values of the red, green and blue channels of each pixel.
In one implementation, referring to Fig. 8, calculating the second average color difference corresponding to the second pixel point set includes:
Step S424: obtaining the gray values of the red, green and blue channels of each pixel in the second pixel point set.
Step S425: obtaining the grayscale difference degree of each pixel across the red, green and blue channels.
Step S426: summing the grayscale difference degrees of all pixels in the second pixel point set and dividing by the number of pixels to obtain the second average color difference corresponding to the second pixel point set.
Likewise, in one implementation, the second average color difference can be calculated according to the following formula:
D2 = (1/n) * Σ(|R − G| + |G − B| + |B − R|)
where n is the number of sampled pixels and R, G, B are the gray values of the red, green and blue channels of each pixel.
Step S430: taking the first average color difference and the second average color difference as the first evaluation parameter.
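Steps S421-S426 can be sketched as below. The pairwise absolute-difference form of the per-pixel grayscale difference degree is an assumption, since the text only describes the formula verbally (a per-pixel difference across the R, G, B channels, summed over the sampled pixels and divided by their number):

```python
def average_color_difference(pixels):
    """Average color difference of a sampled pixel set. Each pixel is
    an (R, G, B) tuple of channel gray values; the per-pixel grayscale
    difference degree is taken as |R-G| + |G-B| + |B-R| (assumed form)."""
    if not pixels:
        raise ValueError("empty pixel set")
    total = sum(abs(r - g) + abs(g - b) + abs(b - r) for r, g, b in pixels)
    return total / len(pixels)
```

A grayscale sub-picture (R = G = B everywhere) scores 0, while a colorful one scores high, which is what makes this parameter useful for spotting the depth-map half of an RGB+depth frame.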
Step S230: if the first evaluation parameter satisfies the first condition, the format of the to-be-identified video is the video format corresponding to the first dividing mode.
It is understood that different implementations of the first evaluation parameter have different corresponding first conditions and different video formats corresponding to the first dividing mode.
In one implementation, the first average color difference and the second average color difference serve as the first evaluation parameter; if the first average color difference is greater than a first threshold and the second average color difference is greater than a third threshold, the format of the to-be-identified video is the RGB+depth format.
Step S240: if the first evaluation parameter does not satisfy the first condition, selecting one dividing mode from the remaining modes of the multiple groups as a second dividing mode, dividing the to-be-detected picture of the to-be-identified video according to the second dividing mode to obtain a third sub-picture and a fourth sub-picture, and calculating a second evaluation parameter corresponding to the third sub-picture and the fourth sub-picture; if the second evaluation parameter satisfies the second condition, the format of the to-be-identified video is the video format corresponding to the second dividing mode; otherwise, continuing to select a dividing mode from the remaining modes and divide, until the format of the to-be-identified video is determined.
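The try-divide-evaluate cascade of steps S210-S240 can be sketched as follows; the function names, the format-to-splitter mapping, and the uniform below-threshold acceptance condition are all simplifying assumptions (the patent allows a different condition per dividing mode):

```python
def recognize_format(picture, splitters, evaluate, thresholds):
    """Try each candidate dividing mode in turn: divide the to-be-detected
    picture into two sub-pictures, compute the evaluation parameter, and
    stop at the first mode whose parameter falls below its threshold.

    splitters:  {format_name: picture -> (sub_a, sub_b)}
    evaluate:   (sub_a, sub_b) -> float
    thresholds: {format_name: float}
    """
    for fmt, split_fn in splitters.items():
        sub_a, sub_b = split_fn(picture)
        if evaluate(sub_a, sub_b) < thresholds[fmt]:
            return fmt
    return "unknown"
```

With a splitter registered per mainstream format in priority order, the loop terminates as soon as one dividing mode's evaluation parameter satisfies its condition.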
Three specific examples of the present embodiment are described below.
(1) the first specific embodiment:
First, any one of the modes in segmentation operator 1 is chosen from the multiple groups of dividing modes as the first dividing mode, and the picture to be detected of the video to be identified is divided according to the first dividing mode to obtain the first sprite and the second sprite; following steps S310 to S330, the first evaluation parameter corresponding to the first sprite and the second sprite is calculated. If the first evaluation parameter is less than the first threshold, the format of the video to be identified is the left-right format. If the first evaluation parameter is greater than or equal to the first threshold, any one of the modes in segmentation operator 2 is chosen as the second dividing mode, the picture to be detected is divided accordingly, and the second evaluation parameter corresponding to the third sprite and the fourth sprite is calculated following steps S310 to S330. If the second evaluation parameter is less than the second threshold, the video to be identified is a multi-view format video; to further distinguish among multi-view formats, any one of the modes in segmentation operator 3 is chosen as the third dividing mode, the picture to be detected is divided accordingly, and the third evaluation parameter corresponding to the fifth sprite and the sixth sprite is calculated following steps S310 to S330. If the third evaluation parameter is less than the third threshold, the format of the video to be identified is the nine-grid format; otherwise it is the eight-view format. If the second evaluation parameter is greater than or equal to the second threshold, any one of the modes in segmentation operator 4 is chosen as the fourth dividing mode, the picture to be detected is divided accordingly, and the fourth evaluation parameter corresponding to the seventh sprite and the eighth sprite is calculated following steps S310 to S330. If the fourth evaluation parameter is less than the fourth threshold, the format of the video to be identified is the top-bottom format. If the fourth evaluation parameter is greater than or equal to the fourth threshold, a remaining unselected mode in segmentation operator 1 is chosen as the fifth dividing mode, the picture to be detected is divided accordingly, and the fifth evaluation parameter corresponding to the ninth sprite and the tenth sprite is calculated following steps S410 to S430. If the first average color difference in the fifth evaluation parameter is greater than the fifth threshold and the second average color difference is less than the sixth threshold, the format of the video to be identified is the RGB+depth format; if the first average color difference in the fifth evaluation parameter is less than the fifth threshold and the second average color difference is greater than the sixth threshold, the format of the video to be identified is the 2D format.
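The branch order of this first embodiment can be summarized as a decision cascade over the already-computed evaluation parameters; the parameter and threshold names below are shorthand for the quantities just described, and the threshold values are illustrative.

```python
# Decision cascade of the first specific embodiment.
def classify_first_embodiment(p1, p2, p3, p4, cd1, cd2, t):
    """p1..p4: evaluation parameters of segmentation operators 1-4;
    cd1/cd2: the two average color differences of the final test;
    t: thresholds t[1]..t[6]. Mirrors the branch order above."""
    if p1 < t[1]:
        return "left-right"
    if p2 < t[2]:                                # multi-view family
        return "nine-grid" if p3 < t[3] else "eight-view"
    if p4 < t[4]:
        return "top-bottom"
    if cd1 > t[5] and cd2 < t[6]:
        return "RGB+depth"
    return "2D"

t = {i: 10 for i in range(1, 7)}                 # illustrative thresholds
print(classify_first_embodiment(2, 99, 0, 99, 0, 0, t))  # left-right
```

Reordering these branches, as the second and third embodiments do, changes only how many tests a given video consumes, not the final label.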
It will be appreciated that the order in which the above flow applies the segmentation operators can be exchanged without affecting the recognition result; only when the proportions of the various formats in the video collection to be identified differ markedly does the order change how many screening steps are consumed. This can be exploited to optimize the flow and improve recognition efficiency. For example, when it is predicted, or found in preliminary sampling, that eight-view videos dominate, the video format can be identified using the second specific embodiment as follows.
(2) Second specific embodiment:
First, any one of the modes in segmentation operator 2 is chosen from the multiple groups of dividing modes as the first dividing mode, and the picture to be detected of the video to be identified is divided according to the first dividing mode to obtain the first sprite and the second sprite; following steps S310 to S330, the first evaluation parameter corresponding to the first sprite and the second sprite is calculated. If the first evaluation parameter is less than the second threshold, any one of the modes in segmentation operator 3 is chosen as the second dividing mode, the picture to be detected is divided accordingly to obtain the third sprite and the fourth sprite, and the second evaluation parameter corresponding to the third sprite and the fourth sprite is calculated following steps S310 to S330; if the second evaluation parameter is less than the third threshold, the format of the video to be identified is the nine-grid format, otherwise it is the eight-view format. If the first evaluation parameter is greater than or equal to the second threshold, any one of the modes in segmentation operator 1 is chosen as the third dividing mode, the picture to be detected is divided accordingly to obtain the fifth sprite and the sixth sprite, and the third evaluation parameter corresponding to the fifth sprite and the sixth sprite is calculated following steps S310 to S330; if the third evaluation parameter is less than the first threshold, the format of the video to be identified is the left-right format. Otherwise, any one of the modes in segmentation operator 4 is chosen as the fourth dividing mode, the picture to be detected is divided accordingly to obtain the seventh sprite and the eighth sprite, and the fourth evaluation parameter corresponding to the seventh sprite and the eighth sprite is calculated following steps S310 to S330; if the fourth evaluation parameter is less than the fourth threshold, the format of the video to be identified is the top-bottom format. Otherwise, a remaining unselected mode in segmentation operator 1 is chosen as the fifth dividing mode, the picture to be detected is divided accordingly, and the fifth evaluation parameter corresponding to the ninth sprite and the tenth sprite is calculated following steps S410 to S430; if the first average color difference in the fifth evaluation parameter is greater than the fifth threshold and the second average color difference is less than the sixth threshold, the format of the video to be identified is the RGB+depth format, and if the first average color difference is less than the fifth threshold and the second average color difference is greater than the sixth threshold, the format of the video to be identified is the 2D format.
Here, the corresponding sub-region sizes and the family of segmentation operators should be adjusted appropriately, to avoid cases in which part of a left-right format picture still yields a small ordinate deviation and a valid match under the original segmentation operator 2 and is thereby mistakenly screened as multi-view; for example, one of the modes in segmentation operator 2 can be adjusted to: (0, 0, 1/6, 1/3)/(1/3, 0, 1/6, 1/3).
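The (x, y, w, h) quadruples in the adjusted operator above read naturally as sub-rectangles expressed in frame-relative fractions; that interpretation is an assumption, and this sketch applies such an operator to extract a pair of sub-pictures from a frame.

```python
# Sketch of applying a fractional-rectangle segmentation operator,
# assuming (x, y, w, h) are fractions of the frame width and height.
def crop_fraction(frame, rect):
    """frame: list of pixel rows; rect = (x, y, w, h) as fractions."""
    H, W = len(frame), len(frame[0])
    x, y, w, h = rect
    x0, y0 = round(x * W), round(y * H)
    return [row[x0:x0 + round(w * W)] for row in frame[y0:y0 + round(h * H)]]

frame = [[c + 10 * r for c in range(6)] for r in range(6)]  # 6x6 test frame
left = crop_fraction(frame, (0, 0, 1/6, 1/3))       # first sub-region
right = crop_fraction(frame, (1/3, 0, 1/6, 1/3))    # second sub-region
```

The two crops are then compared exactly like the halves produced by any other dividing mode.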
Further, when it is predicted, or found in preliminary sampling, that RGB+depth videos dominate, the video format can be identified using the third specific embodiment as follows.
(3) Third specific embodiment:
First, any one of the modes in segmentation operator 1 is chosen from the multiple groups of dividing modes as the first dividing mode, and the picture to be detected of the video to be identified is divided according to the first dividing mode to obtain the first sprite and the second sprite; following steps S310 to S330, the first evaluation parameter corresponding to the first sprite and the second sprite is calculated. If the first evaluation parameter is less than the first threshold, the format of the video to be identified is the left-right format. If the first evaluation parameter is greater than or equal to the first threshold, the second evaluation parameter corresponding to the first sprite and the second sprite is calculated following steps S410 to S430; if the first average color difference in the second evaluation parameter is greater than the fifth threshold and the second average color difference is less than the sixth threshold, the format of the video to be identified is the RGB+depth format. If the first average color difference in the second evaluation parameter is less than the fifth threshold and the second average color difference is greater than the sixth threshold, any one of the modes in segmentation operator 2 is chosen as the second dividing mode, the picture to be detected is divided accordingly to obtain the third sprite and the fourth sprite, and the third evaluation parameter corresponding to the third sprite and the fourth sprite is calculated following steps S310 to S330. If the third evaluation parameter is less than the second threshold, any one of the modes in segmentation operator 3 is chosen as the third dividing mode, the picture to be detected is divided accordingly to obtain the fifth sprite and the sixth sprite, and the fourth evaluation parameter corresponding to the fifth sprite and the sixth sprite is calculated following steps S310 to S330; if the fourth evaluation parameter is less than the third threshold, the format of the video to be identified is the nine-grid format, otherwise it is the eight-view format. If the third evaluation parameter is greater than or equal to the second threshold, any one of the modes in segmentation operator 4 is chosen as the fourth dividing mode, the picture to be detected is divided accordingly to obtain the seventh sprite and the eighth sprite, and the fifth evaluation parameter corresponding to the seventh sprite and the eighth sprite is calculated following steps S310 to S330; if the fifth evaluation parameter is less than the fourth threshold, the format of the video to be identified is the top-bottom format, otherwise it is the 2D format.
The first threshold, second threshold, third threshold, fourth threshold, fifth threshold, and sixth threshold are the same across the above three implementations, and their values can be set empirically.
Further, in special cases, for example when the videos present all belong to some subset of the six format categories, the above flow can be simplified to carry out only a limited number of its steps; for instance, only the comparison against the first threshold based on segmentation operator 1 is carried out, to decide whether the video is a side-by-side left-right format video.
Further, assuming there are multiple pictures to be detected, after the above identification flow is completed for the first picture to be detected, one of the following two strategies can be adopted to complete the format identification of the whole video:
(1) Precision first. Continue the identification flow for the remaining pictures to be detected one by one, and take the format category identified most often, by voting, as the format category of the whole video.
(2) Speed first. Since any non-2D result for the first picture to be detected is correct with high probability, terminate the sequential recognition of the remaining frames when the first picture's recognition result is any non-2D category, and take that result as the format category of the whole video; if the first picture to be detected is identified as the 2D category, continue with the identification flow for the next picture to be detected.
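The two whole-video strategies can be sketched over a list of per-frame results; the function names here are illustrative.

```python
# Precision-first: identify every sampled frame, then take a majority vote.
# Speed-first: accept the first non-2D verdict and stop early.
from collections import Counter

def precision_first(per_frame_results):
    """Majority vote over the formats identified for all sampled frames."""
    return Counter(per_frame_results).most_common(1)[0][0]

def speed_first(per_frame_results):
    """Stop at the first non-2D verdict; fall back to 2D otherwise."""
    for fmt in per_frame_results:
        if fmt != "2D":
            return fmt
    return "2D"

results = ["2D", "left-right", "left-right"]
```

Precision-first pays the cost of identifying every sampled frame; speed-first trades a small error risk for an early exit.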
Further, for a single video, before the video format is identified according to the method above, the method also includes: completing initialization and obtaining the chosen or default file name, local file path, or network video stream path. After the video format is identified according to the method above, the method also includes: calling different stereoscopic video composition rules according to the input stereoscopic format and the differences of the adapted stereoscopic display, so as to display a correct stereoscopic parallax picture.
For multiple videos, as one implementation, sample frames of the video files can be read one by one in playlist order and the format identification of all the files in the list completed in a single pass; the identified formats of all the files are added, in file order, to the corresponding playlist information, and playback of the first file in the list is then started. Whenever playback moves to a new file, the pre-identified format identifier is first read from the list, and then, according to the input stereoscopic format and the differences of the adapted stereoscopic display, the corresponding stereoscopic video composition rule is called to display a correct stereoscopic parallax picture. This reduces the delay caused by identifying subsequent files when playback jumps between files, giving better playback continuity.
Alternatively implementation method, it is also possible to read first video by playlist, recognizes first video lattice
Formula, calls corresponding three-dimensional video-frequency composition rule, then reads next video, recognizes its form, calls corresponding three-dimensional video-frequency
Composition rule.So can at any time toward before adding in list after new file, and player initialization and broadcasting pictures occur
Time delay is shorter.
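The two playlist approaches can be sketched side by side: pre-identify every file before playback starts (smooth file-to-file jumps), or identify each file only as playback reaches it (instant start, and the list can grow at any time). `identify` stands in for the single-video method above; all names are illustrative.

```python
# Two playlist strategies for applying the single-video identification.
def preidentify_all(playlist, identify):
    """Identify every file up front; store formats with the playlist info."""
    return [(path, identify(path)) for path in playlist]

def identify_on_demand(playlist, identify):
    """Yield each file's format only when playback reaches it."""
    for path in playlist:
        yield path, identify(path)

playlist = ["a.mp4", "b.mp4"]
fake_identify = lambda path: "left-right"   # stub for the real method
table = preidentify_all(playlist, fake_identify)
```

The first variant front-loads all identification cost; the second spreads it across playback, one file ahead.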
In the video format recognition method provided by the embodiments of the present invention, one dividing mode is chosen from multiple groups of dividing modes as a first dividing mode, and the picture to be detected of the video to be identified is divided according to the first dividing mode to obtain a first sprite and a second sprite, each group of dividing modes corresponding to one video format; the first evaluation parameter corresponding to the first sprite and the second sprite is calculated; if the first evaluation parameter meets a first condition, the format of the video to be identified is the video format corresponding to the first dividing mode; if the first evaluation parameter does not meet the first condition, one dividing mode is chosen from the remaining modes of the multiple groups as a second dividing mode, the picture to be detected is divided according to the second dividing mode to obtain a third sprite and a fourth sprite, and the second evaluation parameter corresponding to the third sprite and the fourth sprite is calculated; if the second evaluation parameter meets a second condition, the format of the video to be identified is the video format corresponding to the second dividing mode; otherwise, dividing modes continue to be chosen from the remaining modes for division until the format of the video to be identified is determined. By mapping each mainstream stereoscopic video format to a suitable dividing mode in advance, choosing one dividing mode at a time for division, and identifying with the corresponding evaluation parameter until the format of the video to be identified is determined, this method can distinguish and recognize all mainstream stereoscopic video formats with a low misclassification rate and high efficiency.
Referring to Fig. 9, it is a functional module schematic diagram of the video format identifying device 500 provided by the second embodiment of the present invention. The video format identifying device 500 runs on the player 100, and may also run on a separate processing device. The video format identifying device 500 includes a first processing module 510, a computing module 520, a judging module 530, and a second processing module 540.
The first processing module 510 is configured to choose one dividing mode from multiple groups of dividing modes as the first dividing mode, and divide the picture to be detected of the video to be identified according to the first dividing mode to obtain the first sprite and the second sprite, each group of dividing modes corresponding to one video format.
The computing module 520 is configured to calculate the first evaluation parameter corresponding to the first sprite and the second sprite.
The judging module 530 is configured to determine, if the first evaluation parameter meets the first condition, that the format of the video to be identified is the video format corresponding to the first dividing mode.
The second processing module 540 is configured to, if the first evaluation parameter does not meet the first condition, choose one dividing mode from the remaining modes of the multiple groups of dividing modes as the second dividing mode, divide the picture to be detected of the video to be identified according to the second dividing mode to obtain the third sprite and the fourth sprite, and calculate the second evaluation parameter corresponding to the third sprite and the fourth sprite; if the second evaluation parameter meets the second condition, the format of the video to be identified is the video format corresponding to the second dividing mode; otherwise, dividing modes continue to be chosen from the remaining modes for division until the format of the video to be identified is determined.
Each of the above modules may be implemented in software code, in which case the modules may be stored in the memory 102 of the player 100. Each of the above modules may equally be implemented in hardware, such as an integrated circuit chip.
When the video format identifying device 500 runs on the player, video sequences obtained from the file storage device or from the network device port can be recognized and played automatically.
When the video format identifying device 500 runs on a separate processing device, the processing device is coupled to an ordinary player. When a video format needs to be identified, it is identified by the processing device, and a corresponding video name or video path is generated; for example, a suitably distinguishable prefix, suffix, or other special identifier can be added to the old file name. After reading in the file or the playlist, the ordinary player can call the corresponding stereoscopic video composition rule according to the identifier in the played file name, synthesize, and then play.
The third embodiment of the present invention provides a player. The player includes a memory and a processor, the memory being coupled to the processor; the memory stores instructions which, when executed by the processor, cause the processor to perform the following operations:
choosing one dividing mode from multiple groups of dividing modes as a first dividing mode, and dividing the picture to be detected of the video to be identified according to the first dividing mode to obtain a first sprite and a second sprite, each group of dividing modes corresponding to one video format;
calculating a first evaluation parameter corresponding to the first sprite and the second sprite;
if the first evaluation parameter meets a first condition, determining that the format of the video to be identified is the video format corresponding to the first dividing mode;
if the first evaluation parameter does not meet the first condition, choosing one dividing mode from the remaining modes of the multiple groups of dividing modes as a second dividing mode, dividing the picture to be detected of the video to be identified according to the second dividing mode to obtain a third sprite and a fourth sprite, and calculating a second evaluation parameter corresponding to the third sprite and the fourth sprite; if the second evaluation parameter meets a second condition, the format of the video to be identified is the video format corresponding to the second dividing mode; otherwise, continuing to choose one dividing mode from the remaining modes of the multiple groups of dividing modes for division until the format of the video to be identified is determined.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another. The technical effects of the video format identifying device and the player provided by the embodiments of the present invention, and the principles by which those effects are realized, are the same as in the foregoing method embodiments; for brevity, where the device embodiments do not mention a point, reference may be made to the corresponding content in the foregoing method embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be realized in other ways. The apparatus embodiments described above are merely schematic; for example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of the apparatus, methods, and computer program products of multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the opposite order, depending on the functions involved. It should further be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be realized by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are realized in the form of software functional modules and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
It should be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The foregoing is merely preferred embodiments of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention. It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. The above is only the specific implementation of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the technical field can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, which shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.
Claims (10)
1. A video format recognition method, characterized in that the method comprises:
choosing one dividing mode from multiple groups of dividing modes as a first dividing mode, and dividing a picture to be detected of a video to be identified according to the first dividing mode to obtain a first sprite and a second sprite, each group of dividing modes corresponding to one video format;
calculating a first evaluation parameter corresponding to the first sprite and the second sprite;
if the first evaluation parameter meets a first condition, the format of the video to be identified being the video format corresponding to the first dividing mode;
if the first evaluation parameter does not meet the first condition, choosing one dividing mode from the remaining modes of the multiple groups of dividing modes as a second dividing mode, dividing the picture to be detected of the video to be identified according to the second dividing mode to obtain a third sprite and a fourth sprite, and calculating a second evaluation parameter corresponding to the third sprite and the fourth sprite; if the second evaluation parameter meets a second condition, the format of the video to be identified being the video format corresponding to the second dividing mode; otherwise, continuing to choose one dividing mode from the remaining modes of the multiple groups of dividing modes for division until the format of the video to be identified is determined.
2. The method according to claim 1, characterized in that calculating the first evaluation parameter of the first sprite and the second sprite comprises:
obtaining a first feature point set corresponding to the first sprite, and a second feature point set of the second sprite;
obtaining, according to a feature point matching algorithm, pairs of matching feature points that match each other between the first feature point set and the second feature point set;
calculating, according to the pairs of matching feature points, the first evaluation parameter of the first sprite and the second sprite.
3. The method according to claim 2, characterized in that each pair of matching feature points comprises a first feature point and a second feature point, and calculating, according to the pairs of matching feature points, the first evaluation parameter of the first sprite and the second sprite comprises:
calculating, for each pair of matching feature points, the absolute value of the difference between the ordinates of the first feature point and the second feature point;
adding the absolute values of the differences corresponding to all the pairs of matching feature points and dividing by the number of pairs of matching feature points, to obtain the first evaluation parameter.
4. The method according to claim 1, characterized in that calculating the first evaluation parameter corresponding to the first sprite and the second sprite comprises:
obtaining multiple pixels of the first sprite as a first pixel point set, and obtaining multiple pixels of the second sprite as a second pixel point set;
calculating, respectively, a first average color difference corresponding to the first pixel point set and a second average color difference corresponding to the second pixel point set;
using the first average color difference and the second average color difference as the first evaluation parameter.
5. The method according to claim 4, characterized in that calculating the first average color difference corresponding to the first pixel point set comprises:
obtaining, respectively, the gray levels of the red, green, and blue channels of each pixel in the first pixel point set;
obtaining the gray-level difference degree of each pixel across the red, green, and blue channels;
summing the gray-level difference degrees of all the pixels and dividing by the number of pixels in the first pixel point set, to obtain the first average color difference corresponding to the first pixel point set.
6. The method according to claim 4, characterized in that calculating the second average color difference corresponding to the second pixel point set comprises:
obtaining, respectively, the gray levels of the red, green, and blue channels of each pixel in the second pixel point set;
obtaining the gray-level difference degree of each pixel across the red, green, and blue channels;
summing the gray-level difference degrees of all the pixels and dividing by the number of pixels in the second pixel point set, to obtain the second average color difference corresponding to the second pixel point set.
7. The method according to claim 4, characterized in that, if the first evaluation parameter meets the first condition, the format of the video to be identified being the video format corresponding to the first dividing mode comprises:
if the first average color difference is greater than a first threshold and the second average color difference is greater than a third threshold, the format of the video to be identified being the RGB+depth format.
8. The method according to any one of claims 1 to 7, characterized in that the video formats comprise a left-right format, a top-bottom format, an eight-view format, a nine-grid format, an RGB+depth format, and a 2D format.
9. A video format identification device, characterized in that the device comprises:
a first processing module, configured to choose one dividing mode from multiple dividing modes as a first dividing mode and divide a picture to be detected of a video to be identified according to the first dividing mode, to obtain a first sprite and a second sprite, each dividing mode corresponding to one video format;
a computing module, configured to calculate a first evaluation parameter corresponding to the first sprite and the second sprite;
a judging module, configured to determine, if the first evaluation parameter satisfies a first condition, that the format of the video to be identified is the video format corresponding to the first dividing mode;
a second processing module, configured to, if the first evaluation parameter does not satisfy the first condition, choose one dividing mode from the remaining dividing modes as a second dividing mode, divide the picture to be detected of the video to be identified according to the second dividing mode to obtain a third sprite and a fourth sprite, and calculate a second evaluation parameter corresponding to the third sprite and the fourth sprite; if the second evaluation parameter satisfies a second condition, the format of the video to be identified is the video format corresponding to the second dividing mode; otherwise, a dividing mode continues to be chosen from the remaining dividing modes for division, until the format of the video to be identified is determined.
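The try-modes-until-one-fits loop that the modules of claim 9 implement can be sketched as below. The split helpers and the `evaluate`/`satisfies` hooks are illustrative stand-ins for the claims' evaluation parameters and conditions, not the patent's actual criteria.

```python
def split_left_right(frame):
    """Split a frame (list of pixel rows) into left and right halves."""
    w = len(frame[0]) // 2
    return [row[:w] for row in frame], [row[w:] for row in frame]

def split_top_bottom(frame):
    """Split a frame into top and bottom halves."""
    h = len(frame) // 2
    return frame[:h], frame[h:]

def identify_format(frame, modes, evaluate, satisfies):
    """Try each dividing mode in turn (claims 1 and 9): split the
    picture, compute an evaluation parameter for the two sub-pictures,
    and stop at the first mode whose condition is satisfied."""
    for name, split in modes:
        first, second = split(frame)
        param = evaluate(first, second)   # evaluation parameter
        if satisfies(name, param):        # mode-specific condition
            return name                   # format matching this mode
    return "2D"  # no dividing mode matched; treat as plain 2D
```

As a toy usage, `evaluate` can simply test whether the two halves are identical, so a frame whose left and right halves repeat is classified as left-right.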
10. A player, characterized in that the player comprises a memory and a processor, the memory being coupled to the processor and storing instructions which, when executed by the processor, cause the processor to perform the following operations:
choosing one dividing mode from multiple dividing modes as a first dividing mode, and dividing a picture to be detected of a video to be identified according to the first dividing mode, to obtain a first sprite and a second sprite, each dividing mode corresponding to one video format;
calculating a first evaluation parameter corresponding to the first sprite and the second sprite;
if the first evaluation parameter satisfies a first condition, the format of the video to be identified is the video format corresponding to the first dividing mode;
if the first evaluation parameter does not satisfy the first condition, choosing one dividing mode from the remaining dividing modes as a second dividing mode, dividing the picture to be detected of the video to be identified according to the second dividing mode to obtain a third sprite and a fourth sprite, and calculating a second evaluation parameter corresponding to the third sprite and the fourth sprite; if the second evaluation parameter satisfies a second condition, the format of the video to be identified is the video format corresponding to the second dividing mode; otherwise, continuing to choose a dividing mode from the remaining dividing modes for division, until the format of the video to be identified is determined.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710101681.4A CN106851168A (en) | 2017-02-23 | 2017-02-23 | Video format recognition methods, device and player |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106851168A true CN106851168A (en) | 2017-06-13 |
Family
ID=59134546
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710101681.4A Pending CN106851168A (en) | 2017-02-23 | 2017-02-23 | Video format recognition methods, device and player |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106851168A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830198A (en) * | 2018-05-31 | 2018-11-16 | 上海玮舟微电子科技有限公司 | Recognition methods, device, equipment and the storage medium of video format |
CN109640180A (en) * | 2018-12-12 | 2019-04-16 | 上海玮舟微电子科技有限公司 | Method, apparatus, equipment and the storage medium of video 3D display |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102340676A (en) * | 2010-07-16 | 2012-02-01 | 深圳Tcl新技术有限公司 | Method and device for automatically recognizing 3D video formats |
CN103152535A (en) * | 2013-02-05 | 2013-06-12 | 华映视讯(吴江)有限公司 | Method for automatically judging three-dimensional (3D) image format |
US20150054914A1 (en) * | 2013-08-26 | 2015-02-26 | Amlogic Co. Ltd. | 3D Content Detection |
CN106067966A (en) * | 2016-05-31 | 2016-11-02 | 上海易维视科技股份有限公司 | Video 3 dimensional format automatic testing method |
- 2017-02-23: application CN201710101681.4A filed, published as CN106851168A; status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104756491B (en) | Depth cue based on combination generates depth map from monoscopic image | |
US8036451B2 (en) | Creating a depth map | |
JP3740065B2 (en) | Object extraction device and method based on region feature value matching of region-divided video | |
US8520935B2 (en) | 2D to 3D image conversion based on image content | |
US20130162629A1 (en) | Method for generating depth maps from monocular images and systems using the same | |
KR20090084563A (en) | Method and apparatus for generating the depth map of video image | |
US20100067863A1 (en) | Video editing methods and systems | |
US20080170067A1 (en) | Image processing method and apparatus | |
US10115127B2 (en) | Information processing system, information processing method, communications terminals and control method and control program thereof | |
CN1331822A (en) | System and method for creating 3D models from 2D sequential image data | |
CN106599892A (en) | Television station logo identification system based on deep learning | |
CN106341676B (en) | Depth image pretreatment and depth gap filling method based on super-pixel | |
Pearson et al. | Plenoptic layer-based modeling for image based rendering | |
KR20070026403A (en) | Creating depth map | |
KR20060129371A (en) | Creating a depth map | |
CN108597003A (en) | A kind of article cover generation method, device, processing server and storage medium | |
CN109408672A (en) | A kind of article generation method, device, server and storage medium | |
CN110827312A (en) | Learning method based on cooperative visual attention neural network | |
CN105868683A (en) | Channel logo identification method and apparatus | |
CN105007475B (en) | Produce the method and apparatus of depth information | |
CN105844251A (en) | Cartoon video identification method and device | |
CN107832359A (en) | A kind of picture retrieval method and system | |
AU2021240205B1 (en) | Object sequence recognition method, network training method, apparatuses, device, and medium | |
CN106851168A (en) | Video format recognition methods, device and player | |
CN106446889B (en) | A kind of local recognition methods of logo and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20170613 |