CN103051913A - Automatic 3D (three-dimensional) film source identification method - Google Patents

Automatic 3D (three-dimensional) film source identification method

Info

Publication number
CN103051913A
Authority
CN
China
Prior art keywords
image
pixel
row
value
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013100026632A
Other languages
Chinese (zh)
Inventor
孙冰晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING BAOFENG TECHNOLOGY Co Ltd
Original Assignee
BEIJING BAOFENG TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING BAOFENG TECHNOLOGY Co Ltd filed Critical BEIJING BAOFENG TECHNOLOGY Co Ltd
Priority to CN2013100026632A priority Critical patent/CN103051913A/en
Publication of CN103051913A publication Critical patent/CN103051913A/en
Pending legal-status Critical Current

Abstract

The invention discloses an automatic 3D film source identification method, which includes the following steps: a frame of image in a video is first acquired and converted into a format containing the brightness information of the image; the boundary values of the pixels in the video image are then extracted from the brightness information of the converted image; the frame is split symmetrically into a left image and a right image, and the left-right overall similarity value of the left and right images is computed from the pixel boundary values; the frame is also split symmetrically into an upper image and a lower image, and the upper-lower overall similarity value of the upper and lower images is computed from the pixel boundary values; finally, the format of the 3D film source is identified from the left-right overall similarity value and the upper-lower overall similarity value. The embodiment of the invention can rapidly and accurately identify whether the video to be played is a 3D video or a 2D video, so that a terminal device can automatically choose the correct playing mode for the user, thereby improving the user's experience of viewing the video.

Description

Method for automatically identifying a 3D film source format
Technical field
The present invention relates to the technical field of video processing, and in particular to a method for automatically identifying the format of a 3D film source.
Background art
A 3D video usually contains two pictures, which may be arranged side by side or one above the other. During playback, if an ordinary player is used, the user sees the two pictures and gets no 3D effect; if a 3D player is used, the player processes the two pictures so that the user can perceive the 3D effect through 3D glasses.
At present, more and more 3D videos are available for download on the network. When a user wishes to experience a video with a 3D effect, a corresponding 3D player is required. However, because users often lack knowledge of 3D video, they may be unable to complete the settings needed to output the 3D video correctly when playing it. A technical means of automatically identifying 3D video is therefore needed, so as to reduce the user's operations and make it convenient for the user to experience 3D video.
In general, a 3D film source comes in three formats: the left-right split format, the top-bottom split format, and the ordinary 2D video format. For these three formats, the 3D player adopts different processing methods to output the 3D video. In the prior art, however, the user has to guess the format of the 3D film source and then manually select the corresponding processing method to output the 3D video. If the user guesses the film source format wrong, the 3D video cannot be output, and the user has to try the three processing methods one by one to determine the film source format and output the 3D video.
In view of this, how to provide a method for automatically identifying a 3D film source, thereby saving the user the trouble of manual identification, is an urgent problem for those skilled in the art.
Summary of the invention
The purpose of the present invention is to provide a method for automatically identifying a 3D film source format. The method can automatically identify the video format of a 3D film source, thereby saving the user the trouble of manual identification and significantly improving the user experience; at the same time, the accuracy of the automatic identification is high, so that whether the current video to be played is a 3D video can be identified rapidly and accurately.
The objective of the invention is achieved through the following technical solution:
A method for automatically identifying a 3D film source format comprises the following steps:
S11: acquire a frame of image in the video and convert it into a format containing the brightness information of the image;
S12: extract the boundary values of the pixels in the video image according to the brightness information of the converted image;
S13: split the above frame of image symmetrically into a left image and a right image, and compute the left-right overall similarity value of the left and right images from the pixel boundary values; also split the frame symmetrically into an upper image and a lower image, and compute the upper-lower overall similarity value of the upper and lower images from the pixel boundary values;
S14: in step S13, if the left-right overall similarity value is less than or equal to a first predetermined value and the upper-lower overall similarity value is greater than or equal to a second predetermined value, the video is a 3D film source in the left-right split format; if the upper-lower overall similarity value is less than or equal to a third predetermined value and the left-right overall similarity value is greater than or equal to a fourth predetermined value, the video is a 3D film source in the top-bottom split format; if both the left-right overall similarity value and the upper-lower overall similarity value are greater than or equal to a fifth predetermined value, the video is an ordinary 2D video.
Further, the following step is included after step S14:
S15: in step S14, if the left-right similarity value and the upper-lower similarity value obtained cannot satisfy any of the above three relations, acquire the next frame of the video and return to step S11.
Further, in step S13, the frame of image is divided crosswise into four equal parts, namely an upper-left image, a lower-left image, an upper-right image and a lower-right image; the left-right similarity value of the upper-left and upper-right images and the left-right similarity value of the lower-left and lower-right images are computed from the pixel boundary values, and the average of these two left-right similarity values is the left-right overall similarity value;
the upper-lower similarity value of the upper-left and lower-left images and the upper-lower similarity value of the upper-right and lower-right images are computed from the pixel boundary values, and the average of these two upper-lower similarity values is the upper-lower overall similarity value.
Further, in step S11, the acquired frame of the video is first scaled down, the scaled-down image is converted into the YUV format, and only the brightness information in the YUV image is retained.
Further, in step S12, for the five pixels formed by any pixel and its four neighbors (above, below, left and right), the difference between the maximum brightness and the minimum brightness is taken as the boundary value of the pixel at that position.
Further, the following step is included between step S12 and step S13:
Sa: with the 0-255 brightness levels as the abscissa and the number of pixels whose boundary value matches each brightness level as the ordinate, a coordinate system is established and a distribution histogram of the pixel boundary values is computed; in this histogram, pixels are selected in descending order of boundary value until a predetermined number of pixels is reached, thereby determining a boundary threshold; only pixels whose boundary value is greater than or equal to this boundary threshold enter the comparison in step S13.
Further, in step S13,
when the left image is compared with the right image, or the upper image with the lower image, row N of pixels of the first partial image is compared with rows N-n to N+n of the second partial image, and row N of the second partial image is compared with rows N-n to N+n of the first partial image, yielding 4n+1 luminance difference values for row N; among these 4n+1 values, the minimum is taken as the luminance difference value of row N of the two partial images;
the average of the luminance difference values of all rows of the two partial images is the left-right overall similarity value or the upper-lower overall similarity value;
here, the left image or the upper image is defined as the first partial image, and the right image or the lower image is defined as the second partial image.
Alternatively, in the above step S13,
when a row of pixels of the first partial image is compared with the corresponding row of the second partial image, the M-th pixel of the row whose boundary value is greater than or equal to the boundary threshold is compared with pixels M-m to M+m of the other row, and the M-th pixel of the other row is compared with pixels M-m to M+m of the first row, yielding 4m+1 luminance difference values for the M-th pixel of the corresponding rows of the two partial images; among these 4m+1 values, the minimum is taken as the luminance difference value of the M-th pixel of the two rows;
the average of the luminance difference values of the pixels in the two rows is the luminance difference value of the two rows.
Further, in step S13,
when the left image is compared column by column with the right image, or the upper image with the lower image, column N of pixels of the first partial image is compared with columns N-n to N+n of the second partial image, and column N of the second partial image is compared with columns N-n to N+n of the first partial image, yielding 4n+1 luminance difference values for column N; among these 4n+1 values, the minimum is taken as the luminance difference value of column N of the two partial images;
the average of the luminance difference values of all columns of the two partial images is the left-right overall similarity value or the upper-lower overall similarity value;
here, the left image or the upper image is defined as the first partial image, and the right image or the lower image is defined as the second partial image.
Alternatively, in the above step S13,
when a column of pixels of the first partial image is compared with the corresponding column of the second partial image, the M-th pixel of the column whose boundary value is greater than or equal to the boundary threshold is compared with pixels M-m to M+m of the other column, and the M-th pixel of the other column is compared with pixels M-m to M+m of the first column, yielding 4m+1 luminance difference values for the M-th pixel of the corresponding columns of the two partial images; among these 4m+1 values, the minimum is taken as the luminance difference value of the M-th pixel of the two columns;
the average of the luminance difference values of the pixels in the two columns is the luminance difference value of the two columns.
As can be seen from the above technical solution, the embodiment of the invention adopts boundary processing based on brightness information and image comparison based on brightness information, so that the film source format of the 3D video to be played can be identified rapidly and accurately, and the corresponding processing method can be adopted in the background to output the 3D video. The user can therefore watch the 3D video conveniently without manual intervention, which improves the experience of watching 3D video.
In addition, the present invention derives the boundary value of each pixel from the brightness information, obtains the left-right overall similarity value and the upper-lower overall similarity value of the image from these boundary values, and finally compares the left-right and upper-lower similarity values with the predetermined values to determine the film source format of the 3D video, which significantly improves the accuracy of the automatic identification of the film source format.
In summary, the method for automatically identifying a 3D film source format provided by the present invention can automatically identify the video format of a 3D film source, thereby saving the user the trouble of manual identification and significantly improving the user experience; at the same time, the accuracy of the automatic identification is high.
Description of drawings
In order to explain the technical solution of the embodiments of the invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the process of identifying a 3D video provided by the embodiment of the invention;
Fig. 2 is a schematic diagram of a concrete application of the process of identifying a 3D video provided by the embodiment of the invention;
Fig. 3 is a schematic diagram of the original image to be processed in the embodiment of the invention;
Fig. 4 is a schematic diagram of the image after brightness information extraction in the embodiment of the invention;
Fig. 5 is a schematic diagram of the left-right division of the image after brightness information extraction in the embodiment of the invention;
Fig. 6 is a schematic diagram of the top-bottom division of the image after brightness information extraction in the embodiment of the invention;
Fig. 7 is a schematic diagram of dividing the image after brightness information extraction both left-right and top-bottom in the embodiment of the invention;
Fig. 8 is a schematic diagram of the border reserved for the left-right divided images in the embodiment of the invention;
Fig. 9 is a schematic diagram of the border reserved for the top-bottom divided images in the embodiment of the invention;
Fig. 10 is a schematic diagram of the staggered row comparison process in the embodiment of the invention.
Embodiment
The technical solution in the embodiments of the invention is described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the invention without creative effort fall within the protection scope of the present invention.
The embodiment of the invention can be used to quickly identify the film source format of a 3D video during playback, so that the correct 3D video can be played for the user without the user's involvement. In addition, the accuracy of the automatic film source format identification of the present invention is high.
As shown in Fig. 1, the method for automatically identifying a 3D film source format provided by the embodiment of the invention may specifically comprise the following steps:
Step S11: acquire a frame of image in the video and convert it into a format containing the brightness information of the image; for example, the image may be converted into the YUV format. It should be noted that the converted image contains only brightness information, i.e., it is the grayscale map of the image; therefore, in the present invention, brightness information and brightness value are equivalent concepts to grayscale information and grayscale value, respectively.
Alternatively, in this step, the acquired frame of the video may also be scaled down in a predetermined manner, the scaled-down image converted into the YUV format, and only the brightness information in the YUV image retained.
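As an illustration only, the following sketch shows how such a conversion might look, assuming OpenCV and NumPy are available; the scale factor, the function name and the library choice are not prescribed by the patent.

```python
# A minimal sketch of step S11, assuming OpenCV is available; the scale
# factor and library choice are illustrative, not prescribed by the patent.
import cv2
import numpy as np

def frame_luminance(frame_bgr: np.ndarray, scale: float = 0.25) -> np.ndarray:
    """Scale a BGR video frame down and return only its Y (brightness) channel."""
    small = cv2.resize(frame_bgr, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    yuv = cv2.cvtColor(small, cv2.COLOR_BGR2YUV)   # convert to the YUV format
    return yuv[:, :, 0]                            # keep the luminance plane only
```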
Step S12: extract the boundary values of the pixels in the video image according to the brightness information of the converted image.
Alternatively, in step S12, the boundary value of a pixel may specifically be determined as follows:
among the five pixels formed by any pixel and its four neighbors (above, below, left and right), the difference between the maximum brightness and the minimum brightness is taken as the boundary value of the pixel at that position.
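For illustration, a NumPy sketch of this boundary-value computation is given below; the function name and the edge handling (replicating border pixels) are assumptions, not part of the patent.

```python
# A sketch of the boundary-value computation described above: for each pixel,
# the boundary value is the max-min brightness over the pixel and its four
# neighbours. Pure NumPy; names and edge handling are illustrative.
import numpy as np

def boundary_values(y: np.ndarray) -> np.ndarray:
    """Return per-pixel boundary values for a 2-D luminance array."""
    y = y.astype(np.int16)
    padded = np.pad(y, 1, mode="edge")
    stack = np.stack([
        padded[1:-1, 1:-1],   # the pixel itself
        padded[:-2, 1:-1],    # neighbour above
        padded[2:, 1:-1],     # neighbour below
        padded[1:-1, :-2],    # neighbour to the left
        padded[1:-1, 2:],     # neighbour to the right
    ])
    return (stack.max(axis=0) - stack.min(axis=0)).astype(np.uint8)
```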
Step S13: split the above frame of image symmetrically into a left image and a right image, and compute the left-right overall similarity value of the left and right images from the pixel boundary values; also split the frame symmetrically into an upper image and a lower image, and compute the upper-lower overall similarity value of the upper and lower images from the pixel boundary values.
Specifically, in step S13, the frame of image may be split into two left-right symmetric partial images and two top-bottom symmetric partial images, or it may be divided crosswise into four equal parts, namely an upper-left image, a lower-left image, an upper-right image and a lower-right image; in the latter case, the left-right overall similarity value and the upper-lower overall similarity value may be computed as follows:
the left-right similarity value of the upper-left and upper-right images and the left-right similarity value of the lower-left and lower-right images are computed from the pixel boundary values, and the average of these two left-right similarity values is the left-right overall similarity value;
the upper-lower similarity value of the upper-left and lower-left images and the upper-lower similarity value of the upper-right and lower-right images are computed from the pixel boundary values, and the average of these two upper-lower similarity values is the upper-lower overall similarity value.
Further, to improve the processing efficiency of step S13, the pixels contained in the image may also be filtered according to the boundary values extracted in step S12. The corresponding filtering method may include, but is not limited to, the following:
with the 0-255 brightness levels as the abscissa and the number of pixels whose boundary value matches each brightness level as the ordinate, a coordinate system is established and a distribution histogram of the pixel boundary values is computed. In this histogram, pixels are selected in descending order of boundary value until a predetermined number of pixels is reached, thereby determining a boundary threshold. Only pixels whose boundary value is greater than or equal to this boundary threshold take part in the comparison of step S13, i.e., the left-right overall similarity value and the upper-lower overall similarity value may be computed only from those pixels. For example, suppose 1000 pixels are to be chosen. In the above histogram, the highest boundary level is 250 with 5 pixels, the next is 249 with 10 pixels, and selection continues downward until, at level 60, exactly 1000 pixels have been chosen; the boundary threshold is then 60, and the pixels whose boundary value is greater than or equal to 60 are the ones to be compared. Through such filtering, only the pixels whose boundary information meets the predetermined requirement are further processed, and low-contrast parts of the image such as solid-color regions are filtered out, which reduces the subsequent workload and effectively improves the efficiency of the video identification.
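The following sketch illustrates such a histogram-based threshold selection under the same NumPy assumption as the previous snippets; the target count of 1000 pixels mirrors the example above and is only illustrative.

```python
# A sketch of the histogram-based threshold selection: count how many pixels
# fall on each boundary value (0-255), then walk down from the brightest level
# until a target pixel count is reached.
import numpy as np

def boundary_threshold(bvals: np.ndarray, target_count: int = 1000) -> int:
    """Return the smallest boundary value such that at least `target_count`
    pixels have a boundary value greater than or equal to it."""
    hist = np.bincount(bvals.ravel(), minlength=256)   # pixels per boundary level
    accumulated = 0
    for level in range(255, -1, -1):                   # from the brightest level downward
        accumulated += hist[level]
        if accumulated >= target_count:
            return level
    return 0   # fewer than target_count boundary pixels in total; keep them all
```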
Further, in the above processing, if no pixels whose boundary information meets the predetermined requirement can be obtained, the current frame cannot be used for video identification; in that case, a new frame of the video may be acquired and the subsequent image recognition process re-executed until a recognition result is obtained.
Step S14: judge the left-right overall similarity value and the upper-lower overall similarity value obtained in step S13, and identify the video according to the judgment result. The concrete recognition may include:
(1) if the left-right overall similarity value is less than or equal to a first predetermined value and the upper-lower overall similarity value is greater than or equal to a second predetermined value, the video to be identified is a 3D film source in the left-right split format;
(2) if the upper-lower overall similarity value is less than or equal to a third predetermined value and the left-right overall similarity value is greater than or equal to a fourth predetermined value, the video to be identified is a 3D film source in the top-bottom split format;
here, the second predetermined value and the fourth predetermined value may be the same value, e.g., 40, and the first predetermined value and the third predetermined value may be the same value, e.g., 10. It should be noted that those skilled in the art can choose the four predetermined values according to common knowledge and experience in the field; in general, when the similarity value of two images is less than or equal to 10, the two images are considered to be essentially identical, and when the similarity value of two images is greater than or equal to 40, the two images are considered to be different;
(3) if both the left-right overall similarity value and the upper-lower overall similarity value are greater than or equal to a fifth predetermined value, the video to be identified is an ordinary 2D video;
here, the fifth predetermined value may be the second predetermined value or the fourth predetermined value, or another predefined value.
In the judgment processing of step S14, if the left-right similarity value and the upper-lower similarity value obtained cannot satisfy any of the three relations (1)-(3) above, the next frame of the video is acquired and step S11 is executed again to restart the identification of the 3D film source format.
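For illustration, the decision rule of step S14 might be sketched as follows, using the example threshold values of 10 and 40 mentioned above; the return labels are assumptions introduced here.

```python
# A sketch of the decision rule in step S14. The thresholds follow the example
# values in the text (10 and 40); smaller similarity values mean the two halves
# are more alike.
def classify_source(lr_similarity: float, tb_similarity: float,
                    same_max: float = 10.0, diff_min: float = 40.0) -> str:
    """Map the left-right and top-bottom overall similarity values to a format label."""
    if lr_similarity <= same_max and tb_similarity >= diff_min:
        return "3D_LEFT_RIGHT"        # left and right halves match: side-by-side 3D
    if tb_similarity <= same_max and lr_similarity >= diff_min:
        return "3D_TOP_BOTTOM"        # top and bottom halves match: top-bottom 3D
    if lr_similarity >= diff_min and tb_similarity >= diff_min:
        return "2D"                   # neither split matches: ordinary 2D video
    return "UNDECIDED"                # no relation satisfied: try the next frame (step S15)
```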
Specifically, in the above step S13, to determine the left-right overall similarity value, the left image and the right image of the frame need to be compared; to determine the upper-lower overall similarity value, the upper image and the lower image of the frame need to be compared. Only pixels whose boundary value is greater than or equal to the boundary threshold need to take part in the comparison. The corresponding comparison may include:
first, in the left and right images, and in the upper and lower images, the brightness values of the pixels whose boundary information meets the predetermined requirement are compared row by row or column by column, yielding the luminance difference values (i.e., the differences between the brightness values of pixels at corresponding positions) of the pixels of the left and right images and of the pixels of the upper and lower images;
then, the mean difference value of the left and right images, i.e., the left-right overall similarity value, is calculated from the obtained luminance difference values of their pixels as the comparison result of the left and right images, and the mean difference value of the upper and lower images, i.e., the upper-lower overall similarity value, is calculated from the obtained luminance difference values of their pixels as the comparison result of the upper and lower images. The left-right overall similarity value and the upper-lower overall similarity value obtained serve as the basis for the subsequent video identification.
Further, suppose the two partial images are a first partial image (which may be the left image or the upper image above) and a second partial image (which may be the right image or the lower image above), each containing Y rows and X columns of pixels. The row-by-row comparison then amounts to performing Y row comparisons, each of which compares two arrays of length X (i.e., arrays of X pixels): for row N of the first partial image (N less than or equal to Y), the first of its X pixels is compared with the first of the X pixels of row N of the second partial image to obtain the luminance difference value of the two pixels, and the X pixels of row N are processed in turn until the comparison of row N is finished; row N+1 is then compared in the same way, and so on, until the comparison of the whole two partial images is completed. Similarly, the column-by-column comparison amounts to performing X column comparisons, each column containing Y pixels; the concrete column-by-column procedure is similar to the row-by-row procedure and is not repeated here.
In the above processing, considering the offset characteristic of 3D video, the row-by-row or column-by-column comparison of the brightness values of the pixels may include either of the following processing modes:
(1) Row-by-row comparison of the brightness values of the pixels
Row N of the first partial image (which may be the left image or the upper image) is compared with rows N-n to N+n of the second partial image (which may be the right image or the lower image), where n is a positive integer greater than or equal to 1, and row N of the second partial image is compared with rows N-n to N+n of the first partial image, yielding 4n+1 luminance difference values for row N of the two partial images (i.e., 2n+1 values for row N of the first partial image and 2n+1 for row N of the second partial image, the comparison of row N against row N being counted once). The minimum of these values is taken as the luminance difference value of row N of the two partial images. If, for example, the luminance difference value between row N of the first partial image and row N+n of the second partial image is the minimum, row N of the first partial image is most similar to row N+n of the second partial image, indicating an offset of n rows between the two partial images.
Specifically, the average of the luminance difference values of all rows of the two partial images (i.e., the first and second partial images) may be taken as the left-right overall similarity value or the upper-lower overall similarity value.
Further, in this row-by-row comparison, when a row of pixels of the first partial image is compared with the corresponding row of the second partial image, the M-th pixel of the row in the first partial image may also be compared with pixels M-m to M+m of the corresponding row in the second partial image (m being a positive integer greater than or equal to 1), and the M-th pixel of the row in the second partial image with pixels M-m to M+m of the corresponding row in the first partial image, yielding 4m+1 luminance difference values for the M-th pixel of the corresponding rows of the two partial images. The minimum of these values is taken as the luminance difference value of the M-th pixel of the two rows. If, for example, the luminance difference value between the M-th pixel of the row in the first partial image and the (M+m)-th pixel of the corresponding row in the second partial image is the minimum, these two pixels are the most similar, indicating that there may be an offset of m pixel positions along the row between the two partial images.
Here, the average of the luminance difference values of the pixels in the two corresponding rows may be taken as the luminance difference value of the two rows.
In short, in the row-by-row comparison, a row in the first partial image is not only matched against the exactly aligned row in the second partial image; it is also compared against rows staggered by a certain number of positions around the aligned row. If the stagger n is, say, 10, row N of one partial image is compared with the rows within 10 rows above and 10 rows below row N of the other. Likewise, when two rows are compared, a staggered comparison is also performed between pixels. Suppose the corresponding left row and right row are two arrays; when the M-th pixels of the arrays are to be compared, it is first checked whether the boundary value of the M-th pixel of the left row or of the right row is greater than the boundary threshold. If either is, then, taking the M-th pixel as the center in both rows, pixel comparisons are performed in the range from the (M-m)-th to the (M+m)-th pixel, and the minimum of the resulting comparison results is taken as the luminance difference value of the M-th pixel. The comparison then proceeds to the following pixels of the row according to the same principle until the comparison of the whole row is finished.
The row-by-row comparison is described below level by level. When the two partial images are compared row by row, the processing may include the following levels:
First level: the comparison between rows of the two partial images
In the row-to-row comparison, the row currently being compared in one of the two partial images needs to be compared both with the same row in the other image and with the n rows above and the n rows below it in the other image. Suppose the current row is row 5, n = 1, and the two partial images are a left image and a right image; then row 5 of the left image needs to be compared with rows 4, 5 and 6 of the right image, and row 5 of the right image with rows 4 and 6 of the left image (the comparison between row 5 and row 5 having already been made), i.e., 5 row-to-row comparisons are performed in total, yielding 5 comparison results, and the minimum of these results is selected as the comparison result of row 5.
The above row-to-row comparison needs to be performed for every row of the two partial images; the comparison of the two partial images is finished only when every row has finished the above row-to-row comparison. Further, in the row-to-row comparison, only pixels whose boundary value is greater than the boundary threshold in the row need to take part, and when the n rows above or below do not exist in the other image (i.e., the current row is at or near the edge), they are ignored.
The row-to-row comparison adopts a vertically translated manner of comparison, and can therefore adapt to the vertical translation that may exist between the two pictures of a 3D frame.
Second level: the comparison between pixels within the rows being compared
In the pixel-to-pixel comparison, the current pixel in a row of one image needs to be compared both with the pixel at the same position in the row of the other image and with the m pixels to its left and the m pixels to its right in that row. Suppose the two partial images are a left image and a right image, row 5 of the left image is currently being compared with row 6 of the right image, m = 1, and the current pixel is pixel 6; then pixel 6 of row 5 of the left image needs to be compared with pixels 5, 6 and 7 of row 6 of the right image, and pixel 6 of row 6 of the right image with pixels 5 and 7 of row 5 of the left image, i.e., 5 pixel-to-pixel comparisons are performed in total, yielding 5 comparison results, and the minimum of these results is selected as the luminance difference value of pixel 6 of row 5 of the left image and row 6 of the right image.
Specifically, with m = 1, the correspondence between the compared pixels may be as shown in Tables 1, 2 and 3 below:
Table 1 (the left row shifted one position to the right, so that pixel M of the left image's row is compared with pixel M+1 of the right image's row)
Left image, row 5:     0 1 2 3 4 5 6 7 8 9
Right image, row 6:  0 1 2 3 4 5 6 7 8 9

Table 2 (aligned, so that pixel M is compared with pixel M)
Left image, row 5:   0 1 2 3 4 5 6 7 8 9
Right image, row 6:  0 1 2 3 4 5 6 7 8 9

Table 3 (the right row shifted one position to the right, so that pixel M of the left image's row is compared with pixel M-1 of the right image's row)
Left image, row 5:   0 1 2 3 4 5 6 7 8 9
Right image, row 6:    0 1 2 3 4 5 6 7 8 9
Assume that in Tables 1, 2 and 3 the upper row is row 5 of the left image and the lower row is row 6 of the right image. From Table 1, pixel 6 of row 5 of the left image is compared with pixel 7 of row 6 of the right image, and pixel 6 of row 6 of the right image with pixel 5 of row 5 of the left image; from Table 2, pixel 6 of row 5 of the left image is compared with pixel 6 of row 6 of the right image; from Table 3, pixel 6 of row 5 of the left image is compared with pixel 5 of row 6 of the right image, and pixel 6 of row 6 of the right image with pixel 7 of row 5 of the left image. A total of 5 comparison results is thus obtained.
The above pixel-to-pixel comparison needs to be performed for every pixel of row 5 of the left image and row 6 of the right image whose boundary value is greater than the boundary threshold; the comparison of row 5 of the left image with row 6 of the right image is finished only when every such pixel has finished the pixel-to-pixel comparison. Further, in the pixel-to-pixel comparison, if the m pixels to the left or right do not exist in the row of the other image (i.e., the current pixel is at or near the edge, such as the edge pixels in Tables 1 and 3), they are ignored.
The pixel-to-pixel comparison adopts a horizontally translated manner of comparison, and can therefore adapt to the horizontal translation that may exist between the two pictures of a 3D frame.
It should be noted that, through the above row-to-row comparison and pixel-to-pixel comparison, the horizontal and vertical translation that may exist between the two pictures of a 3D frame can be handled well, i.e., a 3D video whose two partial images are translated relative to each other can still be identified effectively by the above processing.
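As an illustration of the staggered row-by-row comparison described above, the following simplified sketch allows an offset of up to n rows and m pixels and uses only pixels whose boundary value reaches the boundary threshold; the exact candidate set, the averaging order and the edge handling are simplifications made here, not the patent's reference implementation.

```python
# A simplified sketch of the staggered (dislocated) row comparison: each row
# pair is scored by the mean of per-pixel minimum luminance differences, each
# row keeps its best-matching staggered row, and the row scores are averaged.
def row_pair_diff(row_a, row_b, mask_a, m=1):
    """Mean of per-pixel minimum luminance differences between two rows,
    comparing each boundary pixel of row_a with pixels shifted by up to m in row_b."""
    diffs = []
    for c in range(len(row_a)):
        if not mask_a[c]:
            continue                                     # skip non-boundary pixels
        lo, hi = max(0, c - m), min(len(row_b) - 1, c + m)
        diffs.append(min(abs(int(row_a[c]) - int(row_b[k])) for k in range(lo, hi + 1)))
    return sum(diffs) / len(diffs) if diffs else None    # None: no boundary pixel in this row

def overall_similarity(y_a, y_b, b_a, b_b, threshold, n=10, m=1):
    """Average over rows of the best (minimum) staggered row difference."""
    mask_a, mask_b = b_a >= threshold, b_b >= threshold
    rows = y_a.shape[0]
    per_row = []
    for r in range(rows):
        candidates = []
        for r2 in range(max(0, r - n), min(rows - 1, r + n) + 1):
            d1 = row_pair_diff(y_a[r], y_b[r2], mask_a[r], m)   # row r of A vs row r2 of B
            d2 = row_pair_diff(y_b[r], y_a[r2], mask_b[r], m)   # row r of B vs row r2 of A
            candidates += [d for d in (d1, d2) if d is not None]
        if candidates:
            per_row.append(min(candidates))              # keep the best-matching alignment
    return sum(per_row) / len(per_row) if per_row else 255.0
```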
(2) Column-by-column comparison of the brightness values of the pixels
Column N of the first partial image (which may be the left image or the upper image) is compared with columns N-n to N+n of the second partial image (which may be the right image or the lower image), where n is a positive integer greater than or equal to 1, and column N of the second partial image is compared with columns N-n to N+n of the first partial image, yielding 4n+1 luminance difference values for column N of the two partial images (i.e., 2n+1 values for column N of the first partial image and 2n+1 for column N of the second partial image, the column N to column N comparison being counted once). The minimum of these values is taken as the luminance difference value of column N of the two partial images. If, for example, the luminance difference value between column N of the first partial image and column N+n of the second partial image is the minimum, column N of the first partial image is most similar to column N+n of the second partial image, indicating that there may be an offset of n columns between the two partial images.
Specifically, the average of the luminance difference values of all columns of the two partial images (i.e., the first and second partial images) may be taken as the left-right overall similarity value or the upper-lower overall similarity value.
Further, in this column-by-column comparison, when a column of pixels of the first partial image is compared with the corresponding column of the second partial image, the M-th pixel of the column in the first partial image is also compared with pixels M-m to M+m of the corresponding column in the second partial image (m being a positive integer greater than or equal to 1), and the M-th pixel of the corresponding column in the second partial image with pixels M-m to M+m of the column in the first partial image, yielding 4m+1 luminance difference values for the M-th pixel of the corresponding columns of the two partial images. The minimum of these values is taken as the luminance difference value of the M-th pixel of the two columns. If, for example, the luminance difference value between the M-th pixel of column N in the first partial image and the (M+m)-th pixel of column N in the second partial image is the minimum, these two pixels are the most similar, indicating that there may be an offset of m pixel positions along the column between the two partial images.
Here, the average of the luminance difference values of the pixels in the two corresponding columns may be taken as the luminance difference value of the two columns.
In short, in the column-by-column comparison, a column in the first partial image is not only matched against the exactly aligned column in the second partial image; it is also compared against columns staggered by a certain number of positions around the aligned column. If the stagger n is, say, 10, column N of one partial image is compared with the columns within 10 columns to the left and 10 columns to the right of column N of the other. Likewise, when two columns are compared, a staggered comparison is also performed between pixels: for example, if the boundary value of the M-th pixel of the left column or of the right column is greater than the boundary threshold, then, taking the M-th pixel as the center in both columns, pixel comparisons are performed in the range from the (M-m)-th to the (M+m)-th pixel, and the minimum of the resulting comparison results is taken as the luminance difference value of the M-th pixel. The comparison then proceeds to the following pixels of the column according to the same principle until the comparison of the whole column is finished.
Similar to the row-by-row comparison described above, the column-by-column comparison likewise comprises a comparison between columns of the two partial images and a comparison between pixels within the columns being compared. The concrete procedure is similar to the row-by-row comparison, so only a brief description follows.
First level: the comparison between columns of the two partial images
In the column-to-column comparison, the column currently being compared in one of the two partial images needs to be compared both with the same column in the other image and with the n columns to its left and right in the other image. Suppose the current column is column 5, n = 1, and the two partial images are a left image and a right image; then column 5 of the left image needs to be compared with columns 4, 5 and 6 of the right image, and column 5 of the right image with columns 4 and 6 of the left image, i.e., 5 comparisons are performed in total, yielding 5 comparison results, and the minimum of these results is selected as the comparison result of column 5.
The above column-to-column comparison needs to be performed for every column of the two partial images. Further, in the column-to-column comparison, only pixels whose boundary value is greater than the boundary threshold in the column need to take part, and when the n columns to the left or right do not exist in the other image, they are ignored.
The column-to-column comparison adopts a horizontally translated manner of comparison, and can therefore adapt to the horizontal translation that may exist between the two pictures of a 3D frame.
Second level: the comparison between pixels within the columns being compared
In the pixel-to-pixel comparison, the current pixel in a column of one image needs to be compared both with the pixel at the same position in the column of the other image and with the m pixels above and below it in that column. Suppose the two partial images are a left image and a right image, column 5 of the left image is currently being compared with column 6 of the right image, m = 1, and the current pixel is pixel 6; then pixel 6 of column 5 of the left image needs to be compared with pixels 5, 6 and 7 of column 6 of the right image, and pixel 6 of column 6 of the right image with pixels 5 and 7 of column 5 of the left image, i.e., 5 pixel-to-pixel comparisons are performed in total, yielding 5 comparison results, and the minimum of these results is selected as the luminance difference value of pixel 6 of column 5 of the left image and column 6 of the right image.
The above pixel-to-pixel comparison needs to be performed for every pixel of column 5 of the left image and column 6 of the right image whose boundary value is greater than the boundary threshold. Further, in the pixel-to-pixel comparison, if the m pixels above or below do not exist in the column of the other image, they are ignored.
The pixel-to-pixel comparison adopts a vertically translated manner of comparison, and can therefore adapt to the vertical translation that may exist between the two pictures of a 3D frame.
Likewise, through the above column-to-column comparison and pixel-to-pixel comparison, the horizontal and vertical translation that may exist between the two pictures of a 3D frame can be handled well, i.e., a 3D video whose two partial images are translated relative to each other can still be identified effectively by the above processing.
To facilitate the above handling of the offset characteristic of 3D video, in the embodiment of the invention, when the pixels whose boundary information meets the predetermined requirement are extracted from the two partial images for comparison, a predetermined number of rows or columns of pixels may also be reserved at the edges of the two partial images and excluded from the extraction, so that the staggered row or staggered column comparisons can be performed in the subsequent processing.
Further, when the pixels whose boundary information meets the predetermined requirement are extracted from the two partial images for comparison, if the current row or column contains no pixel whose boundary information meets the predetermined requirement, the search for such pixels continues in the next row or column.
The 3D video identification process provided by the above embodiment of the invention can identify whether the whole image is similar, which ensures the accuracy of the recognition result. Moreover, in the identification process, the two partial images obtained by splitting are compared with translations within a certain range in the vertical and horizontal directions, so that an accurate comparison result can be obtained even if there is an offset between the two partial images of the 3D video. Furthermore, since the identification process provided by the embodiment of the invention also performs image boundary processing based on the brightness information of the pixels, i.e., identification is based mainly on the content easily perceived by the human eye, the similarity between pictures containing both solid colors and text can be identified accurately and efficiently. In addition, in the embodiment of the invention, by reasonably setting the predetermined value that defines the maximum difference at which two partial images are regarded as similar, the case where the difference between the two partial images is less than the predetermined value can be accurately identified as the two partial images being similar, which in turn makes it possible to identify the film source format of the 3D video accurately.
The concrete implementation of the method for identifying a 3D video provided by the embodiment of the invention is described in further detail below in conjunction with the drawings.
As shown in Fig. 2, the corresponding process of identifying a 3D video may comprise:
Step 21: process a frame of image acquired from the video so as to improve the efficiency of the subsequent processing;
Specifically, the frame may be scaled down. How far it is scaled down may be determined according to the available computing power: if the computing power is weak, the frame may be scaled down more, to improve the efficiency of the subsequent 3D video identification; if the computing power is strong, the frame may be scaled down less, or not at all, to guarantee the quality of the 3D video identification;
Step 22: convert the processed image into the YUV format (i.e., a grayscale map), so that the subsequent processing can be carried out on the brightness information in the YUV image;
Step 23: in the converted YUV image, extract the boundary information of the image from the brightness information of each pixel; in the image obtained after the extraction, only the boundaries are bright and the remaining positions are black;
For example, the boundary information of the image may be extracted, without limitation, as follows: for each pixel, take the difference between the maximum and the minimum of the brightness of the pixel and of its four neighbors (above, below, left and right), and determine the boundary information of the image from the size of this difference; alternatively, the difference between the brightness of each pixel and the brightness of its upper-left neighbor may be calculated, and the boundary information determined from this difference; obviously, other edge (i.e., boundary) detection algorithms may also be used to determine the boundary information of the image;
Step 24: compute the brightness distribution histogram of the boundaries of the image based on the extracted boundary information, and calculate the boundary threshold to be used according to the required number of boundary pixels; alternatively, the boundary threshold may be predetermined. The boundary threshold is a brightness value;
Specifically, if the boundary threshold needs to be determined, the concrete procedure may include, but is not limited to, the following:
(1) a histogram of the image is computed, giving the number of pixels for each brightness value, i.e., how many pixels in the image have each of the different brightness values 0-255; in the histogram computation, considering the offset characteristic of the 3D image, the border of the picture may be skipped, i.e., the border part is excluded from the histogram, and the width of the skipped border may be determined as required;
(2) in order to extract a certain number of the most obvious boundary pixels, counting may start from the brightest boundary pixels and accumulate; when the count reaches a certain number (i.e., the required number of boundary pixels), the qualifying minimum brightness value is determined, and this minimum brightness value is the boundary threshold. The required number of boundary pixels may be limited by two parameters: one parameter is the minimum total number of boundary pixels to be processed, and the other is the required percentage of the total number of pixels; for example, 50% of the boundary pixels are required to take part in the comparison while the number of participating boundary pixels stays below 1000, etc.; alternatively, the required number may be limited by only one of these parameters;
If the boundary threshold is set in advance, it may be determined according to the human eye's perception of image brightness; for example, since pixels with a brightness value below 30 are not clearly perceived by the human eye, the boundary threshold may be set to a value greater than 30;
Through the processing of this step, some inconspicuous boundaries can be filtered out, so that pixels below the boundary threshold are not processed in the subsequent steps, which reduces the subsequent workload and helps improve the processing efficiency;
Specifically, the original frame acquired is shown in Fig. 3; after the processing of steps 23 and 24, the image information shown in Fig. 4, in which only the font part is lit up, can be obtained. It should be noted that in Fig. 3, Fig. 4 and the subsequent image diagrams, the black parts represent larger brightness values in the image and the white parts represent smaller brightness values; for example, the black edge lines of the characters in Fig. 4 are the parts with larger brightness values (i.e., brightness values greater than the boundary threshold determined above);
It should also be noted that pixels below the boundary threshold are not removed (i.e., their brightness information is retained); it is only that, in the subsequent image comparison, a pixel below the boundary threshold is not regarded as a boundary, i.e., its brightness information is not actively extracted for comparison with the corresponding pixel of the other image. For example, if the calculated boundary threshold is 100, a pixel below 100 is not regarded as a boundary; in the subsequent comparison, if the pixels extracted from the left image are compared with the right image, a pixel of the left image below 100 is not extracted for comparison with the right image, but when a boundary pixel of the left image is compared with the right image, the brightness value of its corresponding pixel in the right image may well be below 100 (for example, a pixel of the left image has a brightness value of 100 and the corresponding pixel of the right image has a brightness value of 99; the difference between the two brightness values is then 1, indicating that the two pixels are very similar);
Step 25: the above frame image is divided into two partial images (namely a left image and a right image, or an upper image and a lower image), and the brightness values of the pixels in the two partial images are compared according to the boundary threshold. According to the comparison result, if the brightness differences between the pixels greater than the boundary threshold in the two partial images are smaller than a predetermined value, that is, the two partial images are similar, the corresponding video is determined to be a 3D video;
That is to say, the way the image is partitioned depends on how a typical 3D video frame is constructed. For example, if the 3D video is split left-right, the image is divided into left and right partial images in this step, as shown in Figure 5; if the 3D video is split up-down, the image is divided into upper and lower partial images, as shown in Figure 6; if the 3D video may be split either left-right or up-down, the image is divided both into left and right partial images and into upper and lower partial images, as shown in Figure 7. Obviously, if a 3D video uses one or more other partitioning modes, the corresponding partitioning schemes need to be adopted in this step to obtain the corresponding partial images;
For the two partial images obtained by the division, as shown in Figure 8 and Figure 9, a blank margin can also be left at the edges of the image so that only a central portion is processed, that is, the comparison of the two partial images is carried out only on the central portion of each image; this ensures that comparable image pixels still exist after translation in the subsequent processing;
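A minimal sketch of this splitting is given below, assuming the margin is a fixed fraction of the image size trimmed from all four sides of each half (the original text does not give a concrete value):

```python
def split_with_margin(y, mode, margin=0.1):
    """Split a luminance (or boundary-value) map into two partial images
    (left/right or upper/lower), keeping only a central region of each half
    so that shifted rows/columns can still be compared later."""
    h, w = y.shape
    mh, mw = int(h * margin), int(w * margin)
    if mode == 'left_right':
        half = w // 2
        a = y[mh:h - mh, mw:half - mw]                 # centre of the left half
        b = y[mh:h - mh, half + mw:2 * half - mw]      # centre of the right half
    else:                                              # 'up_down'
        half = h // 2
        a = y[mh:half - mh, mw:w - mw]                 # centre of the upper half
        b = y[half + mh:2 * half - mh, mw:w - mw]      # centre of the lower half
    return a, b
```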
The two partial images obtained by the division can be compared row by row or column by column. The following takes the row-by-row comparison as an example. In the row-by-row comparison, the two partial images are searched row by row within the set search range (namely the central portion of the image), and the similarity between corresponding rows of the two partial images is obtained. The row-by-row comparison procedure may include the following:
Suppose the two partial images are image A and image B. First, in the top row of the search range of image A, the pixels whose brightness values are greater than the boundary threshold are selected, and each of these pixels is compared with the pixel at the corresponding position in image B (whose brightness value may be no greater than the boundary threshold); specifically, the difference between the brightness values of the two pixels (the brightness difference) is calculated. Comparing pixel by pixel (that is, each pixel whose brightness value is greater than the boundary threshold) yields one or more differences for that row. After one row has been compared, a new row is selected from top to bottom and the same comparison is performed to obtain one or more differences for the new row, until every row in the search range of image A has been compared;
After the multiple differences in the above comparison result are obtained, the mean difference of the two partial images is calculated. The mean difference may be calculated as follows: the obtained differences are summed, and the sum is divided by the total number of compared pixels whose brightness values are greater than the boundary threshold; the result is the mean difference. The mean difference measures the similarity of the two partial images; for example, if the mean difference is smaller than a predetermined value (such as 10), the two partial images are determined to be similar; otherwise, they are determined to be dissimilar.
The column-by-column comparison procedure is similar to the row-by-row procedure described above and is therefore not described in detail.
It should be noted that, in the above comparison, a certain positional translation may exist between the two partial images of a 3D video. Therefore, when a row of image A is compared, it can be compared not only with the row at the corresponding position in image B but also with a predetermined number of rows above and below that row; for example, as shown in Figure 10, if a shift of one row up or down is allowed, row 1 of image A is compared with rows 0, 1 and 2 of image B, three rows in total, and the result of the row with the smallest difference is taken as the difference of the currently compared row. Similarly, in the column-by-column comparison, a column of image A is compared with the column at the corresponding position in image B and with a predetermined number of columns to its left and right, and the result of the column with the smallest difference is taken as the difference of the current column.
In the above comparison, if no pixel with a brightness value greater than the boundary threshold is found in a certain row of image A, that row is not compared, so that solid-colour portions are filtered out. For example, when pixels of image A are to be extracted and compared with pixels of image B, it is first determined whether the current row of image A contains a pixel greater than or equal to the previously calculated boundary threshold; if it does, the comparison of that row of pixels continues, and if it does not, that row of pixels is not compared.
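Putting the row-by-row comparison together, a possible sketch is given below: for each row in the search range, only the pixels of image A whose boundary value reaches the threshold are compared, the row is matched against the corresponding row of image B and the rows within a predetermined shift of it, the smallest row difference is kept, rows without any boundary pixel are skipped, and the sum of the kept differences divided by the number of compared pixels gives the mean difference. The pixel-wise horizontal shift described in claims 8 and 10 is omitted for brevity; bvals_a is assumed to be the boundary-value map split in the same way as image A, and all names are illustrative.

```python
import numpy as np

def mean_difference(a, b, bvals_a, threshold, shift=1):
    """Row-by-row comparison of two partial images A and B.

    Only pixels of A whose boundary value reaches the threshold are compared;
    each row of A is matched against the rows of B within +/- shift of the
    same position and the smallest row difference is kept.  Rows of A with
    no boundary pixel are skipped.  Returns the mean brightness difference
    over all compared pixels."""
    h = min(a.shape[0], b.shape[0])
    total_diff, total_count = 0, 0
    for r in range(h):
        mask = bvals_a[r] >= threshold          # boundary pixels of this row of A
        if not mask.any():
            continue                            # pure-colour row: nothing to compare
        best_sum, best_cnt = None, 0
        for rb in range(max(0, r - shift), min(h, r + shift + 1)):
            diffs = np.abs(a[r, mask].astype(int) - b[rb, mask].astype(int))
            if best_sum is None or diffs.sum() < best_sum:
                best_sum, best_cnt = diffs.sum(), diffs.size
        total_diff += best_sum
        total_count += best_cnt
    return total_diff / total_count if total_count else float('inf')
```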
In the above step 25, the video to be identified is partitioned according to the different partitioning modes of 3D video, and the partial images obtained by the partitioning are compared to determine whether they are similar. If multiple partitioning modes are used (for example both left-right and up-down partitioning), the comparison results of the different partitioning modes are analysed together to determine whether the video is a 3D video or a 2D video and, if it is a 3D video, its specific partitioning mode, for example a left-right similar 3D video, an up-down similar 3D video, or an ordinary 2D video. Specifically, the mean difference of the left-right partitioned images and the mean difference of the up-down partitioned images are obtained separately. If the mean difference of the left-right partitioned images is smaller than the predetermined value and the mean difference of the up-down partitioned images is greater than the predetermined value, the video is determined to be a left-right partitioned 3D video; if the mean difference of the left-right partitioned images is greater than the predetermined value and the mean difference of the up-down partitioned images is smaller than the predetermined value, the video is determined to be an up-down partitioned 3D video; if both mean differences are greater than the predetermined value, the video is determined to be a 2D video; if both mean differences are smaller than the predetermined value, this comparison result cannot determine whether the current video is a 3D video or a 2D video, and another frame image needs to be obtained from the video so that the 3D-video identification process is performed again, until the video is determined to be a 3D video or a 2D video.
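The decision logic of this step can be summarised by the following sketch; the predetermined value of 10 is the example mentioned earlier, and the return labels are illustrative:

```python
def classify(lr_mean_diff, ud_mean_diff, predetermined=10):
    """Decide the source format from the two mean differences.

    Returns 'left_right_3d', 'up_down_3d', '2d', or 'undecided'
    (the last meaning another frame must be examined)."""
    lr_similar = lr_mean_diff < predetermined
    ud_similar = ud_mean_diff < predetermined
    if lr_similar and not ud_similar:
        return 'left_right_3d'
    if ud_similar and not lr_similar:
        return 'up_down_3d'
    if not lr_similar and not ud_similar:
        return '2d'
    return 'undecided'   # both similar: take another frame and try again
```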
It should be noted that, if the video is not identified as a 3D video or a 2D video by the above steps 21 to 25, another frame image can be extracted from the video and the processing of steps 21 to 25 can be executed again to continue the identification process until a result is obtained.
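A possible top-level loop tying the sketches above together is shown below; get_frame_luma is a hypothetical helper returning the luminance plane of the k-th frame (or None when no frame is available), and both the frame cap and the 2D fallback are assumptions, since the original text simply repeats the process until a result is obtained:

```python
def identify_source(video, max_frames=30):
    """Try successive frames until the source format can be decided."""
    for k in range(max_frames):
        y = get_frame_luma(video, k)              # hypothetical frame grabber
        if y is None:
            break
        bv = boundary_values(y)
        thr = boundary_threshold(bv)
        left, right = split_with_margin(y, 'left_right')
        left_bv, _ = split_with_margin(bv, 'left_right')
        upper, lower = split_with_margin(y, 'up_down')
        upper_bv, _ = split_with_margin(bv, 'up_down')
        result = classify(mean_difference(left, right, left_bv, thr),
                          mean_difference(upper, lower, upper_bv, thr))
        if result != 'undecided':
            return result
    return '2d'   # assumed fallback when no examined frame is decisive
```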
With the above specific embodiment for identifying 3D or 2D video, whether the video to be played is a 3D video or a 2D video can be identified quickly and accurately, so that the terminal device can automatically select the correct playing mode to play the corresponding video for the user, improving the user's viewing experience.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that can readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for automatically identifying a 3D film source format, characterized in that it comprises the following steps:
S11: obtaining a frame of image in the video and converting it into a format containing the brightness information of the image;
S12: extracting the boundary values of the pixels in the video image according to the brightness information of the converted image;
S13: splitting the above frame of image symmetrically into a left image and a right image, and calculating the left-right overall similarity value of the left image and the right image according to the boundary values of the pixels; also splitting the above frame of image symmetrically into an upper image and a lower image, and calculating the upper-lower overall similarity value of the upper image and the lower image according to the boundary values of the pixels;
S14: in step S13, if the left-right overall similarity value is less than or equal to a first predetermined value and the upper-lower overall similarity value is greater than or equal to a second predetermined value, the video is a 3D film source in left-right split format; if the upper-lower overall similarity value is less than or equal to a third predetermined value and the left-right overall similarity value is greater than or equal to a fourth predetermined value, the video is a 3D film source in upper-lower split format; and if both the left-right overall similarity value and the upper-lower overall similarity value are greater than or equal to a fifth predetermined value, the video is an ordinary 2D video.
2. The method for automatically identifying a 3D film source format according to claim 1, characterized in that it further comprises, after step S14, the following step:
S15: in step S14, if the obtained left-right similarity value and upper-lower similarity value cannot satisfy any of the above three relationships, obtaining the next frame image of the video and returning to step S11.
3. The method for automatically identifying a 3D film source format according to claim 1, characterized in that in step S13 the frame of image is divided equally in a cross pattern into four images: an upper-left image, a lower-left image, an upper-right image and a lower-right image; the left-right similarity value of the upper-left image and the upper-right image and the left-right similarity value of the lower-left image and the lower-right image are calculated according to the boundary values of the pixels, and the mean of these two left-right similarity values is said left-right overall similarity value;
the upper-lower similarity value of the upper-left image and the lower-left image and the upper-lower similarity value of the upper-right image and the lower-right image are calculated according to the boundary values of the pixels, and the mean of these two upper-lower similarity values is said upper-lower overall similarity value.
4. The method for automatically identifying a 3D film source format according to claim 1, characterized in that in step S11 the frame of image obtained from the video is first scaled down, the scaled-down image is converted into YUV format, and the brightness information in the YUV-format image is then retained.
5. The method for automatically identifying a 3D film source format according to claim 1, characterized in that in step S12, among the five pixels formed by any pixel and its four adjacent pixels above, below, to the left and to the right, the difference between the maximum brightness value and the minimum brightness value is the boundary value of the pixel at that position.
6. The method for automatically identifying a 3D film source format according to any one of claims 1 to 5, characterized in that it further comprises, between step S12 and step S13, the following step:
Sa: establishing a coordinate system with the 0-255 brightness levels as the abscissa and the number of pixels whose boundary value matches each brightness level as the ordinate, and counting the distribution histogram of the pixel boundary values; in this distribution histogram, pixels are selected in descending order of boundary value until a predetermined number of pixels is reached, thereby determining the boundary threshold of the pixels; the pixels whose boundary values are greater than or equal to this boundary threshold are the pixels that enter the comparison of step S13.
7. The method for automatically identifying a 3D film source format according to claim 6, characterized in that in step S13,
when the left image and the right image, or the upper image and the lower image, are compared, the pixels of the N-th row of the first partial image are compared respectively with the pixels of rows N-n to N+n of the second partial image, and the pixels of the N-th row of the second partial image are compared respectively with the pixels of rows N-n to N+n of the first partial image, so that 4n+1 brightness difference values corresponding to the N-th row of pixels are obtained; among these 4n+1 brightness difference values, the minimum brightness difference value is taken as the brightness difference value of the N-th row of pixels of the two partial images;
the mean of the brightness difference values of the rows of pixels in the two partial images is the left-right overall similarity value or the upper-lower overall similarity value;
wherein the left image or the upper image is defined as the first partial image, and the right image or the lower image is defined as the second partial image.
8. The method for automatically identifying a 3D film source format according to claim 7, characterized in that in the above step S13,
in the process of comparing a row of pixels of the first partial image with the corresponding row of pixels of the second partial image, the M-th pixel of said row whose boundary value is greater than or equal to the boundary threshold is compared respectively with the (M-m)-th to (M+m)-th pixels of said other row, and the M-th pixel of said other row is compared respectively with the (M-m)-th to (M+m)-th pixels of said row, so that 4m+1 brightness difference values corresponding to the M-th pixel of the corresponding rows of the two partial images are obtained; among these 4m+1 brightness difference values, the minimum brightness difference value is taken as the brightness difference value of the M-th pixels of said row and said other row;
the mean of the brightness difference values of the pixels in said row and said other row is the brightness difference value of these two rows of pixels.
9. The method for automatically identifying a 3D film source format according to claim 6, characterized in that in step S13,
when the left image and the right image, or the upper image and the lower image, are compared column by column, the pixels of the N-th column of the first partial image are compared respectively with the pixels of columns N-n to N+n of the second partial image, and the pixels of the N-th column of the second partial image are compared respectively with the pixels of columns N-n to N+n of the first partial image, so that 4n+1 brightness difference values corresponding to the N-th column of pixels are obtained; among these 4n+1 brightness difference values, the minimum brightness difference value is taken as the brightness difference value of the N-th column of pixels of the two partial images;
the mean of the brightness difference values of the columns of pixels in the two partial images is the left-right overall similarity value or the upper-lower overall similarity value;
wherein the left image or the upper image is defined as the first partial image, and the right image or the lower image is defined as the second partial image.
10. The method for automatically identifying a 3D film source format according to claim 9, characterized in that in the above step S13,
in the process of comparing a column of pixels of the first partial image with the corresponding column of pixels of the second partial image, the M-th pixel of said column whose boundary value is greater than or equal to the boundary threshold is compared respectively with the (M-m)-th to (M+m)-th pixels of said other column, and the M-th pixel of said other column is compared respectively with the (M-m)-th to (M+m)-th pixels of said column, so that 4m+1 brightness difference values corresponding to the M-th pixel of the corresponding columns of the two partial images are obtained; among these 4m+1 brightness difference values, the minimum brightness difference value is taken as the brightness difference value of the M-th pixels of said column and said other column;
the mean of the brightness difference values of the pixels in said column and said other column is the brightness difference value of these two columns of pixels.