CN108694031B - Identification method and device for three-dimensional display picture - Google Patents


Info

Publication number
CN108694031B
CN108694031B (application CN201710236021.7A)
Authority
CN
China
Prior art keywords: pixel, view, value, difference, perspective
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710236021.7A
Other languages
Chinese (zh)
Other versions
CN108694031A (en
Inventor
李聪 (Li Cong)
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201710236021.7A priority Critical patent/CN108694031B/en
Priority to JP2019556179A priority patent/JP2020517025A/en
Priority to PCT/CN2017/106811 priority patent/WO2018188297A1/en
Priority to KR1020197032801A priority patent/KR20190136068A/en
Publication of CN108694031A publication Critical patent/CN108694031A/en
Application granted granted Critical
Publication of CN108694031B publication Critical patent/CN108694031B/en
Current legal status: Active

Classifications

    • G06F3/1415: Digital output to display device with means for detecting differences between the image stored in the host and the images displayed on the displays (G Physics › G06 Computing › G06F Electric digital data processing › G06F3/00 Input/output arrangements › G06F3/14 Digital output to display device; cooperation and interconnection of the display device with other functional units)
    • G06T7/00: Image analysis (G Physics › G06T Image data processing or generation, in general)
    • H04N13/356: Image reproducers having separate monoscopic and stereoscopic modes (H Electricity › H04N Pictorial communication, e.g. television › H04N13/00 Stereoscopic video systems; multi-view video systems › H04N13/30 Image reproducers)


Abstract

The invention discloses an identification method for three-dimensional display pictures, which comprises the following steps: extracting a first perspective view and a second perspective view of a target picture; acquiring a difference degree parameter of the first perspective view and the second perspective view, where the difference degree parameter is used to represent the degree of difference between the first perspective view and the second perspective view; and, when the difference degree parameter is within a preset range of values, identifying the target picture as a picture for three-dimensional display. The invention also discloses an apparatus for identifying three-dimensional display pictures.

Description

Identification method and device for three-dimensional display picture
Technical Field
The invention relates to the technical field of images, in particular to a method and a device for identifying a three-dimensional display picture.
Background
With the development of science and technology, three-dimensional image display has progressed from requiring viewers to wear three-dimensional glasses to requiring no glasses at all, that is, to naked-eye three-dimensional display. Naked-eye three-dimensional display frees viewers from the constraint of three-dimensional glasses and therefore has great advantages.
In the related art, more and more display devices support both a two-dimensional picture display mode and a three-dimensional picture display mode. Before displaying a picture, such a display device needs to select the display mode corresponding to the type of the picture to be displayed, such as a two-dimensional picture or a three-dimensional picture. In current display devices, however, a picture to be three-dimensionally displayed must first be manually marked or named; three-dimensional display is then used for marked pictures and ordinary two-dimensional display for unmarked ones. Moreover, the identification mark for three-dimensional display pictures is stored only in a database, and the pictures themselves carry no corresponding attribute. If a picture is moved out of the device and later moved in again, its previously assigned three-dimensional mark disappears, and the picture to be three-dimensionally displayed must be manually re-marked or re-named before it can be displayed. Current electronic devices with three-dimensional display are therefore cumbersome to operate when processing three-dimensional display pictures.
Disclosure of Invention
In view of this, embodiments of the present invention are expected to provide a method and an apparatus for recognizing a three-dimensional display picture, which can automatically recognize the picture for three-dimensional display and improve the operation convenience.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the embodiment of the invention provides an identification method for a three-dimensional display picture, which comprises the following steps:
extracting a first view angle diagram and a second view angle diagram of a target picture;
acquiring a difference degree parameter of the first perspective view and the second perspective view, wherein the difference degree parameter is used for representing the difference degree of the first perspective view and the second perspective view;
and when the difference degree parameter is within a preset range value, identifying the target picture as a picture for three-dimensional display.
Optionally, the method further includes:
and when the difference degree parameter is not within the preset range value, identifying the target picture as a picture which is not used for three-dimensional display.
Optionally, the acquiring a difference degree parameter between the first perspective view and the second perspective view includes:
for each of N preset deviation values, acquiring the sum of pixel difference values of the first view angle diagram and the second view angle diagram, where N is an integer greater than or equal to 1;
obtaining the minimum value of each pixel difference sum to obtain the minimum pixel difference sum;
determining the minimum pixel difference sum as the difference degree parameter.
Optionally, the obtaining a sum of pixel differences of the first view map and the second view map for each deviation value includes:
acquiring a first graphic parameter value of each pixel of the first visual angle diagram;
acquiring a second graphic parameter value of each pixel of the second visual angle diagram;
and calculating the pixel difference sum according to a preset operation based on the deviation value, the first graphic parameter value and the second graphic parameter value.
Optionally, the calculating the pixel difference sum according to a preset operation based on the deviation value, the first graphic parameter value, and the second graphic parameter value includes:
forming a pixel pair from an ith pixel in the first visual angle image and a jth pixel in the second visual angle image, where i and j range over the pixel coordinates of the first visual angle image and the second visual angle image respectively, the abscissa of the ith pixel differs from the abscissa of the jth pixel by a deviation value d, the ordinates are the same, and the value of d lies within a preset deviation interval;
calculating, for each pixel pair, a square of a difference between a first graphics parameter value for the ith pixel and a second graphics parameter value for the jth pixel;
summing the squares of each of the differences to obtain the pixel difference value sum.
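The computation described in the steps above can be sketched in Python as follows, assuming each perspective view is a grayscale image represented as an equal-sized 2-D list of gray values. The function names and the boundary handling (skipping pairs whose shifted abscissa falls outside the view) are illustrative assumptions, not prescribed by the claims.

```python
def pixel_difference_sum(view1, view2, d):
    """Sum of squared gray-value differences for horizontal deviation d.

    Pairs pixel (x, y) of view1 with pixel (x + d, y) of view2:
    same ordinate, abscissae differing by the deviation value d.
    """
    height, width = len(view1), len(view1[0])
    total = 0
    for y in range(height):
        for x in range(width):
            x2 = x + d
            if 0 <= x2 < width:  # skip pairs shifted outside the view
                diff = view1[y][x] - view2[y][x2]
                total += diff * diff
    return total

def difference_degree(view1, view2, deviations):
    """Minimum pixel difference sum over the N preset deviation values."""
    return min(pixel_difference_sum(view1, view2, d) for d in deviations)
```

For two views of the same scene shifted by one of the preset deviation values, the minimum sum is small; for unrelated halves of a two-dimensional picture it stays large, which is exactly the property the preset range check exploits.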
An embodiment of the present invention further provides an apparatus for identifying a three-dimensional display picture, where the apparatus includes:
the extraction module is used for extracting a first view angle image and a second view angle image of the target picture;
an obtaining module, configured to obtain a difference degree parameter of the first perspective view and the second perspective view, where the difference degree parameter is used to indicate a difference degree between the first perspective view and the second perspective view;
and the first identification module is used for identifying the target picture as a picture for three-dimensional display when the difference degree parameter is within a preset range value.
Optionally, the apparatus further comprises:
and the second identification module is used for identifying the target picture as a picture which is not used for three-dimensional display when the difference degree parameter is not within a preset range value.
Optionally, the obtaining module includes:
the first obtaining submodule is used for obtaining the sum of pixel difference values of the first visual angle diagram and the second visual angle diagram for each deviation value based on preset N deviation values, wherein N is an integer larger than or equal to 1;
the second obtaining submodule is used for obtaining the minimum value of each pixel difference sum to obtain the minimum pixel difference sum;
a determining submodule for determining the minimum pixel difference sum as the difference degree parameter.
Optionally, the first obtaining sub-module includes:
the first acquisition unit is used for acquiring a first graphic parameter value of each pixel of the first perspective diagram;
the second acquisition unit is used for acquiring a second graphic parameter value of each pixel of the second perspective view;
and the calculating unit is used for calculating the pixel difference sum according to preset operation based on the deviation value, the first graphic parameter value and the second graphic parameter value.
Optionally, the computing unit includes:
the composition subunit is used for forming a pixel pair from an ith pixel in the first visual angle diagram and a jth pixel in the second visual angle diagram, where i and j range over the pixel coordinates of the first visual angle diagram and the second visual angle diagram respectively, the abscissa of the ith pixel differs from the abscissa of the jth pixel by a deviation value d, the ordinates are the same, and the value of d lies within a preset deviation interval;
a calculation subunit, configured to calculate, for each pixel pair, a square of a difference between a first graphics parameter value of the ith pixel and a second graphics parameter value of the jth pixel;
and the summation subunit is used for summing the squares of the differences to obtain the pixel difference value sum.
According to the identification method and apparatus for three-dimensional display pictures provided by the embodiments of the present invention, a first visual angle image and a second visual angle image of a target picture are obtained, and when the degree of difference between the first visual angle image and the second visual angle image is within a preset range of values, the target picture is identified as a picture for three-dimensional display. Pictures for three-dimensional display can thus be identified automatically, which solves the inconvenience caused when such pictures cannot be automatically identified and improves operational convenience.
Drawings
Fig. 1 is a schematic diagram of an identification method for three-dimensional display pictures according to an embodiment of the present invention;
fig. 2A is a schematic diagram of an identification method for three-dimensional display pictures according to a second embodiment of the present invention;
fig. 2B is a schematic diagram of a method for obtaining a pixel difference value according to a second embodiment of the present invention;
fig. 2C is a schematic diagram of a method for calculating a pixel difference value according to a second embodiment of the present invention;
fig. 3A is a schematic structural diagram of an identification apparatus for displaying a picture in three dimensions according to a third embodiment of the present invention;
fig. 3B is a schematic structural diagram of another identification apparatus for displaying a picture in three dimensions according to a third embodiment of the present invention;
fig. 3C is a schematic structural diagram of an apparatus for obtaining a difference degree parameter according to a third embodiment of the present invention;
fig. 3D is a schematic structural diagram of a device for obtaining a sum of pixel differences according to a third embodiment of the present invention;
fig. 3E is a schematic structural diagram of a pixel difference calculating device according to a third embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements serve only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), or a navigation device, as well as a stationary terminal such as a digital TV or a desktop computer. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals.
Example one
The embodiment provides an identification method for three-dimensional display pictures, and referring to fig. 1, the method includes:
step 101, extracting a first view angle diagram and a second view angle diagram of a target picture.
The target picture may be a picture for two-dimensional display, a picture for three-dimensional display, or a picture for other-dimensional display.
Further, for a picture for three-dimensional display, the first perspective view and the second perspective view are a left-eye perspective view and a right-eye perspective view, that is, two views of the same scene as seen from the two different viewpoints of the left eye and the right eye. In the three-dimensional display effect, the left-eye perspective view is the view seen by the left eye, and the right-eye perspective view is the view seen by the right eye. The left-eye perspective view and the right-eye perspective view may be laid out in the same picture or in separate pictures. When they are laid out in the same target picture, they may be arranged in a left-right view format or a top-bottom view format; of course, they may also be laid out according to other formats, which is not limited in the embodiments of the present invention. When the left-eye perspective view and the right-eye perspective view are laid out in separate pictures, they may be laid out in two pictures. For a picture for two-dimensional display, or a picture for display in another dimension, the first perspective view and the second perspective view are merely two images extracted according to the extraction method, and the relationship that holds for pictures for three-dimensional display does not exist between them.
In the embodiment of the present invention, when the left-eye perspective view and the right-eye perspective view are laid out in the same picture in the left-right view format or the top-bottom view format, the first perspective view and the second perspective view of the target picture may be extracted as follows: acquire the content of the left half of the target picture to obtain a first perspective view, and acquire the content of the right half to obtain a second perspective view; this first and second perspective view form a first group of perspective views. In addition, acquire the content of the upper half of the target picture to obtain a first perspective view, and acquire the content of the lower half to obtain a second perspective view; this first and second perspective view form a second group of perspective views.
Further, the content of each half of the target picture can be obtained as follows: copy the target picture to obtain two identical target pictures, recorded as a first target picture and a second target picture; cut the first target picture left and right to obtain the contents of the left and right halves; and cut the second target picture top and bottom to obtain the contents of the upper and lower halves.
Further, the left-right cutting may be left-right symmetrical cutting, and the up-down cutting may be up-down symmetrical cutting.
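The symmetric cuts described above can be sketched as follows, again treating the picture as a 2-D list of pixel values; the helper names are assumptions for illustration, not terms from the patent.

```python
def split_left_right(picture):
    """Symmetric left-right cut: returns (left half, right half)."""
    mid = len(picture[0]) // 2
    left = [row[:mid] for row in picture]
    right = [row[mid:] for row in picture]
    return left, right

def split_top_bottom(picture):
    """Symmetric top-bottom cut: returns (top half, bottom half)."""
    mid = len(picture) // 2
    return picture[:mid], picture[mid:]
```

Applying both functions to one target picture yields the two groups of perspective views (left/right and top/bottom) referred to in the extraction step.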
In addition, the content of each half of the target picture can be directly obtained from the target picture by using the existing software technology, and is not detailed here.
In addition, it should be noted that when the left-eye perspective view and the right-eye perspective view are laid out in the same picture in some other view format, the first perspective view and the second perspective view should be extracted according to that corresponding view format, rather than by acquiring the content of each half of the target picture.
In addition, when the left-eye view diagram and the right-eye view diagram are laid out in a plurality of different pictures, the first view diagram and the second view diagram can be directly extracted from the plurality of pictures.
Of course, besides the above method, other methods may be used to extract the first perspective view and the second perspective view, which is not limited in this embodiment.
In addition, it should be noted that in a picture for three-dimensional display, the left-eye perspective view and the right-eye perspective view are laid out in either the left-right view format or the top-bottom view format. However, before the first perspective view and the second perspective view are extracted, it is not yet known whether the target picture is a picture for three-dimensional display or for two-dimensional display, let alone whether it uses the left-right or the top-bottom view format. Therefore, when extracting from the target picture, both left-right extraction and top-bottom extraction are required.
In addition, according to the above method for extracting the first perspective view and the second perspective view of the target picture, two sets of perspective views are extracted from one target picture, and therefore, the subsequent steps 102 to 104 are required for both sets of perspective views.
In the implementation of the present invention, in order to reduce the operation steps and improve the operation efficiency, the target picture may be roughly determined, and after the rough determination, only one set of the first perspective view and the second perspective view is extracted, and only the steps 102 to 104 are performed on the set of the first perspective view and the second perspective view.
Further, only one set of the first perspective view and the second perspective view may be extracted by: before extracting the target picture, carrying out primary analysis on the target picture, and roughly judging whether the target picture is similar left and right or up and down; when the left view and the right view are similar, determining that the target picture is in a left view and right view format, and extracting the target picture left and right to obtain a group of first view angle pictures and second view angle pictures; and when the upper view and the lower view are similar, determining that the target picture is in an upper view and lower view format, and extracting the upper view and the lower view of the target picture to obtain only one group of the first view angle graph and the second view angle graph.
In addition, it should be noted that the target picture may be an RGB (Red Green Blue) picture, a hue-saturation-intensity color mode picture, i.e., an HSI (Hue Saturation Intensity) picture, or of course a picture in another color mode. Different color modes correspond to different picture characteristics, so the rough judgment method can be chosen according to the color mode of the target picture.
Further, when the target picture is an RGB picture, it can be roughly judged as follows: acquire the gray value of each pixel of the target picture in the red channel to obtain a red gray value matrix; acquire the gray value of each pixel in the green channel to obtain a green gray value matrix; acquire the gray value of each pixel in the blue channel to obtain a blue gray value matrix; then compare each of the red, green, and blue gray value matrices top-to-bottom and left-to-right, and determine that the target picture is similar left-right or top-bottom when the values in the gray value matrices exhibit a top-bottom or left-right symmetry relationship, where the symmetry relationship means that corresponding gray values are close in value.
When the target picture is an HSI picture, the target picture can be roughly determined by using a method similar to that when the target picture is an RGB picture, which is not described in detail herein.
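The rough judgment for one channel can be sketched as follows: compare the two halves of the channel's gray value matrix and report whether corresponding values are close, here taken to mean that the mean absolute difference stays under a tolerance. The tolerance value, the closeness criterion, and the function name are assumptions; a real implementation would repeat the check for the red, green, and blue channels.

```python
def halves_similar(gray, axis, tolerance=10.0):
    """True if the two halves along 'lr' or 'tb' are approximately equal.

    gray is one channel's gray value matrix as a 2-D list.
    """
    if axis == "lr":
        mid = len(gray[0]) // 2
        a = [row[:mid] for row in gray]
        b = [row[mid:mid * 2] for row in gray]
    else:  # "tb"
        mid = len(gray) // 2
        a, b = gray[:mid], gray[mid:mid * 2]
    # mean absolute difference between corresponding gray values
    diffs = [abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb)]
    return sum(diffs) / len(diffs) <= tolerance
```

If the left-right check passes, only the left-right group of perspective views need be extracted; if the top-bottom check passes, only the top-bottom group, reducing the subsequent work to one group.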
Further, when only one set of the first perspective view and the second perspective view is extracted and the target picture is identified as not being used for the three-dimensional display picture after the steps 102 to 104 are performed on the set of the first perspective view and the second perspective view, the steps 102 to 104 may be performed on the other set of the first perspective view and the second perspective view from which the target picture is newly extracted to identify whether the target picture is used for the three-dimensional display picture, so as to improve the accuracy of identifying the target picture.
In addition, the picture for three-dimensional display in the embodiment of the invention can be a picture for naked eye three-dimensional display.
And 102, acquiring a difference degree parameter of the first perspective view and the second perspective view.
The difference degree parameter is used for representing the difference degree between the first view angle diagram and the second view angle diagram. Specifically, the parameter of the degree of difference may indicate whether there is a difference between the first perspective view and the second perspective view, and when there is a difference, indicate the magnitude of the difference.
Further, the difference degree parameter may be expressed by using a difference degree of the pixels of the first and second view angle maps in terms of graphic features such as gray scale values, hues, brightness, or saturation.
Illustratively, when the target picture is an RGB picture, the difference degree is the difference degree of the gray-scale values of the corresponding pixels of the first view angle diagram and the second view angle diagram in each color channel.
Further, the parameter of the degree of difference may be a number, and the magnitude of the degree of difference is represented by the magnitude of a numerical value. For example, when the difference degree parameter is in the range of 0 to 3000, 0 may be used to indicate that there is no difference between the first view angle diagram and the second view angle diagram, and the difference degree increases with the increase of the value for the remaining numbers.
Further, the parameter of the degree of difference may be a numerical value obtained according to a preset operation. For example, the difference degree parameter may be a minimum pixel difference sum of the first view angle map and the second view angle map calculated according to a preset operation.
Further, the difference degree parameter may be obtained by acquiring the respective graphic parameter values of the first perspective view and the second perspective view and substituting them into a preset operation. The graphic parameter values are parameters characterizing the graphic features of the first perspective view and the second perspective view, and may be determined according to the color mode of the target picture. For example, when the target picture is an RGB picture, the graphic parameter value may be the gray value of each color channel; when the target picture is an HSI picture, the graphic parameter values may be hue, saturation, intensity, and the like. Of course, the difference degree parameter may also be any other parameter capable of reflecting the degree of difference between the first perspective view and the second perspective view.
And 103, when the difference degree parameter is within a preset range value, identifying the target picture as a picture for three-dimensional display.
First, the pictures for three-dimensional display and the pictures for two-dimensional display have the following characteristics in terms of degree of difference:
for pictures for three-dimensional display: if the target picture is a picture for three-dimensional display, one of the first perspective view and the second perspective view is a left-eye perspective view, and the other is a right-eye perspective view. Since the left-eye view diagram and the right-eye view diagram are the same scene, the images are seen from two different viewing angles of the left eye and the right eye. Therefore, in the picture for three-dimensional display, the first perspective view and the second perspective view are different, and the difference is within a certain range.
For pictures for two-dimensional display: generally, a picture for two-dimensional display is used to represent a complete single picture, and it does not happen that a left-eye view and a right-eye view of a scene are placed within a picture for two-dimensional display. Therefore, for a picture for two-dimensional display, the difference between the first perspective view and the second perspective view is larger, and the difference degree is larger than that between the first perspective view and the second perspective view in the picture for three-dimensional display. In addition, for some special pictures for two-dimensional display, such as a monochrome picture, a picture that is completely symmetrical left and right, or a picture that is completely symmetrical top and bottom, there is no difference between the first view angle picture and the second view angle picture.
Because pictures for three-dimensional display and pictures for two-dimensional display have the above characteristics in their degree of difference, a preset range of values that satisfies the difference-degree condition of pictures for three-dimensional display can be set for the chosen difference degree parameter.
It should be noted that, for each difference degree parameter, the preset range value can be determined through a large number of experiments to ensure the accuracy of identification.
Illustratively, suppose the value range of the difference degree parameter is [0, 3000] and the preset range value is (0, 1000). When the acquired difference degree parameter falls within (0, 1000), the target picture is identified as a picture for three-dimensional display; when it does not, the target picture is identified as a picture not intended for three-dimensional display.
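The decision step with these illustrative numbers can be sketched as follows; the function name and default bounds are assumptions taken from the example above. Note that the open lower bound excludes a difference of exactly zero, so monochrome or exactly symmetric two-dimensional pictures, whose halves do not differ at all, are not misidentified as three-dimensional.

```python
def is_three_dimensional(difference_degree, low=0, high=1000):
    """True when the difference degree parameter lies strictly
    inside the preset open range (low, high)."""
    return low < difference_degree < high
```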
In addition, when the target picture is identified as a picture for three-dimensional display, the target picture may be marked and related information may be stored. Further, the marking can be performed by modifying the picture name. Specifically, the stored related information may include a view format of the picture, that is, a left-right view format or a top-bottom view format. Of course, other ways may be used to mark and store other related information, which is not limited in this embodiment.
In addition, the marking and storing of the related information can be performed by a media asset library.
On the other hand, when the difference degree parameter is not within the preset range value, the target picture is identified as a picture not used for three-dimensional display.
In addition, the terminal may provide a setting for turning the automatic display-mode selection function on or off. When the function is on and the terminal receives a target picture (a picture to be displayed), the terminal automatically identifies the type of the target picture so as to select the corresponding display framework, drive the device, and control the hardware to realize the corresponding display mode. When the function is off, the terminal does not identify the type of the target picture on receipt; the type must be identified manually, and the corresponding display mode is then selected according to the identification result.
In addition, the cases in which the terminal receives a target picture include: picture initialization, picture state change, and picture display operations. Further, picture initialization includes loading pictures for the first time when the terminal is started; a picture state change includes creation, replacement, or other modification of a picture; picture display operations include a user clicking to display a certain picture, displaying pictures in batches, and sliding to a certain picture.
In addition, for video playing, whether a video should adopt the three-dimensional video playing mode can be determined by identifying whether the pictures included in the video are pictures for three-dimensional display. When they are, the three-dimensional video playing mode is adopted; when they are not, another video playing mode is adopted.
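As a sketch of this video-level decision: the frame-sampling strategy and the `is_three_dimensional_picture` predicate are assumptions for illustration; the document only states that the pictures included in the video are identified.

```python
# Hypothetical sketch: a video adopts the three-dimensional playing mode
# only when every sampled frame is identified as a picture for
# three-dimensional display; otherwise another mode is used.
def choose_playing_mode(frames, is_three_dimensional_picture):
    if frames and all(is_three_dimensional_picture(f) for f in frames):
        return "3d"
    return "2d"
```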
In summary, in the identification method for three-dimensional display pictures provided in the embodiments of the present invention, a first view image and a second view image of a target picture are obtained, and when a difference degree between the first view image and the second view image is within a preset range value, the target picture is identified as a picture for three-dimensional display; therefore, the picture for three-dimensional display can be automatically identified, and the problem of inconvenient operation caused by the fact that the picture for three-dimensional display cannot be automatically identified is solved; the effect of improving the operation convenience is achieved.
Example two
Compared with the first embodiment, in this embodiment the difference degree parameter of the first embodiment is specified as the minimum pixel difference sum, and whether the target picture is a picture for three-dimensional display is identified according to the minimum pixel difference sum. Referring to fig. 2A, the method includes:
step 201, extracting a first view angle diagram and a second view angle diagram of a target picture.
This step is the same as or similar to step 101, and is not described herein again.
Step 202, based on the preset N deviation values, for each deviation value, a pixel difference sum of the first view angle map and the second view angle map is obtained.
The deviation value is used to represent the parallax range between the left and right eyes for the same scene. It may be determined from a large amount of acquired data to obtain a reasonable value or range of values. Optionally, the deviation value may be 58 mm to 72 mm. In this embodiment, the deviation value may be converted into a number of pixels according to the pixel size.
Here N is an integer greater than or equal to 1, that is, at least one deviation value is set. In addition, the more deviation values there are, the more accurate the result; however, too many deviation values increase the computational complexity, so the number of deviation values can be determined according to actual requirements.
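As an illustration of converting the physical deviation range into pixel counts: the 0.2 mm pixel pitch and the even spacing of the N values below are assumptions for the sketch, not values taken from this document.

```python
# Hypothetical sketch: turn the 58-72 mm parallax range into N pixel
# deviation values. The 0.2 mm pixel pitch is an assumed display property.
def deviation_values_px(min_mm=58.0, max_mm=72.0, pixel_pitch_mm=0.2, n=3):
    lo = min_mm / pixel_pitch_mm
    hi = max_mm / pixel_pitch_mm
    if n == 1:
        return [round((lo + hi) / 2)]
    step = (hi - lo) / (n - 1)
    return [round(lo + k * step) for k in range(n)]
```

Under these assumptions, three deviation values of 290, 325 and 360 pixels would be obtained.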
Alternatively, referring to fig. 2B, for one offset value, the sum of pixel difference values of the first view angle map and the second view angle map may be obtained by:
step 2021, obtain a first graphic parameter value of each pixel of the first view map.
When the target picture is a picture in an RGB color space, factors influencing the graphic parameter value of the target picture comprise the gray value of the target picture in each color channel; when the target picture is a picture of the HSI color space, the factors affecting the graphic parameter values of the target picture are the hue, saturation, and brightness of the picture. For target pictures in other color spaces, the values of the graphic parameters affecting the target pictures are other corresponding factors, and are not described in detail herein. In this embodiment, a picture in a common RGB color space is taken as an example for explanation.
For each pixel, the graphic parameter value may be the gray value of one color channel or an average of the gray values of several color channels. Specifically, the first graphic parameter value may be any of the following:
first, a first gray value, of the R channel;
second, a second gray value, of the G channel;
third, a third gray value, of the B channel;
fourth, a first average of the first gray value and the second gray value;
fifthly, a second average value of the first gray value and the third gray value;
sixth, a third average of the second gray value and the third gray value;
seventh, a fourth average of the first gray value, the second gray value, and the third gray value.
Clearly, the more color channels that are involved in calculating the first graphic parameter value, the more accurate the obtained first graphic parameter value is. Specifically, the first, second, and third options have similar accuracy; the fourth, fifth, and sixth options have similar accuracy, higher than that of the first three; the seventh option has the highest accuracy.
Illustratively, the first view angle diagram includes 4 pixels, the first graphic parameter value is of the seventh type, and the gray values of the three colors red, green, and blue obtained for each pixel are respectively:
pixel No. 1: 100, 150, and 200;
pixel No. 2: 150, 120, and 120;
pixel No. 3: 120, 130, and 140;
pixel No. 4: 100, 100, and 100.
Calculating the average of the three gray values for each pixel, the acquired first graphic parameter values of the 4 pixels are respectively:
pixel No. 1: 150;
pixel No. 2: 130;
pixel No. 3: 130;
pixel No. 4: 100.
It should be noted that, in practice, the number of pixels included in the first perspective view is generally much greater than 4; it is assumed here that the first perspective view includes 4 pixels only for convenience of description.
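The per-pixel averaging in the example above can be sketched as follows; the list-of-tuples input layout is an assumption for illustration.

```python
# Sketch of the seventh kind of graphic parameter value: the average of
# the R, G and B gray values of each pixel.
def graphic_parameter_values(pixels):
    return [(r + g + b) / 3 for (r, g, b) in pixels]

first_view = [(100, 150, 200), (150, 120, 120), (120, 130, 140), (100, 100, 100)]
# graphic_parameter_values(first_view) yields [150.0, 130.0, 130.0, 100.0],
# matching the four first graphic parameter values in the example
```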
Step 2022, obtain a second graphic parameter value of each pixel of the second view map.
The second graphic parameter values of the pixels of the second perspective view are obtained in the same way as the first graphic parameter values of the pixels of the first perspective view are obtained in step 2021.
Illustratively, the gray values of the three colors red, green, and blue obtained for each pixel of the second view angle diagram are respectively:
pixel No. 1: 110, 160, and 210;
pixel No. 2: 160, 130, and 130;
pixel No. 3: 130, 140, and 150;
pixel No. 4: 110, 110, and 110.
Calculating the average of the three gray values for each pixel, the acquired second graphic parameter values of the 4 pixels are respectively:
pixel No. 1: 160;
pixel No. 2: 140;
pixel No. 3: 140;
pixel No. 4: 110.
step 2023, calculating a pixel difference sum according to a preset operation based on the deviation value, the first graphic parameter value and the second graphic parameter value.
Alternatively, referring to fig. 2C, step 2023 may be implemented by:
in step 2023a, the ith pixel in the first view angle diagram and the jth pixel in the second view angle diagram are combined into a pixel pair.
The value ranges of i and j are the pixel coordinate value ranges of the first view angle diagram and the second view angle diagram, respectively; the abscissa of the ith pixel and the abscissa of the jth pixel differ by the deviation value d, the ordinates are the same, and the value of d is within a preset deviation interval.
Step 2023b, for each pixel pair, calculates the square of the difference between the first graphic parameter value for the ith pixel and the second graphic parameter value for the jth pixel.
Step 2023c, sum the squares of the differences to obtain the pixel difference sum.
Alternatively, steps 2023a to 2023c may be expressed by the following two formulas, that is, the preset operation may be the following two formulas:
DL(d) = Σx,y |FL(x, y) − FR(x − d, y)|²   (1)
DR(d) = Σx,y |FL(x, y) − FR(x + d, y)|²   (2)
where d is a deviation value; DL(d) and DR(d) are the pixel difference sums corresponding to the deviation value d; FL(x, y) is the first graphic parameter value of the pixel at coordinate position (x, y) when the first perspective view is placed in the two-dimensional rectangular coordinate system; FR(x, y) is the second graphic parameter value of the pixel at coordinate position (x, y) when the second perspective view is placed in the same coordinate system; and the value ranges of x and y are the coordinate value ranges of the first perspective view.
In addition, when x − d or x + d falls outside the range of the second perspective view, the graphic parameter value is obtained by the boundary patch method. The boundary patch method is prior art and is not described in detail herein.
In addition, it should be noted that although the target picture may be a picture for three-dimensional display, after the first perspective view and the second perspective view are extracted, their correspondence to the left-eye view and the right-eye view is not determined. Therefore, when calculating the pixel difference sums, the position of the first view map in the two-dimensional rectangular coordinate system is kept unchanged, the second view map is moved along the positive half of the x-axis by the deviation value to obtain one pixel difference sum, and then moved along the negative half of the x-axis by the deviation value to obtain another pixel difference sum. Calculating with both formulas covers both cases and improves the precision of the pixel difference sum.
Since 2 pixel difference sums can be obtained in step 202 for each value of d, N values of d yield 2N pixel difference sums.
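A minimal sketch of formulas (1) and (2) for one deviation value d. The views are assumed to be 2-D grids of graphic parameter values, and out-of-range columns are clamped to the nearest edge as a simplified stand-in for the boundary patch method; because the document does not fix the pixel layout or the exact boundary handling of its worked example, the numbers produced here need not reproduce that example.

```python
# Sketch of DL(d) and DR(d); fl and fr are lists of rows of graphic
# parameter values. Out-of-range columns are clamped to the edge (an
# assumed, simplified form of boundary patching).
def pixel_difference_sums(fl, fr, d):
    width = len(fl[0])
    clamp = lambda x: min(max(x, 0), width - 1)
    dl = sum((fl[y][x] - fr[y][clamp(x - d)]) ** 2
             for y in range(len(fl)) for x in range(width))
    dr = sum((fl[y][x] - fr[y][clamp(x + d)]) ** 2
             for y in range(len(fl)) for x in range(width))
    return dl, dr
```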
Illustratively, N equals 1, that is, there is one d value; assume d equals 1 (representing 1 pixel). Taking the example in steps 2021 and 2022, with the four acquired first graphic parameter values (150, 130, 130, and 100) and the four acquired second graphic parameter values (160, 140, 140, and 110), the 2 pixel difference sums obtained by the three sub-steps of step 2023, or equivalently by formulas (1) and (2), are DL(d) = 2000 and DR(d) = 500.
In addition, although d takes a single value in this example, N may be set to another value according to how many pixels the target picture includes. For example, when N equals 3, that is, when d takes 3 values, 6 pixel difference sums can be obtained by the above method.
Step 203, obtaining the minimum value of the pixel difference sums to obtain the minimum pixel difference sum, and determining the minimum pixel difference sum as the difference degree parameter.
Different parallax ranges (d values) correspond to different pixel difference sums, and each pixel difference sum represents a degree of difference between the first perspective view and the second perspective view. Furthermore, the first perspective view and the second perspective view of a picture for three-dimensional display are two views of the same scene from different perspectives, whose difference is caused only by parallax, so the degree of difference between them is small. Therefore, taking the minimum of the pixel difference sums as the difference degree parameter improves the matching of the first perspective view and the second perspective view, and further improves the accuracy of picture identification.
Illustratively, continuing the example in step 202, the minimum of the two pixel difference sums 2000 and 500 is 500; 500 is therefore the minimum pixel difference sum and is determined as the difference degree parameter.
Optionally, the maximum of the pixel difference sums may be used as the difference degree parameter, or the average of several pixel difference sums may be used, or several pixel difference sums may be selected as difference degree parameters at the same time.
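The selection rules just described (minimum in this embodiment; maximum or average as options) can be sketched as:

```python
# Sketch of selecting the difference degree parameter from the 2N pixel
# difference sums; "min" is the variant used in this embodiment, "max"
# and "mean" are the optional variants mentioned above.
def select_parameter(sums, mode="min"):
    if mode == "min":
        return min(sums)
    if mode == "max":
        return max(sums)
    if mode == "mean":
        return sum(sums) / len(sums)
    raise ValueError("unknown mode: " + mode)
```

For the two sums 2000 and 500 from the example, the minimum variant yields 500.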
Step 204, when the minimum pixel difference sum is within the preset range value, identifying the target picture as a picture for three-dimensional display.
Illustratively, the preset range value is 400 to 3000, and since 500 is within the preset range value, the target picture is identified as a picture for three-dimensional display.
Optionally, when several pixel difference sums are selected as difference degree parameters, the target picture is identified as a picture for three-dimensional display only when all of the difference degree parameters are within the preset range value.
Step 205, when the minimum pixel difference sum is not within the preset range value, identifying the target picture as a picture not used for three-dimensional display.
Illustratively, the preset range value is 1000 to 3000, and since 500 is not within the preset range value, the target picture is identified as a picture not used for three-dimensional display.
Optionally, when several pixel difference sums are selected as difference degree parameters, the target picture is identified as a picture not used for three-dimensional display as soon as any one difference degree parameter is not within the preset range value.
Of course, the difference degree parameter is not limited to the minimum pixel difference sum, and may also be other parameters related to the picture, which are not described in detail in the embodiment of the present invention.
In addition, it should be noted that, contents that are not described in this embodiment refer to related contents in the first embodiment, and are not described in detail in this embodiment.
In summary, in the identification method for three-dimensional display pictures provided in the embodiments of the present invention, a first view image and a second view image of a target picture are obtained, and when a difference degree between the first view image and the second view image is within a preset range value, the target picture is identified as a picture for three-dimensional display; therefore, the picture for three-dimensional display can be automatically identified, and the problem of inconvenient operation caused by the fact that the picture for three-dimensional display cannot be automatically identified is solved; the effect of improving the operation convenience is achieved.
Example three
The embodiment provides a recognition apparatus 300 for three-dimensionally displaying pictures, and referring to fig. 3A, the apparatus 300 includes: an extraction module 301, an acquisition module 302 and a first identification module 303.
The extracting module 301 is configured to extract a first view map and a second view map of a target picture.
An obtaining module 302, configured to obtain a difference degree parameter of the first perspective view and the second perspective view, where the difference degree parameter is used to indicate a difference degree between the first perspective view and the second perspective view.
A first identifying module 303, configured to identify the target picture as a picture for three-dimensional display when the difference degree parameter is within a preset range value.
Optionally, referring to fig. 3B, the apparatus 300 further includes: and a second identification module 304, configured to identify the target picture as a picture not used for three-dimensional display when the difference degree parameter is not within the preset range value.
Optionally, referring to fig. 3C, the obtaining module 302 includes:
the first obtaining sub-module 3021 is configured to obtain, for each offset value, a sum of pixel difference values of the first view angle map and the second view angle map based on N preset offset values, where N is an integer greater than or equal to 1.
The second obtaining sub-module 3022 is configured to obtain a minimum value in the pixel difference sums, so as to obtain a minimum pixel difference sum.
A determination sub-module 3023 for determining the minimum pixel difference value sum as the difference degree parameter.
Optionally, referring to fig. 3D, the first obtaining sub-module 3021 includes:
a first acquiring unit 3021a, configured to acquire a first graphic parameter value of each pixel of the first perspective view.
A second acquiring unit 3021b, configured to acquire a second graphic parameter value of each pixel of the second perspective view.
A calculating unit 3021c for calculating a pixel difference sum according to a preset operation based on the deviation value, the first graphic parameter value, and the second graphic parameter value.
Alternatively, referring to fig. 3E, the calculation unit 3021c includes:
a composing subunit 3021c1, configured to compose a pixel pair from the ith pixel in the first view and the jth pixel in the second view.
The value ranges of i and j are the pixel coordinate value ranges of the first view angle diagram and the second view angle diagram, respectively; the abscissa of the ith pixel and the abscissa of the jth pixel differ by the deviation value d, the ordinates are the same, and the value of d is within a preset deviation interval.
A calculation subunit 3021c2 for calculating, for each pixel pair, a square of a difference between a first graphics parameter value for the ith pixel and a second graphics parameter value for the jth pixel.
A summing subunit 3021c3 for summing the squares of the differences to obtain a pixel difference sum.
The present embodiment is an embodiment of an apparatus corresponding to the first embodiment and the second embodiment, and related contents of the present embodiment refer to related contents of the first embodiment and the second embodiment, which are not described in detail in the present embodiment.
In summary, in the recognition apparatus for three-dimensional display pictures provided in the embodiments of the present invention, a first view map and a second view map of a target picture are obtained, and when a difference degree between the first view map and the second view map is within a preset range value, the target picture is identified as a picture for three-dimensional display; therefore, the picture for three-dimensional display can be automatically identified, and the problem of inconvenient operation caused by the fact that the picture for three-dimensional display cannot be automatically identified is solved; the effect of improving the operation convenience is achieved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (4)

1. An identification method for three-dimensional display pictures, the method comprising:
extracting a first view angle diagram and a second view angle diagram of a target picture;
acquiring a difference degree parameter of the first perspective view and the second perspective view, wherein the difference degree parameter is used for representing the difference degree of the first perspective view and the second perspective view;
when the difference degree parameter is within a preset range value, identifying the target picture as a picture for three-dimensional display;
wherein the acquiring the difference degree parameter of the first perspective view and the second perspective view comprises:
acquiring a first graphic parameter value of each pixel of the first visual angle diagram;
acquiring a second graphic parameter value of each pixel of the second visual angle diagram;
calculating a pixel difference sum according to a preset operation based on preset N deviation values, the first graphic parameter value and the second graphic parameter value, wherein N is an integer greater than or equal to 1;
obtaining the minimum value of each pixel difference sum to obtain the minimum pixel difference sum;
determining the minimum pixel difference sum as the difference degree parameter;
wherein the calculating the pixel difference sum according to a preset operation based on the preset N deviation values, the first graphic parameter value and the second graphic parameter value comprises:
forming a pixel pair by an ith pixel in the first visual angle image and a jth pixel in the second visual angle image, wherein the value ranges of i and j are the pixel coordinate value ranges of the first visual angle image and the second visual angle image respectively, the abscissa of the ith pixel and the abscissa of the jth pixel have a difference of d deviation values, the ordinate is the same, and the value of d is within a preset deviation interval;
calculating, for each pixel pair, a square of a difference between a first graphics parameter value for the ith pixel and a second graphics parameter value for the jth pixel;
summing the squares of each of the differences to obtain the pixel difference value sum.
2. The method of claim 1, further comprising:
and when the difference degree parameter is not within the preset range value, identifying the target picture as a picture which is not used for three-dimensional display.
3. An identification device for displaying pictures in three dimensions, the device comprising:
the extraction module is used for extracting a first view angle image and a second view angle image of the target picture;
an obtaining module, configured to obtain a difference degree parameter of the first perspective view and the second perspective view, where the difference degree parameter is used to indicate a difference degree between the first perspective view and the second perspective view;
the first identification module is used for identifying the target picture as a picture for three-dimensional display when the difference degree parameter is within a preset range value;
wherein the acquisition module comprises:
the first acquisition unit is used for acquiring a first graphic parameter value of each pixel of the first perspective diagram;
the second acquisition unit is used for acquiring a second graphic parameter value of each pixel of the second perspective view;
the calculation unit is used for calculating pixel difference sum according to preset operation based on the preset N deviation values, the first graphic parameter value and the second graphic parameter value;
the second obtaining submodule is used for obtaining the minimum value of each pixel difference sum to obtain the minimum pixel difference sum;
a determination submodule for determining the minimum pixel difference sum as the difference degree parameter;
wherein the calculation unit includes:
the composition subunit is used for composing a pixel pair by an ith pixel in the first visual angle diagram and a jth pixel in the second visual angle diagram, wherein the value ranges of i and j are the pixel coordinate value ranges of the first visual angle diagram and the second visual angle diagram respectively, the abscissa of the ith pixel and the abscissa of the jth pixel have a difference of d deviation values, the ordinate is the same, and the value of d is within a preset deviation interval;
a calculation subunit, configured to calculate, for each pixel pair, a square of a difference between a first graphics parameter value of the ith pixel and a second graphics parameter value of the jth pixel;
and the summation subunit is used for summing the squares of the differences to obtain the pixel difference value sum.
4. The apparatus of claim 3, further comprising:
and the second identification module is used for identifying the target picture as a picture which is not used for three-dimensional display when the difference degree parameter is not within a preset range value.
CN201710236021.7A 2017-04-12 2017-04-12 Identification method and device for three-dimensional display picture Active CN108694031B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201710236021.7A CN108694031B (en) 2017-04-12 2017-04-12 Identification method and device for three-dimensional display picture
JP2019556179A JP2020517025A (en) 2017-04-12 2017-10-19 Method and apparatus for identifying three-dimensional display image
PCT/CN2017/106811 WO2018188297A1 (en) 2017-04-12 2017-10-19 Identification method and device for three-dimensionally displayed picture
KR1020197032801A KR20190136068A (en) 2017-04-12 2017-10-19 Identification Method and Device for 3D Display Image

Publications (2)

Publication Number Publication Date
CN108694031A CN108694031A (en) 2018-10-23
CN108694031B true CN108694031B (en) 2021-05-04

Family

ID=63793102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710236021.7A Active CN108694031B (en) 2017-04-12 2017-04-12 Identification method and device for three-dimensional display picture

Country Status (4)

Country Link
JP (1) JP2020517025A (en)
KR (1) KR20190136068A (en)
CN (1) CN108694031B (en)
WO (1) WO2018188297A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102395037A (en) * 2011-06-30 2012-03-28 深圳超多维光电子有限公司 Format recognition method and device
CN102710953A (en) * 2012-05-08 2012-10-03 深圳Tcl新技术有限公司 Method and device for automatically identifying 3D (three-dimentional) video playing mode
WO2012141425A2 (en) * 2011-04-11 2012-10-18 (주)케이티 Method for updating a 3d object on a mobile terminal
CN104657966A (en) * 2013-11-19 2015-05-27 江苏宜清光电科技有限公司 3D format analysis method
CN104767985A (en) * 2014-01-07 2015-07-08 冠捷投资有限公司 Method of using region distribution analysis to automatically detect 3D image format

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102724521A (en) * 2011-03-29 2012-10-10 青岛海信电器股份有限公司 Method and apparatus for stereoscopic display
CN103179426A (en) * 2011-12-21 2013-06-26 联咏科技股份有限公司 Method for detecting image formats automatically and playing method by using same
CN103051913A (en) * 2013-01-05 2013-04-17 北京暴风科技股份有限公司 Automatic 3D (three-dimensional) film source identification method

Also Published As

Publication number Publication date
CN108694031A (en) 2018-10-23
KR20190136068A (en) 2019-12-09
JP2020517025A (en) 2020-06-11
WO2018188297A1 (en) 2018-10-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant