WO2019047896A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
WO2019047896A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
area
parallax
image pixel
pixel point
Prior art date
Application number
PCT/CN2018/104381
Other languages
French (fr)
Chinese (zh)
Inventor
冯凯
Original Assignee
西安中兴新软件有限责任公司
Priority date
Filing date
Publication date
Application filed by 西安中兴新软件有限责任公司 filed Critical 西安中兴新软件有限责任公司
Publication of WO2019047896A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data

Definitions

  • the present disclosure relates to image processing techniques, for example, to an image processing method and apparatus.
  • With the continuous development of display technology and digital technology, three-dimensional (3D) display has become a hot spot among display products.
  • There are two main sources of 3D content: one is to shoot 3D sources with 3D capture devices (such as 3D video cameras, 3D cameras, etc.); the other is to convert two-dimensional (2D) content into 3D content.
  • For the second approach, the entire 2D image is usually processed by a single 3D algorithm to convert it into a 3D image; the details and local parts of the image cannot be refined, and different display effects for different regions cannot be realized.
  • the embodiment of the present application provides an image processing method and device, which can implement adjustment of a three-dimensional display effect.
  • In a first aspect, an embodiment of the present application provides an image processing method, including: adjusting the parallax attribute value of image pixel points in at least one region of an image to be processed, and generating a parallax image of the image to be processed; and generating a three-dimensional image according to the image to be processed and the parallax image of the image to be processed.
  • In a second aspect, an embodiment of the present application provides an image processing apparatus, including: a parallax image generating module configured to adjust the parallax attribute value of image pixel points in at least one region of an image to be processed and generate a parallax image of the image to be processed; and a three-dimensional image generating module configured to generate a three-dimensional image according to the image to be processed and the parallax image of the image to be processed.
  • In a third aspect, an embodiment of the present application provides a terminal, including: a memory, a processor, and an image processing program stored in the memory and executable on the processor, where the image processing program, when executed by the processor, implements the steps of the image processing method provided by the first aspect.
  • In a fourth aspect, an embodiment of the present application provides an image processing method, including: determining a selected area of an image to be processed according to an instruction; and adjusting the display effect of the selected area to an out-of-screen or into-screen effect.
  • In a fifth aspect, an embodiment of the present application provides a computer readable medium storing an image processing program, where the image processing program, when executed by a processor, implements the steps of the image processing method provided by the first aspect or the fourth aspect.
  • FIG. 1 is a schematic diagram of a hardware structure of a terminal for implementing an image processing method according to an embodiment of the present application
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 3 is an exemplary flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of an editing interface of 2D to 3D image conversion according to an embodiment of the present application
  • FIG. 5 is a schematic diagram of area segmentation of a 2D image according to an embodiment of the present application.
  • FIG. 6 is another schematic diagram of region segmentation of a 2D image according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram showing a principle of forming a stereoscopic display effect of a 3D image according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of image interleaving according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of displaying a 3D image through a grating according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of applying a voltage to a grating cylindrical lens according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 12 is another schematic diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a hardware structure of a terminal for implementing an image processing method according to an embodiment of the present application.
  • The terminal of this embodiment may include, but is not limited to, mobile terminals such as a laptop computer, a tablet computer, a mobile phone, a media player, a personal digital assistant (PDA), and a projector, as well as fixed terminals such as a digital television (TV) and a desktop computer.
  • the above terminal can support 3D video and picture capturing and playing functions.
  • the terminal 10 of this embodiment includes a memory 14 and a processor 12.
  • The terminal structure shown in FIG. 1 does not constitute a limitation on the terminal; the terminal may include more or fewer components than those illustrated, may combine certain components, or may have a different arrangement of components.
  • the processor 12 may include, but is not limited to, a processing device such as a Micro Controller Unit (MCU) or a Field-Programmable Gate Array (FPGA).
  • The memory 14 may be configured to store software programs and modules of application software, such as the program instructions or modules corresponding to the image processing method in this embodiment; the processor 12 executes various functional applications and data processing, that is, implements the image processing method of this embodiment, by running the software programs and modules stored in the memory 14.
  • Memory 14 may include high speed random access memory and may also include non-volatile memory such as at least one magnetic storage device, flash memory, or other non-volatile solid state memory.
  • memory 14 may include memory remotely located relative to processor 12, which may be coupled to terminal 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the terminal 10 may further include a communication unit 16; the communication unit 16 may receive or transmit data via a network.
  • communication unit 16 can be a Radio Frequency (RF) module configured to communicate with the Internet wirelessly.
  • the terminal 10 may further include a display unit configured to display information input by the user or information provided to the user.
  • the display unit may include a display panel, and the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • A typical 2D display images the left-eye image and the right-eye image at the position of the screen without parallax, so such a display has no stereoscopic effect.
  • A stereoscopic effect can be produced when the left-eye image and the right-eye image are imaged with parallax at the position of the screen. If the right-eye image is located to the right of the left-eye image on the screen, the convergence point of the two (that is, the image point formed in the human brain) is located behind the screen, producing a stereoscopic effect recessed behind the screen, namely the into-screen effect; if the right-eye image is located to the left of the left-eye image on the screen, the convergence point is located in front of the screen, producing a stereoscopic effect protruding from the screen, namely the out-of-screen effect.
  • As shown in FIG. 7, T is the distance between a person's left and right eyes; the value of T can be obtained from the average spacing between a person's left and right eyes, so it is usually constant.
  • f is the distance between the human eye and the screen; f can be fixed or can change in real time. If f changes in real time, the eye-tracking function of the terminal's front camera can be turned on to detect the distance between the human eye and the screen in real time.
  • d is the offset of an image pixel point, d = |L_S - R_S|, where L_S represents the position on the screen of that image pixel point in the left-eye image, and R_S represents the position on the screen of that image pixel point in the right-eye image.
  • In FIG. 7, L_S1 and R_S1 represent the positions on the screen of image pixel point 1 in the left-eye and right-eye images, and L_S2 and R_S2 represent the positions on the screen of image pixel point 2 in the left-eye and right-eye images.
  • In FIG. 7, point P1 is an into-screen imaging point of the 3D image, and point P2 is an out-of-screen imaging point of the 3D image.
  • M1 represents the parallax attribute value of image pixel point 1, that is, the vertical distance between its into-screen imaging position and the human eye.
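  • The relationships in FIG. 7 follow standard stereoscopic geometry. As a rough illustration only (the embodiment's own formulas are referenced below but are not reproduced here, so the expression in this sketch is an assumption based on standard similar-triangle geometry rather than the patent's formula), the on-screen separation between the left-eye and right-eye positions of a pixel can be related to the distance M at which the point is perceived:

```python
# Hedged sketch of the FIG. 7 similar-triangle geometry; not the patent's exact formula.
# T: interocular distance, f: eye-to-screen distance, M: perceived imaging distance,
# all in the same physical unit (e.g. millimetres).

def screen_separation(T: float, f: float, M: float) -> float:
    """Separation between the left-eye and right-eye screen positions of a point
    imaged at distance M from the eyes, by similar triangles: s = T * |M - f| / M.
    M > f corresponds to a point behind the screen (into-screen effect),
    M < f to a point in front of the screen (out-of-screen effect),
    and M == f to ordinary 2D display (zero separation)."""
    return T * abs(M - f) / M

if __name__ == "__main__":
    T, f = 65.0, 400.0                      # assumed values: 65 mm eye spacing, 400 mm viewing distance
    print(screen_separation(T, f, 800.0))   # into-screen point, twice the viewing distance
    print(screen_separation(T, f, 300.0))   # out-of-screen point, in front of the screen
    print(screen_separation(T, f, 400.0))   # point on the screen plane -> 0.0
```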
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the present application.
  • the image processing method provided by the embodiment is for converting a to-be-processed image into a 3D image including a left-eye image and a right-eye image.
  • the image to be processed may be a 2D image, a left eye image in a 3D image, or a right eye image in a 3D image.
  • the image processing method provided by this embodiment can be used to convert a 2D image into a 3D image, or to perform editing modification on a 3D image.
  • the image processing method provided by this embodiment can also be used to convert 2D video into 3D video, or to edit and modify 3D video.
  • By converting any frame of 2D image in a 2D video into a 3D image, a 3D video is obtained, thereby presenting a 3D effect during image display or video playback; or, by adjusting any frame of left-eye image in a 3D video, an updated right-eye image is obtained, thereby presenting a display effect different from that of the original 3D video.
  • the image processing method provided in this embodiment includes step S201 and step S202.
  • step S201 the disparity attribute value of the image pixel point in at least one region of the image to be processed is adjusted, and a disparity image of the image to be processed is generated.
  • step S202 a 3D image is generated based on the image to be processed and the parallax image of the image to be processed.
  • In an exemplary embodiment, step S201 may include: dividing the image to be processed into at least one region; and determining, for each region of the at least one region, the parallax attribute value of the image pixel points in the region according to the target display effect of the region.
  • Each image pixel may include a plurality of attributes, such as Red Green Blue (RGB) attributes, parallax attributes, and the like.
  • In this embodiment, the 2D-to-3D conversion is realized, or the 3D display effect of a region is adjusted, by adjusting the parallax attribute of each image pixel point.
  • After the parallax attribute value of each image pixel point in a region is determined according to the target display effect of the region, if the image pixel point already has a parallax attribute value, the currently determined value replaces the original value; if the image pixel point has no parallax attribute value, the parallax attribute is added to the image pixel point and set to the currently determined value, as in the sketch below.
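  • As a minimal illustration of this add-or-replace behaviour, an image pixel point can be modelled as a record carrying RGB attributes and an optional parallax attribute (the field names are illustrative assumptions, not a data structure defined by the text):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImagePixel:
    r: int
    g: int
    b: int
    parallax: Optional[float] = None   # parallax attribute; absent until it is set

def set_parallax(pixel: ImagePixel, value: float) -> None:
    """If the pixel already has a parallax attribute value, replace it with the
    currently determined value; otherwise add the attribute with that value."""
    pixel.parallax = value

p = ImagePixel(r=128, g=64, b=32)
set_parallax(p, 400.0)   # adds the parallax attribute
set_parallax(p, 500.0)   # replaces the original value with the new one
```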
  • The target display effect may include one of the following: the out-of-screen effect and the into-screen effect.
  • The out-of-screen effect and the into-screen effect are as described above, and are therefore not described again here.
  • the target display effects of different regions may be the same or different. However, this application is not limited thereto.
  • the image processing method of the embodiment may further include: determining, according to the received instruction or preset configuration information, a region segmentation manner in the image to be processed and a target display effect of the at least one region.
  • the area division manner of the image to be processed and the target display effect of the area may be set by the user, or may be determined according to a preset configuration. However, this application is not limited thereto.
  • At least one region may be selected in the segmented region to adjust the target display effect.
  • Step S201 may further include: when no target display effect is set for a region, keeping the original parallax attribute value of each image pixel point in the region unchanged; or updating the parallax attribute value of each image pixel point in the region to a preset value.
  • That is, for a region with no target display effect set, the parallax attribute values of the image pixel points in the region are maintained. If the image pixel points of the image to be processed have no parallax attribute value, a parallax attribute value may be added to them, and the parallax attribute value of each image pixel point in such a region may be set equal to a preset value, such as the distance between the human eye and the screen.
  • Alternatively, the parallax attribute value of each image pixel point in such a region of the image to be processed may be updated to a preset value, for example, the distance between the human eye and the screen.
  • The parallax attribute values of the image pixel points in the selected areas and in the unselected areas of the image to be processed may be determined in different manners, and the parallax attribute values of image pixel points in different areas of the image to be processed may differ, so that different areas produce different display effects.
  • determining the disparity attribute value of the image pixel point in the area according to the target display effect of the area may include: determining an offset of each image pixel point in the area according to the target display effect of the area; For each image pixel in the region, the parallax attribute value of the image pixel is determined according to the offset of the image pixel.
  • the parallax attribute value of the image pixel point in one region can be determined according to the offset of the image pixel point.
  • the offset of one image pixel refers to the absolute value of the difference between the position of the image pixel on the display screen in the left eye image and the position of the image pixel on the display screen in the right eye image.
  • Determining the offset of each image pixel point in a region according to the target display effect of the region may include: when the target display effect of the region is set to the into-screen effect, determining that the offset d of each image pixel point in the region is greater than 0 and less than T; and when the target display effect of the region is set to the out-of-screen effect, determining that the offset d of each image pixel point in the region is greater than T and less than 2T, where T is the spacing between a person's left and right eyes. A minimal sketch of this selection follows.
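  • The specific value chosen inside each range (the midpoint here, echoing the 0.5T / 1.5T whole-image example given below) is an illustrative assumption, since the embodiment only constrains the range:

```python
def choose_offset(target_effect: str, T: float) -> float:
    """Pick an offset d for the image pixel points of a region from the ranges
    given in this embodiment: 0 < d < T for the into-screen effect and
    T < d < 2T for the out-of-screen effect."""
    if target_effect == "into_screen":
        return 0.5 * T     # any value in (0, T) would satisfy the stated range
    if target_effect == "out_of_screen":
        return 1.5 * T     # any value in (T, 2T) would satisfy the stated range
    raise ValueError("target_effect must be 'into_screen' or 'out_of_screen'")
```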
  • Determining the parallax attribute value of an image pixel point according to the offset of the image pixel point may include: calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the spacing between a person's left and right eyes, and the pixel density of the image to be processed; or calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the spacing between a person's left and right eyes, and the distance between the human eye and the screen.
  • Calculating the parallax attribute value of an image pixel point according to the offset of the image pixel point, the spacing between a person's left and right eyes, and the pixel density of the image to be processed may include calculating the parallax attribute value of the image pixel point according to the following formula, in which:
  • M is the parallax attribute value of the pixel of the image
  • d is the offset of the pixel of the image
  • T is the spacing between the left and right eyes of the person
  • PPI is the pixel density
  • the value of T can be obtained according to the average spacing between the left and right eyes of the person.
  • The PPI can be determined according to the image resolution and the screen size; taking a resolution of a × b and a diagonal screen size of c inches as an example, PPI = √(a² + b²)/c.
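  • For instance, this is the usual diagonal-based definition of pixel density, and it reproduces the value of about 367 PPI quoted later for a 6-inch screen with a resolution of 1920 × 1080:

```python
import math

def pixels_per_inch(a: int, b: int, c: float) -> float:
    """PPI for a resolution of a x b pixels on a screen whose diagonal is c inches."""
    return math.hypot(a, b) / c

print(round(pixels_per_inch(1920, 1080, 6.0)))  # -> 367
```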
  • M is the parallax attribute value of the pixel of the image
  • d is the offset of the pixel of the image
  • T is the distance between the left and right eyes of the person
  • f is the distance between the human eye and the screen.
  • When the target display effect of the entire area of the image to be processed is set to the out-of-screen effect, it may be determined that the offset of each image pixel point in the image to be processed is 1.5T; when the target display effect of the entire area of the image to be processed is set to the into-screen effect, it may be determined that the offset of each image pixel point in the image to be processed is 0.5T, where T is the spacing between a person's left and right eyes.
  • this application is not limited thereto.
  • Determining the offset of each image pixel point in a region according to the target display effect of the region may include: when the distance between the human eye and the screen has not changed, determining an initial parallax attribute value of each image pixel point in the region according to the target display effect of the region; and after the distance between the human eye and the screen changes, determining the offset of the image pixel point after the change according to the initial parallax attribute value of the image pixel point, the changed distance between the human eye and the screen, and the spacing between the person's left and right eyes.
  • the distance between the human eye and the screen can be acquired in real time through the eyeball tracking function of the imaging device.
  • For example, the initial offset of each image pixel point in the region may be determined according to the target display effect of the region; then, for each image pixel point in the region, an initial parallax attribute value of the image pixel point is calculated according to the initial offset of the image pixel point, the spacing between the person's left and right eyes, and the pixel density of the image to be processed. After the distance between the human eye and the screen changes, the offset of the image pixel point after the change is calculated according to the initial parallax attribute value, the changed distance between the human eye and the screen, and the spacing T between the person's left and right eyes; then the parallax attribute value of the image pixel point after the change is calculated according to the changed offset, the spacing between the person's left and right eyes, and the pixel density of the image to be processed.
  • Determining the parallax attribute value of the image pixel points in a region according to the target display effect of the region may include: determining, according to the target display effect of the region, a voltage to be applied to the grating cylindrical lens (lenticular lens) corresponding to the region on the screen, so as to shift the alignment direction of the liquid crystal molecules; and determining, according to the voltage, the parallax attribute value of each image pixel point in the region.
  • For example, according to the target display effect of a region of the image to be processed, it may be determined that a voltage is applied to the grating cylindrical lens corresponding to the region on the screen, so that the alignment direction of the liquid crystal molecules is shifted, thereby changing the refractive index of light.
  • The larger the voltage, the larger the refractive index; then, according to the voltage applied to the grating cylindrical lens, the parallax attribute value of each image pixel point in the region of the image to be processed is determined.
  • As shown in FIG. 10, an indium tin oxide (ITO) layer, that is, a conductive glass layer, can generate an electric field when driven by an external voltage, thereby changing the alignment direction of the liquid crystal molecules in the liquid crystal layer and thus the refractive index of light, which can provide a 3D display effect; as shown in FIG. 10(b), when the external voltage is 0, a 2D display effect can be provided.
  • step S202 may include: determining an image to be processed as a left eye image, determining a parallax image of the image to be processed as a right eye image; or determining the image to be processed as a right eye image, and the image to be processed The parallax image is determined as the left eye image.
  • a 3D image of the left and right eye format can be generated, thereby realizing conversion of the 2D image to the 3D image. It is also possible to generate a corresponding parallax image according to the left eye image or the right eye image of the original 3D image, thereby generating an updated 3D image, and implementing modified editing of the 3D image to adjust the display effect of the region.
  • The method of this embodiment may further include: compressing the left-eye image and the right-eye image respectively in a first direction; interleaving the compressed left-eye image and right-eye image according to a predetermined format to obtain an interlaced image; stretching the interlaced image in the first direction; and displaying the stretched interlaced image.
  • the first direction may be a lateral coordinate direction.
  • this application is not limited thereto.
  • the first direction can also be a longitudinal coordinate direction.
  • For example, the left-eye image and the right-eye image are respectively compressed in the horizontal coordinate direction, and the compressed left-eye image and right-eye image are interleaved in an order of one column of left-eye image pixels followed by one column of right-eye image pixels to obtain an interlaced image.
  • the interlaced image is stretched in the first direction to achieve a full screen effect.
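  • A minimal NumPy sketch of this compress-and-interleave step, assuming the first direction is the horizontal coordinate direction and a column-by-column alternation of left-eye and right-eye pixels; the interleaving format expected by a particular 3D screen may differ:

```python
import numpy as np

def interleave_left_right(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compress both eye images to half width and interleave them column by column
    (one column of left-eye pixels, then one column of right-eye pixels).
    left and right are H x W x 3 arrays; the interlaced frame keeps the full width W."""
    # Compress horizontally to half resolution, e.g. 1920 x 1080 -> 960 x 1080.
    # Keeping every second column is the simplest choice; a real implementation
    # might low-pass filter or average neighbouring columns first.
    left_half = left[:, ::2, :]
    right_half = right[:, ::2, :]
    # Interleave: even output columns come from the left eye, odd columns from the right eye.
    interlaced = np.empty_like(left)
    interlaced[:, 0::2, :] = left_half
    interlaced[:, 1::2, :] = right_half
    return interlaced

# Example: a 1080 x 1920 RGB pair produces a 1080 x 1920 interlaced frame.
left = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.full((1080, 1920, 3), 255, dtype=np.uint8)
print(interleave_left_right(left, right).shape)  # (1080, 1920, 3)
```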
  • FIG. 3 is a flowchart of an example of an image processing method according to an embodiment of the present application.
  • the image processing method provided in this embodiment may be applied to a mobile terminal, and the mobile terminal may provide an editing interface for 2D to 3D image conversion, and the editing interface may include: an image editing area and a control button.
  • the editing interface includes an image editing area 401, an out-of-screen button 402, and an on-screen button 403.
  • The out-of-screen button 402 is configured to control out-of-screen effect processing, and the into-screen button 403 is configured to control into-screen effect processing.
  • this application is not limited thereto.
  • Alternatively, the editing interface may not provide control buttons, and the user may use a key-combination mode to control the out-of-screen or into-screen effect processing; or the editing interface may further include an automatic/manual switching button, which is configured to control the mode of the current image conversion. For example, in the manual mode, the user needs to select, in the image editing area 401, the areas that are to have the corresponding display effects; in the automatic mode, no user selection is required, and the mobile terminal can select the areas according to preset configuration information.
  • the image processing method provided in this embodiment includes steps S301 to S307.
  • In step S301, the source format and the presentation mode are determined.
  • the source format may include at least one of the following types: 2D image, 2D video, 3D image, 3D video; the presentation manner may include one of the following: 2D, 3D.
  • the 3D image may be a 3D image of a left and right eye format, including a left eye image and a right eye image.
  • For example, the source format may be determined according to the source selected by the user, and the presentation mode is determined according to an instruction input by the user.
  • Alternatively, the mobile terminal may determine the presentation mode according to preset configuration information.
  • If the source format is a 2D image and the presentation mode is 3D, steps S302 to S307 may be performed to convert the 2D image into a 3D image for display; if the source format is 2D video and the presentation mode is 3D, each frame of 2D image in the 2D video may be converted into a 3D image to form a 3D video for playback; if the source format is a 3D image and the presentation mode is 2D, the right-eye image may be extracted from the 3D image and displayed after image stretching and similar processing; if the source format is 3D video and the presentation mode is 2D, the right-eye image may be extracted from each frame of 3D image in the 3D video and played after the image stretching processing; if the source format is a 3D image or 3D video and the presentation mode is 3D, steps S304 to S307 may be performed to present the 3D display effect. If the 3D image or 3D video provided by the source has already been interlaced, it can be displayed or played directly after the image stretching processing. This routing is sketched below.
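  • The routing can be summarised as a small dispatcher. The processing steps are passed in as callables because the text defines them only as steps of this embodiment (the S302-S307 conversion, right-eye extraction, and stretching); the helpers used in the usage line are trivial stand-ins, not functions defined by the patent:

```python
def present(source_format, presentation_mode, frames,
            convert_2d_to_3d, extract_right_eye, stretch, display):
    """Route a source through the paths described in this example.
    source_format: '2d_image', '2d_video', '3d_image' or '3d_video';
    presentation_mode: '2d' or '3d'; frames: a list (a single image is a one-frame list)."""
    is_2d_source = source_format in ("2d_image", "2d_video")
    for frame in frames:
        if presentation_mode == "2d":
            # 3D sources are reduced to their right-eye image before 2D display.
            frame = frame if is_2d_source else extract_right_eye(frame)
            display(stretch(frame))
        elif is_2d_source:
            # 2D source presented in 3D: steps S302 to S307.
            display(convert_2d_to_3d(frame))
        else:
            # 3D source presented in 3D: already-interlaced content is stretched and shown.
            display(stretch(frame))

# Usage with trivial stand-ins, just to show the call shape:
present("2d_image", "3d", ["frame0"],
        convert_2d_to_3d=lambda f: f + "_3d",
        extract_right_eye=lambda f: f,
        stretch=lambda f: f,
        display=print)
```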
  • The following takes the case where the source format is a 2D image and the presentation mode is 3D as an example.
  • For example, the user chooses to manually set the region display effects of the 2D image; after the user selects the 2D image, the 2D image to be processed is displayed in the image editing area of the editing interface.
  • the 2D image to be processed in the image editing area 401 is a rabbit.
  • step S302 the selected area in the 2D image and the target display effect are determined.
  • the region segmentation mode and the target display effect of the selected region are determined.
  • For example, the target display effect of one selected area marked by a dashed frame is the out-of-screen effect, and the target display effect of another selected area marked by a dashed frame is the into-screen effect.
  • step S303 3D image conversion is performed based on the determined area.
  • the distance f between the human eye and the screen is a fixed value.
  • For example, the coordinates of the area marked (painted) by the user, as indicated by the dashed frame in FIG. 5, for example [20, 20, 40, 40], can be obtained, and a parallax attribute value is added to each image pixel point in that coordinate area. It should be noted that if an image pixel point in the coordinate area already has a parallax attribute value, the original parallax attribute value is replaced with the new parallax attribute value.
  • the coordinates are represented by [abscissa, ordinate, abscissa, ordinate].
  • the present application does not limit the coordinate determination manner of the area marked by the broken line frame.
  • the coordinates may be determined according to an area intercepted or clicked by the mouse operated by the user in the image editing area 401, or the coordinates of the selected area may be determined according to the touch position of the user on the touch screen.
  • For example, the user sets the target display effect of the dashed-frame area in FIG. 5 to the out-of-screen effect; therefore, the parallax attribute value of each image pixel point in the area is greater than the distance f between the human eye and the screen.
  • At this time, the parallax attribute value of each image pixel point in the unselected area (that is, the area outside the dashed frame in FIG. 5) may be equal to the distance f between the human eye and the screen.
  • the adjustment range of the offset d of each pixel in the selected area is T ⁇ d ⁇ 2T.
  • Similarly, the coordinates of the area marked (painted) by the user, as shown by the dashed frame in FIG. 6, for example [0, 10, 30, 50], can be obtained, and a parallax attribute value is added to each image pixel point in that coordinate area.
  • The user sets the target display effect of the area marked by the dashed frame in FIG. 6 to the into-screen effect; therefore, the parallax attribute value of each image pixel point in the area is smaller than the distance f between the human eye and the screen. At this time, the parallax attribute value of each image pixel point in the unselected areas (that is, the areas outside the marked frames in FIG. 6) may be equal to the distance f between the human eye and the screen. At this time, the adjustment range of the offset d of each image pixel point in the selected dashed-frame area is 0 < d < T.
  • the effect of the 3D image can be smoothed by the probability density algorithm, and the parallax attribute value can be calculated according to the following formula:
  • A 6-inch screen image with a resolution of 1920 × 1080 is taken as an example; the pixel density (Pixels Per Inch, PPI) of a 6-inch screen with a resolution of 1920 × 1080 is equal to 367.
  • For each dashed-frame region, the parallax attribute value of each image pixel point in the region is calculated from the corresponding d value using the above formula.
  • the parallax of each image pixel in the two regions is updated by adjusting the d in the horizontal direction of the pixel in the region coordinates [20, 20, 40, 40] and the region coordinates [0, 10, 30, 50].
  • the parallax attribute value of the image pixel points other than these two areas may be equal to f.
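  • A sketch of how the per-pixel parallax attribute of this example might be held in memory, assuming the marked regions are axis-aligned rectangles given as [x1, y1, x2, y2] and that unselected pixels keep a value equal to f; the storage layout and the concrete values are illustrative assumptions:

```python
import numpy as np

def build_parallax_map(height, width, f, regions):
    """Build an H x W array of parallax attribute values. Unselected pixels get f
    (the eye-to-screen distance); each region is (x1, y1, x2, y2, value), where
    value is the parallax attribute value computed for the region's target effect.
    Regions are applied in order, so a later region overwrites any overlap."""
    parallax = np.full((height, width), float(f))
    for x1, y1, x2, y2, value in regions:
        parallax[y1:y2, x1:x2] = value
    return parallax

f = 400.0                      # assumed eye-to-screen distance
regions = [
    (20, 20, 40, 40, 500.0),   # out-of-screen region of this example: value greater than f
    (0, 10, 30, 50, 300.0),    # into-screen region of this example: value smaller than f
]
parallax_map = build_parallax_map(1080, 1920, f, regions)
print(parallax_map[25, 35], parallax_map[0, 0])  # 500.0 (inside first region) 400.0 (outside both)
```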
  • a parallax image is generated.
  • The generated parallax image is used as the right-eye image and the 2D image is used as the left-eye image to generate a 3D image in left-and-right-eye format, thereby realizing the conversion of the 2D image into a 3D image with regionalized display effects.
  • the distance f between the human eye and the screen can vary in real time.
  • the d value can be adjusted in real time by the f value, and the screen and screen effects can be controlled by the d value.
  • step S304 the converted 3D image is subjected to compression processing.
  • For example, the left-eye image and the right-eye image are respectively compressed in the horizontal coordinate direction, that is, the horizontal pixels are compressed to half; for example, an image with a resolution of 1920 × 1080 is compressed to an image with a resolution of 960 × 1080.
  • step S305 image interleaving is performed on the compressed 3D image.
  • For example, the compressed left-eye and right-eye images in the image signal processor (ISP) channel are interleaved and merged.
  • For example, the left-eye image and the right-eye image are arranged side by side to generate a standard interlaced image with a resolution of 1920 × 1080.
  • step S306 the interlaced image is stretched.
  • For example, the left-eye and right-eye data of the interlaced image are stretched back to a resolution of 1920 × 1080.
  • step S307 a 3D image is displayed.
  • For example, a 3D grating film can be attached to the display surface of the terminal, so that when the display screen plays a 3D image, the left and right eyes of a person receive different left-eye and right-eye image data, as shown in FIG. 9.
  • In FIG. 9, the rectangular-frame marked portions are the left-eye image, and the white rectangular-frame marked portions are the right-eye image.
  • this application is not limited thereto.
  • For example, when the display screen of the terminal plays a 3D image or 3D video, the user can watch it by wearing 3D glasses.
  • the entire 2D image may be processed.
  • the image processing method of the present embodiment can be used to convert a 2D image into a 3D image, and the display effects of different regions can be different.
  • the image processing method of the present embodiment can be used to edit a 3D image.
  • For example, the left-eye image of the original left-and-right-eye-format 3D image may be selected as the image to be processed, region division is performed on the image to be processed, and the regions whose target display effect is to be the out-of-screen or into-screen effect are determined;
  • then the parallax attribute value of each image pixel point in each such region is adjusted, where, if an image pixel point in the original left-eye image has no parallax attribute value, a parallax attribute value may be added to it, and if an image pixel point in the original left-eye image already has a parallax attribute value, it may be updated to the newly determined parallax attribute value.
  • the parallax attribute value of the image pixel except the selected area remains unchanged or set to a preset value, for example, equal to the distance between the human eye and the screen.
  • The original left-eye image is still used as the left-eye image, and the parallax image of the original left-eye image is used as the right-eye image, to obtain an updated left-and-right-eye-format 3D image. In this way, modification of the 3D image can be realized, and the display effects of different regions can be modified.
  • the image processing method provided by the embodiment enables the user to conveniently convert the 2D image into a 3D image, and can adjust the out-of-screen and on-screen effects of the region as needed to present a 3D effect.
  • FIG. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present application. As shown in FIG. 11, the image processing apparatus provided in this embodiment includes a parallax image generating module 1101 and a three-dimensional image generating module 1102.
  • the parallax image generating module 1101 is configured to adjust a parallax property value of an image pixel point in at least one region of the image to be processed, and generate a parallax image of the image to be processed.
  • the three-dimensional image generation module 1102 is configured to generate a three-dimensional image according to the image to be processed and the parallax image of the image to be processed.
  • In an embodiment, the parallax image generating module 1101 may be configured to divide the image to be processed into at least one region, and to determine, for each region of the at least one region, the parallax attribute value of the image pixel points in the region according to the target display effect of the region.
  • the parallax image generating module 1101 may be configured to determine a parallax attribute value of an image pixel point in the area according to a target display effect of the area in the following manner:
  • In an embodiment, the parallax image generating module 1101 may be configured to determine the parallax attribute value of an image pixel point according to the offset of the image pixel point in the following manner: calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the spacing between a person's left and right eyes, and the pixel density of the image to be processed; or calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the spacing between a person's left and right eyes, and the distance between the human eye and the screen.
  • the parallax image generating module 1101 may be configured to calculate a parallax attribute value of the image pixel point according to an offset of the image pixel point, a spacing between the left and right eyes of the person, and a pixel density of the image to be processed. : Calculate the parallax property value of an image pixel according to the following formula:
  • M is the parallax attribute value of the pixel of the image
  • d is the offset of the pixel of the image
  • T is the spacing between the left and right eyes of the person
  • PPI is the pixel density
  • M is the parallax attribute value of the pixel of the image
  • d is the offset of the pixel of the image
  • T is the distance between the left and right eyes of the person
  • f is the distance between the human eye and the screen.
  • In an embodiment, the parallax image generating module 1101 may be configured to determine the offset of each image pixel point in a region according to the target display effect of the region in the following manner: when the target display effect of the region is set to the into-screen effect, determining that the offset d of each image pixel point in the region is greater than 0 and less than T; and when the target display effect of the region is set to the out-of-screen effect, determining that the offset d of each image pixel point in the region is greater than T and less than 2T, where T is the spacing between a person's left and right eyes.
  • In an embodiment, the parallax image generating module 1101 may further be configured to keep the original parallax attribute value of each image pixel point in a region unchanged when no target display effect is set for the region, or to update the parallax attribute value of each image pixel point in the region to a preset value, for example, the distance between the human eye and the screen.
  • In an embodiment, the parallax image generating module 1101 may be configured to determine the offset of each image pixel point in a region according to the target display effect of the region in the following manner: when the distance between the human eye and the screen has not changed, determining an initial parallax attribute value of each image pixel point in the region according to the target display effect of the region; and after the distance between the human eye and the screen changes, determining the offset of the image pixel point after the change according to the initial parallax attribute value of the image pixel point, the changed distance between the human eye and the screen, and the spacing between the person's left and right eyes.
  • In an embodiment, the parallax image generating module 1101 may be configured to determine the parallax attribute value of the image pixel points in a region according to the target display effect of the region in the following manner: determining, according to the target display effect of the region, a voltage to be applied to the grating cylindrical lens corresponding to the region on the screen, so that the alignment direction of the liquid crystal molecules is shifted, thereby changing the refractive index of light, where the larger the voltage, the larger the refractive index; and determining, according to the voltage, the parallax attribute value of each image pixel point in the region.
  • In an embodiment, the three-dimensional image generating module 1102 may be configured to generate a three-dimensional image according to the image to be processed and the parallax image of the image to be processed in the following manner: determining the image to be processed as the left-eye image and determining the parallax image of the image to be processed as the right-eye image; or determining the image to be processed as the right-eye image and determining the parallax image of the image to be processed as the left-eye image.
  • The apparatus provided in this embodiment may further include an image compression module 1203, an image interleaving module 1204, an image stretching module 1205, and an image display module 1206.
  • the image compression module 1203 is configured to compress the left eye image and the right eye image included in the three-dimensional image, respectively, in the first direction.
  • the image interleaving module 1204 is configured to interleave the compressed left eye image and the right eye image in a predetermined format to obtain an interlaced image.
  • the image stretching module 1205 is configured to stretch the interlaced image in a first direction.
  • the image display module 1206 is configured to display the stretched interlaced image.
  • the image compression module 1203 can compress the left eye image and the right eye image according to the aspect ratio of the 3D screen display of the terminal, so as to perform the next image interleaving.
  • the image interleaving module 1204 can interleave the compressed left eye image and the right eye image in a specific format, respectively.
  • the image stretching module 1205 can stretch the interlaced images in equal proportions according to the aspect ratio of the screen to achieve a full screen effect.
  • the image display module 1206 can adopt a 3D grating technology to apply a 3D grating film on the display screen, so that the left and right eyes of the person receive different left and right eye image data.
  • An embodiment of the present application further provides an image processing method, including: determining a selected area of an image to be processed according to an instruction; and adjusting the display effect of the selected area to an out-of-screen or into-screen effect.
  • The selected area may include at least one first selected area and at least one second selected area; the display effect of the first selected area is the out-of-screen effect, and the display effect of the second selected area is the into-screen effect.
  • the first selected area may include a dotted line frame area in FIG. 6, and the second selected area may include a dotted line frame marked area in FIG.
  • the embodiment of the present application further provides a computer readable medium storing an image processing program, where the image processing program is executed by a processor to implement the steps of the image processing method.
  • All or some of the steps, systems, functional blocks or units in the methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
  • the division between functional modules or units mentioned in the above description does not necessarily correspond to the division of physical components.
  • one physical component can have multiple functions, or one function or step can be performed cooperatively by several physical components.
  • Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor; some or all of the components may also be implemented as hardware; some or all of the components may also be implemented It is an integrated circuit, such as an application specific integrated circuit.
  • Such software may be distributed on a computer readable medium, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • The term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information, such as computer readable instructions, data structures, program modules, or other data.
  • The computer storage medium includes, but is not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • Communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and can include any information delivery media.

Abstract

An image processing method, comprising: adjusting a disparity attribute value of an image pixel point in at least one area of an image to be processed, and generating a disparity image of the image to be processed; and generating a three dimensional image according to the image to be processed and the disparity image of the image to be processed.

Description

Image processing method and device
The present disclosure claims priority to Chinese patent application No. 201710813002.6, filed on September 11, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to image processing techniques, for example, to an image processing method and apparatus.
Background
With the continuous development of display technology and digital technology, three-dimensional (3D) display has become a hot spot among display products. There are two main sources of 3D content: one is to shoot 3D sources with 3D capture devices (such as 3D video cameras, 3D cameras, etc.); the other is to convert two-dimensional (2D) content into 3D content. For the second approach, the entire 2D image is usually processed by a single 3D algorithm to convert it into a 3D image; the details and local parts of the image cannot be refined, and different display effects for different regions cannot be realized.
Summary
The following is an overview of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiments of the present application provide an image processing method and device, which can implement adjustment of a three-dimensional display effect.
In a first aspect, an embodiment of the present application provides an image processing method, including: adjusting the parallax attribute value of image pixel points in at least one region of an image to be processed, and generating a parallax image of the image to be processed; and generating a three-dimensional image according to the image to be processed and the parallax image of the image to be processed.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: a parallax image generating module configured to adjust the parallax attribute value of image pixel points in at least one region of an image to be processed and generate a parallax image of the image to be processed; and a three-dimensional image generating module configured to generate a three-dimensional image according to the image to be processed and the parallax image of the image to be processed.
In a third aspect, an embodiment of the present application provides a terminal, including: a memory, a processor, and an image processing program stored in the memory and executable on the processor, where the image processing program, when executed by the processor, implements the steps of the image processing method provided by the first aspect.
In a fourth aspect, an embodiment of the present application provides an image processing method, including: determining a selected area of an image to be processed according to an instruction; and adjusting the display effect of the selected area to an out-of-screen or into-screen effect.
In a fifth aspect, an embodiment of the present application provides a computer readable medium storing an image processing program, where the image processing program, when executed by a processor, implements the steps of the image processing method provided by the first aspect or the fourth aspect.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware structure of a terminal for implementing an image processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is an exemplary flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an editing interface for 2D-to-3D image conversion according to an embodiment of the present application;
FIG. 5 is a schematic diagram of region division of a 2D image according to an embodiment of the present application;
FIG. 6 is another schematic diagram of region division of a 2D image according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the principle of forming the stereoscopic display effect of a 3D image according to an embodiment of the present application;
FIG. 8 is a schematic diagram of image interleaving according to an embodiment of the present application;
FIG. 9 is a schematic diagram of displaying a 3D image through a grating according to an embodiment of the present application;
FIG. 10 is a schematic diagram of applying a voltage to a grating cylindrical lens according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 12 is another schematic diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be understood that the embodiments described below are only intended to illustrate and explain the present application, and are not intended to limit the present application.
FIG. 1 is a schematic diagram of the hardware structure of a terminal for implementing the image processing method provided by an embodiment of the present application. The terminal of this embodiment may include, but is not limited to, mobile terminals such as a laptop computer, a tablet computer, a mobile phone, a media player, a personal digital assistant (PDA), and a projector, as well as fixed terminals such as a digital television (TV) and a desktop computer. Illustratively, the above terminal can support 3D video and picture capturing and playing functions.
As shown in FIG. 1, the terminal 10 of this embodiment includes a memory 14 and a processor 12. The terminal structure shown in FIG. 1 does not constitute a limitation on the terminal; the terminal may include more or fewer components than those illustrated, may combine certain components, or may have a different arrangement of components.
The processor 12 may include, but is not limited to, a processing device such as a microcontroller unit (MCU) or a field-programmable gate array (FPGA). The memory 14 may be configured to store software programs and modules of application software, such as the program instructions or modules corresponding to the image processing method in this embodiment; the processor 12 executes various functional applications and data processing, that is, implements the image processing method of this embodiment, by running the software programs and modules stored in the memory 14. The memory 14 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic storage device, flash memory, or other non-volatile solid-state memory. In some examples, the memory 14 may include memory remotely located relative to the processor 12, and such remote memory may be connected to the terminal 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In an embodiment, the terminal 10 may further include a communication unit 16; the communication unit 16 may receive or transmit data via a network. In one example, the communication unit 16 may be a radio frequency (RF) module configured to communicate with the Internet wirelessly.
In an embodiment, the terminal 10 may further include a display unit configured to display information input by the user or information provided to the user. The display unit may include a display panel, and the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The principle of forming the stereoscopic display effect of a 3D image is first described below with reference to FIG. 7.
A typical 2D display images the left-eye image and the right-eye image at the position of the screen without parallax, so such a display has no stereoscopic effect. A stereoscopic effect can be produced when the left-eye image and the right-eye image are imaged with parallax at the position of the screen. If the right-eye image is located to the right of the left-eye image on the screen, the convergence point of the two (that is, the image point formed in the human brain) is located behind the screen, producing a stereoscopic effect recessed behind the screen, namely the into-screen effect; if the right-eye image is located to the left of the left-eye image on the screen, the convergence point is located in front of the screen, producing a stereoscopic effect protruding from the screen, namely the out-of-screen effect.
As shown in FIG. 7, T is the distance between a person's left and right eyes; the value of T can be obtained from the average spacing between a person's left and right eyes, so it is usually constant. f is the distance between the human eye and the screen; f can be fixed or can change in real time. If f changes in real time, the eye-tracking function of the terminal's front camera can be turned on to detect the distance between the human eye and the screen in real time. d is the offset of an image pixel point, d = |L_S - R_S|, where L_S represents the position on the screen of that image pixel point in the left-eye image, and R_S represents the position on the screen of that image pixel point in the right-eye image. In FIG. 7, L_S1 and R_S1 represent the positions on the screen of image pixel point 1 in the left-eye and right-eye images, and L_S2 and R_S2 represent the positions on the screen of image pixel point 2 in the left-eye and right-eye images. In FIG. 7, point P1 is an into-screen imaging point of the 3D image and point P2 is an out-of-screen imaging point of the 3D image. M1 represents the parallax attribute value of image pixel point 1, that is, the vertical distance between its into-screen imaging position and the human eye.
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present application. The image processing method provided by this embodiment is used to convert an image to be processed into a 3D image including a left-eye image and a right-eye image. The image to be processed may be a 2D image, the left-eye image of a 3D image, or the right-eye image of a 3D image. In other words, the image processing method provided by this embodiment can be used to convert a 2D image into a 3D image, or to edit and modify a 3D image. Moreover, the method can also be used to convert 2D video into 3D video, or to edit and modify 3D video. For example, by converting each 2D frame of a 2D video into a 3D image, a 3D video is obtained, so that a 3D effect is presented during image display or video playback; or, by adjusting the left-eye image of any frame of a 3D video, an updated right-eye image is obtained, presenting a display effect different from that of the original 3D video.
As shown in FIG. 2, the image processing method provided by this embodiment includes step S201 and step S202.
In step S201, the parallax attribute values of image pixel points in at least one region of the image to be processed are adjusted, and a parallax image of the image to be processed is generated.
In step S202, a 3D image is generated according to the image to be processed and the parallax image of the image to be processed.
In an exemplary embodiment, step S201 may include: dividing the image to be processed into at least one region; and, for a region of the at least one region, determining the parallax attribute values of the image pixel points in that region according to the target display effect of that region.
Each image pixel point may include multiple attributes, for example, red-green-blue (RGB) attributes and a parallax attribute. In this embodiment, the 2D-to-3D conversion, or the adjustment of the 3D display effect of a region, is achieved by adjusting the parallax attribute of each image pixel point.
After the parallax attribute value of each image pixel point in the region is determined according to the target display effect of the region, if the image pixel point already has a parallax attribute value, the currently determined parallax attribute value replaces the original one; if the image pixel point has no parallax attribute value set, a parallax attribute is added to the image pixel point and takes the currently determined parallax attribute value.
The target display effect may include one of the following: the screen-out effect and the screen-in effect. The screen-out effect and the screen-in effect have been described above and are not repeated here. In this embodiment, the target display effects of different regions may be the same or different. However, the present application is not limited thereto.
In an embodiment, the image processing method of this embodiment may further include: determining, according to a received instruction or preset configuration information, the region segmentation manner of the image to be processed and the target display effect of at least one region.
The region segmentation manner of the image to be processed and the target display effect of each region may be set by the user, or may be determined according to a preset configuration. However, the present application is not limited thereto.
After the image to be processed is segmented into regions, at least one of the segmented regions may be selected for adjustment of the target display effect.
In an embodiment, step S201 may further include: when no target display effect is set for a region, keeping the original parallax attribute value of each image pixel point in that region unchanged; or updating the parallax attribute value of each image pixel point in that region to a preset value.
For a region for which no target display effect is set (for example, an unselected region), if each image pixel point of the image to be processed already has a parallax attribute value, the parallax attribute values of the image pixel points in that region are kept unchanged. If the image pixel points of the image to be processed have no parallax attribute value, a parallax attribute value may be added to each image pixel point, and the parallax attribute value of each image pixel point in that region may be set to a preset value, for example, the distance between the human eyes and the screen. Alternatively, if each image pixel point of the image to be processed already has a parallax attribute value, the parallax attribute value of each image pixel point in that region may be updated to the preset value, for example, the distance between the human eyes and the screen.
In this embodiment, the parallax attribute values of the image pixel points in the selected region and the unselected region of the image to be processed can be determined in different ways, and the parallax attribute values of the image pixel points in different regions of the image to be processed can differ, so that different display effects can be produced in different regions.
In an embodiment, determining the parallax attribute values of the image pixel points in a region according to the target display effect of the region may include: determining the offset of each image pixel point in the region according to the target display effect of the region; and, for each image pixel point in the region, determining the parallax attribute value of that image pixel point according to its offset.
In other words, in this embodiment, the parallax attribute value of an image pixel point in a region can be determined from the offset of that image pixel point. The offset of an image pixel point refers to the absolute value of the difference between the position of that image pixel point on the display screen in the left-eye image and its position on the display screen in the right-eye image.
In an embodiment, determining the offset of each image pixel point in a region according to the target display effect of the region may include: when the target display effect of the region is set to the screen-in effect, determining that the offset d of each image pixel point in the region takes a value greater than 0 and less than T; and when the target display effect of the region is set to the screen-out effect, determining that the offset d of each image pixel point in the region takes a value greater than T and less than 2T, where T is the distance between a person's left and right eyes.
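A minimal sketch of this rule, assuming a helper name and the convention that T is given in the same units as d (both illustration choices, not part of the original disclosure):

```python
def offset_range_for_effect(effect: str, T: float) -> tuple[float, float]:
    """Return the open interval (lower, upper) in which the offset d of each
    pixel in a region must lie for the requested target display effect.

    T is the interocular distance, in the same units as d.
    """
    if effect == "screen-in":
        return (0.0, T)        # 0 < d < T  -> fused point behind the screen
    if effect == "screen-out":
        return (T, 2.0 * T)    # T < d < 2T -> fused point in front of the screen
    raise ValueError(f"unknown target display effect: {effect!r}")


lo, hi = offset_range_for_effect("screen-out", T=65.0)  # e.g. T of about 65 mm
```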
In an embodiment, determining the parallax attribute value of an image pixel point according to its offset may include: calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the pixel density of the image to be processed; or calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the distance between the human eyes and the screen.
In an embodiment, calculating the parallax attribute value of an image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the pixel density of the image to be processed may include: calculating the parallax attribute value of the image pixel point according to the following formula:
Figure PCTCN2018104381-appb-000001
where M is the parallax attribute value of the image pixel point, d is the offset of the image pixel point, T is the distance between a person's left and right eyes, and PPI is the pixel density.
The value of T can be obtained from the average distance between a person's left and right eyes. The PPI can be determined from the image resolution and the screen size; taking a resolution of a×b and a screen size (diagonal) of c as an example,
PPI = √(a² + b²) / c.
In an embodiment, calculating the parallax attribute value of an image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the distance between the human eyes and the screen may include: calculating the parallax attribute value of the image pixel point according to the following formula: M = f×T/d,
where M is the parallax attribute value of the image pixel point, d is the offset of the image pixel point, T is the distance between a person's left and right eyes, and f is the distance between the human eyes and the screen.
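A minimal sketch of these two quantities, with function names and units chosen for illustration (d, T and f in the same length unit, resolution in pixels, screen diagonal c in inches):

```python
import math


def pixel_density(a: int, b: int, c: float) -> float:
    """PPI of a screen with resolution a x b pixels and diagonal c inches."""
    return math.sqrt(a * a + b * b) / c


def parallax_from_eye_distance(d: float, T: float, f: float) -> float:
    """Parallax attribute value M = f * T / d, as given in this embodiment.

    d: offset of the image pixel point, T: interocular distance,
    f: distance between the eyes and the screen.
    """
    return f * T / d


print(round(pixel_density(1920, 1080, 6.0)))  # 367, matching the 6-inch 1920x1080 example
print(parallax_from_eye_distance(d=97.5, T=65.0, f=400.0))
```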
In an embodiment, when the target display effect of the entire image to be processed is set to the screen-out effect, the offset of each image pixel point in the image may be determined to be 1.5T; when the target display effect of the entire image is set to the screen-in effect, the offset of each image pixel point may be determined to be 0.5T, where T is the distance between a person's left and right eyes. However, the present application is not limited thereto.
In an embodiment, determining the offset of each image pixel point in a region according to the target display effect of the region may include: when the distance between the human eyes and the screen has not changed, determining the initial parallax attribute value of each image pixel point in the region according to the target display effect of the region; and, after the distance between the human eyes and the screen has changed, determining the offset of the image pixel point after the change according to the initial parallax attribute value of the image pixel point, the changed distance between the human eyes and the screen, and the distance between a person's left and right eyes.
In an embodiment, the distance between the human eyes and the screen can be obtained in real time through the eye-tracking function of a camera device. When the distance between the human eyes and the screen has not changed, the initial offset of each image pixel point in a region may be determined according to the target display effect of the region; then, for each image pixel point in the region, the initial parallax attribute value of the image pixel point is calculated from its initial offset, the distance between a person's left and right eyes, and the pixel density of the image to be processed. After the distance between the human eyes and the screen has changed, the offset of the image pixel point after the change is calculated from the initial parallax attribute value of the image pixel point, the changed distance between the human eyes and the screen, and the distance between a person's left and right eyes, that is, d = f×T/M0, where d is the offset of the image pixel point after the distance between the human eyes and the screen has changed, M0 is the initial parallax attribute value of the image pixel point, f is the distance between the human eyes and the screen obtained in real time, and T is the distance between a person's left and right eyes. Afterwards, the parallax attribute value of the image pixel point after the change can be calculated from the new offset, the distance between a person's left and right eyes, and the pixel density of the image to be processed.
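A minimal sketch of this update step (the function name is assumed; recomputing M from the new offset is left to whichever PPI-based formula the implementation uses):

```python
def updated_offset(M0: float, f: float, T: float) -> float:
    """Recompute a pixel's offset after the eye-screen distance changes.

    M0: initial parallax attribute value of the pixel,
    f:  eye-screen distance measured in real time (e.g. via eye tracking),
    T:  interocular distance. Implements d = f * T / M0.
    """
    return f * T / M0


# Example: the viewer moves closer, so f shrinks and d is re-derived.
d_new = updated_offset(M0=400.0, f=350.0, T=65.0)
```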
In an embodiment, determining the parallax attribute values of the image pixel points in a region according to the target display effect of the region may include: determining, according to the target display effect of the region, the voltage to be applied to the lenticular grating on the screen corresponding to the region, so as to shift the alignment direction of the liquid crystal molecules; and determining the parallax attribute value of each image pixel point in the region according to that voltage.
According to the target display effect of the region of the image to be processed, the voltage applied to the lenticular grating on the screen corresponding to the region can be determined, so that the alignment direction of the liquid crystal molecules is shifted and the refractive index of the light is changed; the larger the voltage, the larger the refractive index. Then, according to that voltage, the parallax attribute value of each image pixel point in the region of the image to be processed is determined.
In an embodiment, as shown in FIG. 10, the parallax attribute value of each image pixel point in the region of the image to be processed is determined according to the voltage applied to the lenticular grating. As shown in FIG. 10(a), driving with an external voltage causes the indium tin oxide (ITO) layer, that is, the conductive glass layer, to generate an electric field, thereby changing the alignment direction of the liquid crystal molecules in the liquid crystal layer and changing the refractive index of the light, so that a 3D display effect can be provided. As shown in FIG. 10(b), when the external voltage is 0, a 2D display effect is provided.
In an embodiment, step S202 may include: determining the image to be processed as the left-eye image and the parallax image of the image to be processed as the right-eye image; or determining the image to be processed as the right-eye image and the parallax image of the image to be processed as the left-eye image.
In this embodiment, a 3D image in left-right-eye format can be generated from the original 2D image and the parallax image obtained from it, thereby converting a 2D image into a 3D image. Alternatively, a corresponding parallax image can be generated from the left-eye image or the right-eye image of an original 3D image, thereby generating an updated 3D image and enabling editing of the 3D image to adjust the display effect of a region.
In an embodiment, after step S202, the method of this embodiment may further include: compressing the left-eye image and the right-eye image respectively in a first direction; interleaving the compressed left-eye image and right-eye image according to a predetermined format to obtain an interlaced image; stretching the interlaced image in the first direction; and displaying the stretched interlaced image.
The first direction may be the horizontal coordinate direction. However, the present application is not limited thereto. In other implementations, the first direction may also be the vertical coordinate direction.
When the first direction is the horizontal coordinate direction, after the left-eye image and the right-eye image are each compressed in the horizontal coordinate direction, the left-eye image and the right-eye image are interleaved in the order of one column of left-eye image pixels followed by one column of right-eye image pixels, obtaining the interlaced image. The interlaced image is then stretched in the first direction to achieve a full-screen effect.
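A minimal sketch of this compress–interleave–stretch pipeline using NumPy (the half-width compression by column subsampling and nearest-neighbour stretching are illustrative simplifications; a real implementation would typically use a proper resampling filter):

```python
import numpy as np


def interleave_columns(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compress both views to half width, then alternate one column of
    left-eye pixels with one column of right-eye pixels.

    left, right: H x W x 3 arrays of the same shape and dtype.
    """
    h, w, c = left.shape
    # Horizontal compression to half width (nearest-neighbour subsampling here).
    left_half = left[:, ::2, :]
    right_half = right[:, ::2, :]
    out = np.empty((h, left_half.shape[1] * 2, c), dtype=left.dtype)
    out[:, 0::2, :] = left_half    # even columns: left-eye pixels
    out[:, 1::2, :] = right_half   # odd columns: right-eye pixels
    return out


def stretch_to_width(img: np.ndarray, width: int) -> np.ndarray:
    """Stretch the interlaced image horizontally to the full screen width
    by repeating columns (nearest-neighbour)."""
    idx = np.arange(width) * img.shape[1] // width
    return img[:, idx, :]
```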
FIG. 3 is an example flowchart of an image processing method according to an embodiment of the present application. The image processing method provided by this embodiment can be applied to a mobile terminal, and the mobile terminal can provide an editing interface for 2D-to-3D image conversion. The editing interface may include an image editing area and control buttons. As shown in FIG. 4, the editing interface includes an image editing area 401, a screen-out button 402, and a screen-in button 403. The screen-out button 402 is configured to control screen-out effect processing, and the screen-in button 403 is configured to control screen-in effect processing. However, the present application is not limited thereto. In other implementations, the editing interface may have no control buttons, and the user may control screen-out or screen-in effect processing with key combinations. Alternatively, the editing interface may further include an automatic/manual switching button configured to control the current image conversion mode: for example, in manual mode the user is required to select a region with the desired display effect in the image editing area 401, whereas in automatic mode no user selection is needed and the mobile terminal can select regions according to preset configuration information.
As shown in FIG. 3, the image processing method provided by this embodiment includes steps S301 to S307.
In step S301, the source format and the presentation mode are determined.
The source format may include at least one of the following types: 2D image, 2D video, 3D image, and 3D video. The presentation mode may be one of 2D and 3D. A 3D image may be a 3D image in left-right-eye format, including a left-eye image and a right-eye image.
In this embodiment, the source format may be determined from the source selected by the user, and the presentation mode may be determined from an instruction input by the user. However, the present application is not limited thereto. In other implementations, the mobile terminal may determine the presentation mode according to preset configuration information.
It should be noted that, in this embodiment, if the source format is a 2D image or 2D video and the presentation mode is 2D, the 2D image can be displayed or the 2D video played normally. If the source format is a 2D image or 2D video and the presentation mode is 3D, steps S302 to S307 can be performed to convert the 2D image into a 3D image before display; when the presentation mode is 3D, each 2D frame of a 2D video can likewise be converted into a 3D image to form a 3D video for playback. If the source format is a 3D image or 3D video and the presentation mode is 2D, the right-eye image can be extracted from the 3D image and displayed after processing such as image stretching; for 3D video presented in 2D, the right-eye image can be extracted from each 3D frame and played back after image stretching. If the source format is a 3D image or 3D video and the presentation mode is 3D, steps S304 to S307 can be performed to present the 3D display effect. If the 3D image or 3D video provided by the source has already been interlaced, it can be displayed or played directly after image stretching.
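A minimal sketch of this dispatch logic (the string labels and the way the processing paths are named are assumptions for illustration, not part of the original disclosure):

```python
def plan_pipeline(source: str, presentation: str) -> str:
    """Pick a processing path for a (source format, presentation mode) pair,
    following the decision table described above."""
    if source in ("2D image", "2D video"):
        return "display as-is" if presentation == "2D" else "convert via S302-S307"
    if source in ("3D image", "3D video"):
        if presentation == "2D":
            return "extract right-eye view, stretch, display"
        return "run S304-S307 (or stretch and display if already interlaced)"
    raise ValueError(f"unknown source format: {source!r}")
```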
The following description takes a 2D-image source presented in 3D as an example. In this example, the user chooses to set the region display effects of the 2D image manually; after the user selects the 2D image, the 2D image to be processed is shown in the image editing area of the editing interface. As shown in FIG. 4, the 2D image to be processed in the image editing area 401 is a rabbit.
In step S302, the selected regions in the 2D image and their target display effects are determined.
In this step, the region segmentation manner and the target display effects of the selected regions are determined for the 2D image to be processed. For example, in FIG. 6, the target display effect of the region selected by the dashed box is the screen-out effect, and the target display effect of the region selected by the dash-dot box is the screen-in effect.
In step S303, 3D image conversion is performed according to the determined regions.
In an embodiment, the case where the distance f between the human eyes and the screen is a fixed value is taken as an example.
When the user selects the screen-out button 402 on the editing interface shown in FIG. 4 and paints over the rabbit's head region in the image editing area 401 (the region marked by the dashed box in FIG. 5), the coordinates of the painted region, for example [20, 20, 40, 40], can be obtained, and a parallax attribute value is added to each image pixel point in that coordinate region. It should be noted that if each image pixel point in the coordinate region already has a parallax attribute value, the original parallax attribute value is replaced with the new one.
The coordinates are expressed in the form [abscissa, ordinate, abscissa, ordinate]. The present application does not limit how the coordinates of the dashed-box region are determined. For example, the coordinates may be determined from the region captured or clicked with the mouse in the image editing area 401, or from the user's touch position on a touch screen.
In this example, the user sets the target display effect of the dashed-box region in FIG. 5 to the screen-out effect; therefore, the parallax attribute value of each image pixel point in that region must be greater than the distance f between the human eyes and the screen, and the parallax attribute value of each pixel point in the unselected region (that is, the region outside the dashed box in FIG. 5) may be equal to f. In this case, the adjustment range of the offset d of each pixel point in the selected region is T < d < 2T.
When the user selects the screen-in button 403 on the editing interface shown in FIG. 4 and paints over the rabbit's tail region in the image editing area 401 (the region marked by the dash-dot box in FIG. 6), the coordinates of the painted region, for example [0, 10, 30, 50], can be obtained, and a parallax attribute value is added to each image pixel point in that coordinate region.
In this example, the user sets the target display effect of the dash-dot-box region in FIG. 6 to the screen-in effect; therefore, the parallax attribute value of each image pixel point in that region must be less than the distance f between the human eyes and the screen. The parallax attribute value of each image pixel point in the unselected region (that is, the region outside the dashed box and the dash-dot box in FIG. 6) may be equal to f. In this case, the adjustment range of the offset d of each image pixel point in the dash-dot-box region is 0 < d < T.
In this example, to make the 3D image transition smoothly, a probability-density algorithm can be used to smooth the effect of the 3D image, and the parallax attribute value can then be calculated according to the following formula:
Figure PCTCN2018104381-appb-000003
Taking the image of a 6-inch screen with a resolution of 1920×1080 as an example, the pixel density (pixels per inch, PPI) of such an image is 367.
In this example, in the dashed-box region, d = 2T at the central image pixel point, and d of the neighbouring image pixel points decreases progressively within the range T < d < 2T; the parallax attribute value of each image pixel point in the dashed-box region is calculated from its corresponding d value using the above formula. In the dash-dot-box region, d = T at the central image pixel point, and d of the neighbouring image pixel points decreases progressively within the range 0 < d < T; the parallax attribute value of each image pixel point in the dash-dot-box region is likewise calculated from its corresponding d value using the above formula.
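The smoothing formula itself is only available as an image in the source, so the following sketch makes an explicit assumption: it builds a per-pixel offset map with a simple linear falloff of d from the region centre toward its edges (any radially decreasing profile, such as a Gaussian-style density, could be substituted):

```python
import numpy as np


def offset_map(x1: int, y1: int, x2: int, y2: int,
               d_center: float, d_edge: float) -> np.ndarray:
    """Per-pixel offset map for the region [x1, y1, x2, y2].

    d equals d_center at the region centre and decreases linearly to d_edge
    at the region boundary, so the 3D effect fades smoothly toward the edges.
    """
    h, w = y2 - y1, x2 - x1
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Normalised distance from the centre: 0 at the centre, 1 at the border.
    r = np.maximum(np.abs(yy - cy) / max(cy, 1e-6),
                   np.abs(xx - cx) / max(cx, 1e-6))
    return d_center + (d_edge - d_center) * np.clip(r, 0.0, 1.0)


# Screen-out region [20, 20, 40, 40]: d = 2T at the centre, approaching T at the edge.
T = 65.0
d_out = offset_map(20, 20, 40, 40, d_center=2 * T, d_edge=T)
```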
In this example, by adjusting d in the horizontal direction for the pixels within the region with coordinates [20, 20, 40, 40] and the region with coordinates [0, 10, 30, 50], the parallax attribute value of each image pixel point in these two regions is updated. The parallax attribute values of the image pixel points outside these two regions may be equal to f.
In this example, after the parallax attribute value of each image pixel point in the 2D image has been updated, a parallax image is generated. The generated parallax image is taken as the right-eye image and the 2D image as the left-eye image to generate a 3D image in left-right-eye format, thereby converting the 2D image into a 3D image whose display effect can be set per region.
In another example, the distance f between the human eyes and the screen may change in real time. In this case, the eye-tracking function of the front camera can be enabled to detect the distance between the human eyes and the screen in real time. In this example, the initial parallax attribute value can be determined from the initial d using the smoothed parallax-attribute-value formula above; when f changes, the d value can be adjusted according to d = f×T/M0, where M0 is the initial parallax attribute value; after d has been updated, the updated M is then calculated again using the smoothed parallax-attribute-value formula. In this way, the d value can be adjusted in real time through the f value, and the screen-out and screen-in effects are controlled through the d value.
In step S304, the 3D image obtained by the conversion is compressed.
In this step, the left-eye image and the right-eye image are each compressed in the horizontal coordinate direction, that is, the horizontal pixels are compressed to half; for example, an image with a resolution of 1920×1080 is compressed to a resolution of 960×1080.
In step S305, image interleaving is performed on the compressed 3D image.
In this step, the compressed left-eye and right-eye images in the image signal processor (ISP) channel are interleaved and fused. As shown in FIG. 8, the left-eye image and the right-eye image are arranged side by side, generating a standard interlaced image with a resolution of 1920×1080.
In step S306, the interlaced image is stretched.
In this step, the left-eye and right-eye data of the interlaced image are stretched back to a resolution of 1920×1080.
In step S307, the 3D image is displayed.
In this example, a 3D grating film can be attached to the surface of the display screen of the terminal, so that when the display screen plays a 3D image, a person's left and right eyes receive different left-eye and right-eye image data, as shown in FIG. 9, where the hatched portions mark the left-eye image and the white rectangular portions mark the right-eye image. However, the present application is not limited thereto. In other implementations, when the display screen of the terminal plays a 3D image or 3D video, the user may watch it by wearing 3D glasses.
It should be noted that, in other implementations, after the user selects the 2D image to be processed, the entire 2D image may be processed. For example, when the screen-out effect is selected, the d value is increased to 1.5T; when the screen-in effect is selected, the d value is reduced to 0.5T. Then, the parallax attribute value M of each image pixel point within [0, 0, 1080, 1920] is automatically calculated from M = f×T/d, and the parallax image is generated.
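A minimal sketch of this whole-image path (function and variable names are assumed; it simply applies the same d to every pixel and evaluates M = f×T/d):

```python
import numpy as np


def whole_image_parallax(height: int, width: int, effect: str,
                         T: float, f: float) -> np.ndarray:
    """Parallax attribute map when the entire image is processed at once.

    Screen-out uses d = 1.5 * T, screen-in uses d = 0.5 * T, and every pixel
    gets M = f * T / d.
    """
    d = 1.5 * T if effect == "screen-out" else 0.5 * T
    return np.full((height, width), f * T / d, dtype=np.float32)


M_map = whole_image_parallax(1080, 1920, "screen-out", T=65.0, f=400.0)
```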
In the embodiment shown in FIG. 3, the image processing method of this embodiment can be used to convert a 2D image into a 3D image, and the display effects of different regions can differ.
In another embodiment, the image processing method of this embodiment can be used to edit a 3D image. In this embodiment, the left-eye image of an original 3D image in left-right-eye format can be selected as the image to be processed; region division is performed on this image, and a region whose target display effect is the screen-out or screen-in effect is determined. For that region, the parallax attribute value of each image pixel point in the region is adjusted, where, if the image pixel points of the original left-eye image have no parallax attribute value set, a parallax attribute value can be added to each image pixel point, and, if the image pixel points of the original left-eye image already have parallax attribute values, they can be updated according to the newly determined parallax attribute values. For how the parallax attribute value of each image pixel point in the region is calculated, reference can be made to the description of the embodiment shown in FIG. 3, which is not repeated here. In this embodiment, the parallax attribute values of the image pixel points outside the selected region remain unchanged or are set to a preset value, for example, equal to the distance between the human eyes and the screen. In this embodiment, after the parallax image of the original left-eye image is generated, the original left-eye image is still taken as the left-eye image, and its parallax image is taken as the right-eye image, obtaining an updated 3D image in left-right-eye format. In this way, the 3D image is modified and the display effects of different regions can be changed.
In summary, the image processing method provided by this embodiment allows the user to conveniently convert a 2D image into a 3D image, and the screen-out and screen-in effects of regions can be adjusted as needed to present a 3D effect.
FIG. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present application. As shown in FIG. 11, the image processing apparatus provided by this embodiment includes a parallax image generation module 1101 and a three-dimensional image generation module 1102.
The parallax image generation module 1101 is configured to adjust the parallax attribute values of image pixel points in at least one region of the image to be processed and generate a parallax image of the image to be processed.
The three-dimensional image generation module 1102 is configured to generate a three-dimensional image according to the image to be processed and the parallax image of the image to be processed.
In an embodiment, the parallax image generation module 1101 may be configured to divide the image to be processed into at least one region and, for a region of the at least one region, determine the parallax attribute values of the image pixel points in that region according to the target display effect of that region.
In an embodiment, the parallax image generation module 1101 may be configured to determine the parallax attribute values of the image pixel points in a region according to the target display effect of the region in the following manner:
determining the offset of each image pixel point in the region according to the target display effect of the region; and, for each image pixel point in the region, determining the parallax attribute value of that image pixel point according to its offset.
In an embodiment, the parallax image generation module 1101 may be configured to determine the parallax attribute value of an image pixel point according to its offset in the following manner: calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the pixel density of the image to be processed; or calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the distance between the human eyes and the screen.
In an embodiment, the parallax image generation module 1101 may be configured to calculate the parallax attribute value of an image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the pixel density of the image to be processed in the following manner: calculating the parallax attribute value of the image pixel point according to the following formula:
Figure PCTCN2018104381-appb-000004
where M is the parallax attribute value of the image pixel point, d is the offset of the image pixel point, T is the distance between a person's left and right eyes, and PPI is the pixel density.
In an embodiment, the parallax image generation module 1101 may be configured to calculate the parallax attribute value of an image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the distance between the human eyes and the screen in the following manner: calculating the parallax attribute value of the image pixel point according to the following formula: M = f×T/d,
where M is the parallax attribute value of the image pixel point, d is the offset of the image pixel point, T is the distance between a person's left and right eyes, and f is the distance between the human eyes and the screen.
In an embodiment, the parallax image generation module 1101 may be configured to determine the offset of each image pixel point in a region according to the target display effect of the region in the following manner: when the target display effect of the region is set to the screen-in effect, determining that the offset d of each image pixel point in the region takes a value greater than 0 and less than T; and when the target display effect of the region is set to the screen-out effect, determining that the offset d of each image pixel point in the region takes a value greater than T and less than 2T, where T is the distance between a person's left and right eyes.
In an embodiment, the parallax image generation module 1101 may further be configured to, when no target display effect is set for a region, keep the original parallax attribute value of each image pixel point in the region unchanged, or update the parallax attribute value of each image pixel point in the region to a preset value, for example, the distance between the human eyes and the screen.
In an embodiment, the parallax image generation module 1101 may be configured to determine the offset of each image pixel point in a region according to the target display effect of the region in the following manner: when the distance between the human eyes and the screen has not changed, determining the initial parallax attribute value of each image pixel point in the region according to the target display effect of the region; and, after the distance between the human eyes and the screen has changed, determining the offset of the image pixel point after the change according to the initial parallax attribute value of the image pixel point, the changed distance between the human eyes and the screen, and the distance between a person's left and right eyes.
In an embodiment, the parallax image generation module 1101 may be configured to determine the parallax attribute values of the image pixel points in a region according to the target display effect of the region in the following manner: determining, according to the target display effect of the region, the voltage to be applied to the lenticular grating on the screen corresponding to the region, so as to shift the alignment direction of the liquid crystal molecules and change the refractive index of the light, where the larger the voltage, the larger the refractive index; and determining the parallax attribute value of each image pixel point in the region according to that voltage.
In an embodiment, the three-dimensional image generation module 1102 may be configured to generate the three-dimensional image according to the image to be processed and its parallax image in the following manner: determining the image to be processed as the left-eye image and the parallax image of the image to be processed as the right-eye image; or determining the image to be processed as the right-eye image and the parallax image of the image to be processed as the left-eye image.
As shown in FIG. 12, the apparatus provided by this embodiment may further include an image compression module 1203, an image interleaving module 1204, an image stretching module 1205, and an image display module 1206.
The image compression module 1203 is configured to compress, in a first direction, the left-eye image and the right-eye image included in the three-dimensional image, respectively.
The image interleaving module 1204 is configured to interleave the compressed left-eye image and right-eye image according to a predetermined format to obtain an interlaced image.
The image stretching module 1205 is configured to stretch the interlaced image in the first direction.
The image display module 1206 is configured to display the stretched interlaced image.
The image compression module 1203 may compress the left-eye image and the right-eye image in proportion to the length-width resolution of the terminal's 3D screen display, so as to facilitate the subsequent image interleaving. The image interleaving module 1204 may interleave the compressed left-eye image and right-eye image in a specific format. The image stretching module 1205 may stretch the interlaced image proportionally according to the aspect ratio of the screen to achieve a full-screen effect. The image display module 1206 may use 3D grating technology and attach a 3D grating film to the display screen, so that a person's left and right eyes receive different left-eye and right-eye image data.
For a description of the image processing apparatus provided by this embodiment, reference can be made to the description of the image processing method above, which is not repeated here.
An embodiment of the present application further provides an image processing method, including: determining a selected area of an image to be processed according to an instruction; and adjusting the display effect of the selected area to the screen-out or screen-in effect.
In an exemplary embodiment, the selected area may include at least one first selected area and at least one second selected area; the display effect of the first selected area is the screen-out effect, and the display effect of the second selected area is the screen-in effect. For example, the first selected area may include the region marked by the dashed box in FIG. 6, and the second selected area may include the region marked by the dash-dot box in FIG. 6.
For a description of the image processing method provided by this embodiment, reference can be made to the example shown in FIG. 3, which is not repeated here.
In addition, an embodiment of the present application further provides a computer-readable medium storing an image processing program which, when executed by a processor, implements the steps of the image processing method described above.
All or some of the steps of the methods disclosed above, and the functional modules or units of the systems and apparatuses disclosed above, may be implemented as software, firmware, hardware, or appropriate combinations thereof. In a hardware implementation, the division between the functional modules or units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or a microprocessor; some or all of the components may also be implemented as hardware; and some or all of the components may also be implemented as integrated circuits, such as application-specific integrated circuits. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, it is well known to those of ordinary skill in the art that communication media typically contain computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Claims (17)

  1. An image processing method, comprising:
    adjusting parallax attribute values of image pixel points in at least one region of an image to be processed, and generating a parallax image of the image to be processed; and
    generating a three-dimensional image according to the image to be processed and the parallax image of the image to be processed.
  2. The method according to claim 1, wherein adjusting the parallax attribute values of image pixel points in at least one region of the image to be processed and generating the parallax image of the image to be processed comprises:
    dividing the image to be processed into at least one region; and
    for a region of the at least one region, determining the parallax attribute values of the image pixel points in the region according to a target display effect of the region.
  3. The method according to claim 2, wherein determining the parallax attribute values of the image pixel points in the region according to the target display effect of the region comprises:
    determining an offset of each image pixel point in the region according to the target display effect of the region; and, for each image pixel point in the region, determining the parallax attribute value of the image pixel point according to the offset of the image pixel point.
  4. The method according to claim 3, wherein determining the parallax attribute value of the image pixel point according to the offset of the image pixel point comprises:
    calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, a distance between a person's left and right eyes, and a pixel density of the image to be processed; or
    calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and a distance between the human eyes and a screen.
  5. The method according to claim 4, wherein calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the pixel density of the image to be processed comprises:
    calculating the parallax attribute value of the image pixel point according to the following formula:
    Figure PCTCN2018104381-appb-100001
    wherein M is the parallax attribute value of the image pixel point, d is the offset of the image pixel point, T is the distance between a person's left and right eyes, and PPI is the pixel density.
  6. The method according to claim 4, wherein calculating the parallax attribute value of the image pixel point according to the offset of the image pixel point, the distance between a person's left and right eyes, and the distance between the human eyes and the screen comprises:
    calculating the parallax attribute value of the image pixel point according to the following formula:
    M = f×T/d;
    wherein M is the parallax attribute value of the image pixel point, d is the offset of the image pixel point, T is the distance between a person's left and right eyes, and f is the distance between the human eyes and the screen.
  7. The method according to claim 3, wherein determining the offset of each image pixel point in the region according to the target display effect of the region comprises:
    when the distance between the human eye and the screen has not changed, determining an initial disparity attribute value of each image pixel point in the region according to the target display effect of the region; and
    after the distance between the human eye and the screen has changed, determining the offset of the image pixel point after the change according to the initial disparity attribute value of the image pixel point, the changed distance between the human eye and the screen, and the spacing between the left and right eyes of a person.
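A minimal sketch of claim 7, assuming the claim 6 relation is the one being inverted: once the viewing distance changes to f', the new offset follows from the initial disparity attribute value M as d' = f' × T / M. Function and parameter names are assumptions.

    def offset_after_distance_change(initial_M, new_eye_to_screen_f, eye_spacing_T=65.0):
        # Invert M = f * T / d for the new viewing distance: d' = f' * T / M
        if initial_M == 0:
            raise ValueError("initial disparity attribute value must be non-zero")
        return new_eye_to_screen_f * eye_spacing_T / initial_M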
  8. The method according to claim 3, wherein determining the offset of each image pixel point in the region according to the target display effect of the region comprises:
    when the target display effect of the region is set to an into-screen effect, determining that the offset d of each image pixel point in the region takes a value greater than 0 and less than T; and
    when the target display effect of the region is set to an out-of-screen effect, determining that the offset d of each image pixel point in the region takes a value greater than T and less than 2T;
    where T is the spacing between the left and right eyes of a person.
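For illustration, the admissible offset intervals of claim 8 can be expressed as follows; the effect labels and the choice to return open-interval endpoints are assumptions.

    def offset_range_for_effect(effect, eye_spacing_T=65.0):
        # Claim 8: into-screen effect -> d in (0, T); out-of-screen effect -> d in (T, 2T)
        if effect == "into_screen":
            return (0.0, eye_spacing_T)
        if effect == "out_of_screen":
            return (eye_spacing_T, 2.0 * eye_spacing_T)
        raise ValueError("unknown target display effect: " + repr(effect))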
  9. The method according to claim 2, wherein adjusting the disparity attribute value of image pixel points in the at least one region of the image to be processed and generating the parallax image of the image to be processed further comprises:
    when no target display effect is set for the region, keeping the original disparity attribute value of each image pixel point in the region unchanged; or updating the disparity attribute value of each image pixel point in the region to a preset value.
  10. The method according to claim 2, wherein determining the disparity attribute value of an image pixel point in the region according to the target display effect of the region comprises:
    determining, according to the target display effect of the region, a voltage to be applied to the lenticular grating on the screen corresponding to the region, so as to shift the alignment direction of the liquid crystal molecules; and determining the disparity attribute value of each image pixel point in the region according to the voltage.
  11. An image processing apparatus, comprising:
    a parallax image generating module, configured to adjust a disparity attribute value of image pixel points in at least one region of an image to be processed and generate a parallax image of the image to be processed; and
    a three-dimensional image generating module, configured to generate a three-dimensional image according to the image to be processed and the parallax image of the image to be processed.
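As a structural sketch only, the two modules of claim 11 could be composed as below; the class, method, and callback names are assumptions.

    class ImageProcessingApparatus:
        def __init__(self, parallax_image_generator, three_d_image_generator):
            # parallax image generating module and three-dimensional image generating module (claim 11)
            self.parallax_image_generator = parallax_image_generator
            self.three_d_image_generator = three_d_image_generator

        def process(self, image, regions):
            parallax_image = self.parallax_image_generator(image, regions)
            return self.three_d_image_generator(image, parallax_image)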
  12. The apparatus according to claim 11, wherein the parallax image generating module is configured to divide the image to be processed into at least one region, and, for one region of the at least one region, determine a disparity attribute value of an image pixel point in the region according to a target display effect set for the region.
  13. The apparatus according to claim 12, wherein the parallax image generating module is configured to determine the disparity attribute value of an image pixel point in the region according to the target display effect set for the region in the following manner:
    determining an offset of each image pixel point in the region according to the target display effect of the region, and, for each image pixel point in the region, determining the disparity attribute value of the image pixel point according to the offset of the image pixel point; or
    determining, according to the target display effect of the region, a voltage to be applied to the lenticular grating on the screen corresponding to the region, so as to shift the alignment direction of the liquid crystal molecules, and determining the disparity attribute value of each image pixel point in the region according to the voltage.
  14. A terminal, comprising: a memory, a processor, and an image processing program stored in the memory and executable on the processor, wherein the image processing program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 10.
  15. A computer-readable medium storing an image processing program, wherein the image processing program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 10.
  16. An image processing method, comprising:
    determining a selected area of an image to be processed according to an instruction; and
    adjusting a display effect of the selected area to an out-of-screen effect or an into-screen effect.
  17. The method according to claim 16, wherein the selected area comprises at least one first selected area and at least one second selected area, the display effect of the first selected area being the out-of-screen effect and the display effect of the second selected area being the into-screen effect.
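For illustration only, a self-contained Python sketch of claims 16 and 17: every pixel of a selected area is given an out-of-screen or into-screen disparity by picking an offset inside the claim 8 interval and applying the claim 6 relation. All names, the mid-interval offset choice, and the millimetre unit convention are assumptions.

    def adjust_selected_area(pixels, effect, eye_to_screen_f=500.0, eye_spacing_T=65.0):
        if effect == "into_screen":
            d = 0.5 * eye_spacing_T            # inside (0, T), per claim 8
        elif effect == "out_of_screen":
            d = 1.5 * eye_spacing_T            # inside (T, 2T), per claim 8
        else:
            raise ValueError("unsupported display effect: " + repr(effect))
        m = eye_to_screen_f * eye_spacing_T / d   # claim 6: M = f * T / d
        return {xy: m for xy in pixels}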
PCT/CN2018/104381 2017-09-11 2018-09-06 Image processing method and device WO2019047896A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710813002.6A CN107767412A (en) 2017-09-11 2017-09-11 A kind of image processing method and device
CN201710813002.6 2017-09-11

Publications (1)

Publication Number Publication Date
WO2019047896A1 true WO2019047896A1 (en) 2019-03-14

Family

ID=61265716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/104381 WO2019047896A1 (en) 2017-09-11 2018-09-06 Image processing method and device

Country Status (2)

Country Link
CN (1) CN107767412A (en)
WO (1) WO2019047896A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767412A (en) * 2017-09-11 2018-03-06 西安中兴新软件有限责任公司 A kind of image processing method and device
CN111556304B (en) * 2020-04-22 2021-12-31 浙江未来技术研究院(嘉兴) Panoramic image processing method, device and system
CN112053360B (en) * 2020-10-10 2023-07-25 腾讯科技(深圳)有限公司 Image segmentation method, device, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100505334B1 (en) * 2003-03-28 2005-08-04 (주)플렛디스 Real-time stereoscopic image conversion apparatus using motion parallaxr
CN202057928U (en) * 2011-04-21 2011-11-30 冠捷显示科技(厦门)有限公司 Novel three-dimensional display panel component
CN102630033A (en) * 2012-03-31 2012-08-08 彩虹集团公司 Method for converting 2D (Two Dimension) into 3D (Three Dimension) based on dynamic object detection
EP2959684A4 (en) * 2013-02-20 2016-11-09 Intel Corp Real-time automatic conversion of 2-dimensional images or video to 3-dimensional stereo images or video
US9591290B2 (en) * 2014-06-10 2017-03-07 Bitanimate, Inc. Stereoscopic video generation
CN105872518A (en) * 2015-12-28 2016-08-17 乐视致新电子科技(天津)有限公司 Method and device for adjusting parallax through virtual reality
CN106131533A (en) * 2016-07-20 2016-11-16 深圳市金立通信设备有限公司 A kind of method for displaying image and terminal

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120019528A1 (en) * 2010-07-26 2012-01-26 Olympus Imaging Corp. Display apparatus, display method, and computer-readable recording medium
CN102761761A (en) * 2011-04-28 2012-10-31 乐金显示有限公司 Stereoscopic image display and method of adjusting stereoscopic image thereof
CN103004217A (en) * 2011-06-08 2013-03-27 松下电器产业株式会社 Parallax image generation device, parallax image generation method, program and integrated circuit
CN103024410A (en) * 2011-09-23 2013-04-03 Lg电子株式会社 Image display apparatus and method for operating the same
CN103108198A (en) * 2011-11-09 2013-05-15 宏碁股份有限公司 Image generation device and image adjusting method
CN103135889A (en) * 2011-12-05 2013-06-05 Lg电子株式会社 Mobile terminal and 3D image control method thereof
CN107767412A (en) * 2017-09-11 2018-03-06 西安中兴新软件有限责任公司 A kind of image processing method and device

Also Published As

Publication number Publication date
CN107767412A (en) 2018-03-06

Similar Documents

Publication Publication Date Title
US10148930B2 (en) Multi view synthesis method and display devices with spatial and inter-view consistency
US9355455B2 (en) Image data processing method and stereoscopic image display using the same
US8791989B2 (en) Image processing apparatus, image processing method, recording method, and recording medium
KR102121389B1 (en) Glassless 3d display apparatus and contorl method thereof
US8514275B2 (en) Three-dimensional (3D) display method and system
US20120229595A1 (en) Synthesized spatial panoramic multi-view imaging
US9041773B2 (en) Conversion of 2-dimensional image data into 3-dimensional image data
US10074343B2 (en) Three-dimensional image output apparatus and three-dimensional image output method
JP2016116162A (en) Video display device, video display system and video display method
KR102174258B1 (en) Glassless 3d display apparatus and contorl method thereof
US8723920B1 (en) Encoding process for multidimensional display
WO2019047896A1 (en) Image processing method and device
US9167237B2 (en) Method and apparatus for providing 3-dimensional image
KR101598055B1 (en) Method for normalizing contents size at multi-screen system, device and computer readable medium thereof
CN107483913A (en) A kind of various dimensions picture-in-picture display methods
US10992927B2 (en) Stereoscopic image display apparatus, display method of liquid crystal display, and non-transitory computer-readable recording medium storing program of liquid crystal display
US9986222B2 (en) Image processing method and image processing device
US9479766B2 (en) Modifying images for a 3-dimensional display mode
US20160014400A1 (en) Multiview image display apparatus and multiview image display method thereof
KR101598057B1 (en) Method for normalizing contents size at multi-screen system, device and computer readable medium thereof
JP6377155B2 (en) Multi-view video processing apparatus and video processing method thereof
KR102143463B1 (en) Multi view image display apparatus and contorl method thereof
JP4129786B2 (en) Image processing apparatus and method, recording medium, and program
KR101376734B1 (en) OSMU( One Source Multi Use)-type Stereoscopic Camera and Method of Making Stereoscopic Video Content thereof
Chappuis et al. Subjective evaluation of an active crosstalk reduction system for mobile autostereoscopic displays

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18853226

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18853226

Country of ref document: EP

Kind code of ref document: A1