WO2018068680A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2018068680A1
WO2018068680A1 (PCT/CN2017/105090; priority CN2017105090W)
Authority
WO
WIPO (PCT)
Prior art keywords
area
image
region
adjacent
overlapping
Prior art date
Application number
PCT/CN2017/105090
Other languages
English (en)
French (fr)
Inventor
傅佳莉
谢清鹏
宋翼
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP17860369.2A priority Critical patent/EP3518177A4/en
Publication of WO2018068680A1 publication Critical patent/WO2018068680A1/zh
Priority to US16/383,022 priority patent/US11138460B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T 3/067 Reshaping or unfolding 3D tree structures onto 2D planes
    • G06T 3/12 Panospheric to cylindrical image transformations
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images

Definitions

  • The present application relates to the field of image processing, and more particularly, to an image processing method and apparatus.
  • Because a spherical panoramic image cannot be conveniently stored and represented, it is usually converted into a two-dimensional planar image before encoding and other processing.
  • The two-dimensional planar image is divided into a plurality of mutually independent regions, the image of each region is encoded separately to obtain encoded data, and the encoded data is then stored or transmitted to the decoding end.
  • By decoding, the pixel values of the pixels of the plurality of regions of the two-dimensional planar image can be obtained, and the two-dimensional planar image is then displayed by the terminal. If the image quality of the adjacent first region and second region in the two-dimensional planar image differs, the user can clearly perceive the difference between the two images when the displayed image switches from the image of the first region to the image of the second region; the display effect is poor and the user experience suffers.
  • The application provides an image processing method and apparatus to improve the image display effect.
  • In a first aspect, an image processing method is provided, comprising: acquiring encoded data of a first region of a two-dimensional planar image and of an adjacent region adjacent to the first region, the two-dimensional planar image being an image obtained by mapping a spherical panoramic image, wherein an overlapping region exists between the first region and the adjacent region; determining pixel values of pixels of the image of the first region according to the encoded data of the image of the first region; determining pixel values of pixels of the image of the adjacent region according to the encoded data of the image of the adjacent region; and determining a target pixel value of a pixel of the overlapping region according to the pixel value of the pixel of the first region at the overlapping region and the pixel value of the pixel of the adjacent region at the overlapping region.
  • The final pixel value of the overlapping region is determined from the pixel values that the different regions yield for the overlapping region, so that the pixel values change relatively smoothly in the transition between adjacent regions, which improves the display effect when the displayed image switches between adjacent regions and improves the user experience.
  • Before acquiring the encoded data of the multiple regions of the two-dimensional planar image, the method further includes: sending first indication information to the encoding end, where the first indication information is used to instruct the encoding end to keep the overlapping region between the first region and the adjacent region when dividing the two-dimensional planar image into regions.
  • The decoding end can thus determine how the two-dimensional planar image is divided into a plurality of regions and notify the encoding end of the division; that is, the manner of dividing the regions can also be determined at the decoding end, which makes the processing of the image more flexible.
  • The first indication information is further used to indicate the size of the overlapping region or the position of the overlapping region relative to the first region.
  • According to the first indication information, the encoding end can determine not only how to divide the two-dimensional planar image into a plurality of regions, but also the size and position of the overlapping region, so that the encoding end can process the image accordingly.
  • The method further includes: receiving second indication information from the encoding end, where the second indication information is used to indicate that the overlapping region exists between the first region and the adjacent region.
  • Determining the target pixel value of the pixel of the overlapping region according to the pixel value of the pixel of the first region at the overlapping region and the pixel value of the pixel of the adjacent region at the overlapping region includes: weighting the pixel value of the pixel of the first region at the overlapping region and the pixel value of the pixel of the adjacent region at the overlapping region to obtain the target pixel value of the pixel of the overlapping region.
  • The final pixel value of a pixel of the overlapping region is thus determined jointly from the pixel values that the adjacent regions yield for the overlapping region, so that the pixel values of the overlapping region transition smoothly between adjacent regions, improving the image display effect.
  • Determining the target pixel value of the pixel of the overlapping region according to the pixel value of the pixel of the first region at the overlapping region and the pixel value of the pixel of the adjacent region at the overlapping region includes: when a first difference is less than a first preset threshold, determining the pixel value of the pixel of the first region or of the adjacent region at the overlapping region as the target pixel value, where the first difference is the difference between the resolution of the image of the first region and the resolution of the image of the adjacent region, or the difference between the code rate of the encoded data of the first region and the code rate of the encoded data of the adjacent region.
  • In this case the pixel values of the pixels of the overlapping region can be determined directly, which improves the processing efficiency of the image.
  • The overlapping region is located between the first region and the adjacent region; the first region is located in the horizontal direction of the adjacent region, or the first region is located in the vertical direction of the adjacent region.
  • The size of the overlapping region is determined according to the size of the two-dimensional planar image.
  • The size of the overlapping region is determined according to the size of the first region or of the adjacent region.
  • The two-dimensional planar image further includes a second region; the first region is inside the second region, and the overlapping region is the part of the second region other than the first region. The second region is the region of the two-dimensional planar image to which the image of a third region of the spherical panoramic image is mapped, the third region being the region in which the image corresponding to a first viewing angle range of the spherical panoramic image is located.
  • The first viewing angle range is an angle value, and the angle value is a divisor of 360 degrees.
  • In a second aspect, an image processing method is provided, comprising: dividing a two-dimensional planar image into a plurality of regions, the plurality of regions including a first region and an adjacent region adjacent to the first region, wherein an overlapping region exists between the first region and the adjacent region, and the two-dimensional planar image is an image obtained by mapping a spherical panoramic image; encoding the image of the first region to obtain encoded data of the first region; and encoding the image of the adjacent region to obtain encoded data of the adjacent region.
  • Compared with the prior art, in which the divided regions have no overlapping regions, dividing the two-dimensional planar image so that an overlapping region exists between adjacent regions allows the decoding end to determine the final pixel values of the overlapping region from the pixel values that the adjacent regions yield there, so that the pixel values change relatively gently in the transition between adjacent regions, improving the display effect of the images corresponding to adjacent regions at switching time and improving the user experience.
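The encoding-side division described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the uniform grid layout, and the symmetric per-side margin are all assumptions; the patent only requires that adjacent regions share an overlapping area.

```python
def divide_with_overlap(width, height, cols, rows, overlap):
    """Divide a width x height image into cols x rows regions, extending each
    region by `overlap` pixels toward its neighbours so that any two adjacent
    regions share an overlapping strip. Returns (left, top, right, bottom)
    rectangles with exclusive right/bottom edges, clipped to the image."""
    regions = []
    base_w = width // cols
    base_h = height // rows
    for r in range(rows):
        for c in range(cols):
            left = max(0, c * base_w - overlap)
            top = max(0, r * base_h - overlap)
            right = min(width, (c + 1) * base_w + overlap)
            bottom = min(height, (r + 1) * base_h + overlap)
            regions.append((left, top, right, bottom))
    return regions

# A 3x3 division like the A..I regions of FIG. 3, with a 10-pixel overlap:
regions = divide_with_overlap(300, 300, 3, 3, 10)
```

Each interior region ends up 20 pixels wider and taller than its base cell, and the strip shared by two horizontal neighbours is 2 × overlap pixels wide.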
  • Before dividing the two-dimensional planar image into multiple regions, the method further includes: receiving first indication information from the decoding end, where the first indication information is used to indicate that the overlapping region should exist between the divided first region and the adjacent region when the two-dimensional planar image is divided into regions.
  • The first indication information is further used to indicate the size of the overlapping region or the position of the overlapping region relative to the first region.
  • The method further includes: sending second indication information to the decoding end, where the second indication information is used to indicate that the overlapping region exists between the first region and the adjacent region.
  • The overlapping region is located between the first region and the adjacent region; the first region is located in the horizontal direction of the adjacent region, or the first region is located in the vertical direction of the adjacent region.
  • The size of the overlapping region is determined according to the size of the two-dimensional planar image.
  • The size of the overlapping region is determined according to the size of the first region or of the adjacent region.
  • The two-dimensional planar image further includes a second region; the first region is inside the second region, and the overlapping region is the part of the second region other than the first region. The second region is the region of the two-dimensional planar image to which the image of a third region of the spherical panoramic image is mapped, the third region being the region in which the image within a first viewing angle range of the spherical panoramic image is located.
  • The first viewing angle range is an angle value, and the angle value is a divisor of 360 degrees.
  • In another aspect, an image processing apparatus is provided, comprising means for performing the method of the first aspect.
  • In another aspect, an image processing apparatus is provided, comprising means for performing the method of the second aspect.
  • In another aspect, an image processing apparatus is provided, comprising a memory for storing a program and a processor for executing the program; when the program is executed, the processor is configured to perform the method of the first aspect.
  • In another aspect, an image processing apparatus is provided, comprising a memory for storing a program and a processor for executing the program; when the program is executed, the processor is configured to perform the method of the second aspect.
  • a computer readable medium storing program code for device execution, the program code comprising instructions for performing the method of the first aspect.
  • a computer readable medium storing program code for device execution, the program code comprising instructions for performing the method of the second aspect.
  • The plurality of regions further includes other regions, and overlapping regions also exist between those other regions.
  • the overlapping region exists between any two adjacent regions of the plurality of regions.
  • the first region and the adjacent region are the same size.
  • the method further includes: determining a second difference between a code rate corresponding to an image of the first area and a code rate corresponding to an image of the adjacent area;
  • Determining the target pixel value of the pixel of the overlapping region according to the pixel value of the pixel of the first region at the overlapping region and the pixel value of the pixel of the adjacent region at the overlapping region includes:
  • the size of the overlap region is fixed. That is to say, the size of the overlapping area may be a fixed area size that has been determined before the encoding and decoding.
  • the shape of any one of the plurality of regions is at least one of a square, a rectangle, a circle, a trapezoid, and an arc. It should be understood that the shape of any one of the plurality of regions may also be other irregular shapes.
  • the two-dimensional planar image is a 2D latitude and longitude map.
  • FIG. 1 is a schematic diagram of a code stream corresponding to a video file.
  • FIG. 2 is a schematic diagram of a spherical panoramic image mapped to a latitude and longitude map.
  • FIG. 3 is a schematic diagram of a sub-area of a two-dimensional planar image.
  • FIG. 4 is a schematic diagram of encoded data corresponding to different sub-regions of a two-dimensional planar image.
  • FIG. 5 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • Figure 6 is a schematic illustration of a first region and an adjacent region of a two-dimensional planar image.
  • Figure 7 is a schematic illustration of a first region and an adjacent region of a two-dimensional planar image.
  • Figure 8 is a schematic illustration of a first region and an adjacent region of a two-dimensional planar image.
  • Figure 9 is a schematic illustration of a first region and an adjacent region of a two-dimensional planar image.
  • FIG. 10 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • Figure 11 is a schematic illustration of the A, B, C, and D regions of a two-dimensional image.
  • Figure 12 is a schematic illustration of a spherical panoramic image mapped to a planar image.
  • FIG. 13 is a schematic block diagram of a system architecture of an image processing method application according to an embodiment of the present application.
  • FIG. 14 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 15 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 16 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • Figure 17 is a schematic block diagram of an image processing apparatus of an embodiment of the present application.
  • the code stream is also called a media representation.
  • the code parameters such as the code rate and resolution of different media representations are generally different.
  • Each media representation can be segmented into a number of small files; these small files are generally called segments.
  • As shown in FIG. 1, after encoding a video file, the encoding device obtains three media representations R1, R2, and R3, where R1 is a high-definition video with a code rate of 4 Mbps (megabits per second), R2 is a standard-definition video with a code rate of 2 Mbps, and R3 is a standard-definition video with a code rate of 1 Mbps; each media representation includes 8 segments.
  • The segments of each media representation can be saved sequentially in one file, or saved separately in different files.
  • The user can select any media representation to play. For example, if the user selects R1, the 4 Mbps HD video is played; if the user selects R2, the 2 Mbps standard-definition video is played (obviously, the playback quality of R1 is better than that of R2).
  • The user can also select some of the segments in R1, R2, and R3 to play. As shown in FIG. 1, the shaded portion marks the segments that the client requests: when playing the video file, the first three segments of R3 are played first, then the fourth to sixth segments of R1, and finally the seventh and eighth segments of R2. That is, the user first requests 1 Mbps standard-definition video, then 4 Mbps HD video, and finally 2 Mbps SD video.
  • When the played video switches between different resolutions, the user can clearly perceive the change in video quality; and if the user is currently paying attention to the image of a certain region, a sudden large change in video quality seriously affects the user's subjective experience.
  • the spherical panoramic image is a 360-degree spherical image that is beyond the normal visual range of the human eye.
  • the panoramic image is first converted into a two-dimensional planar image (the more common form of the two-dimensional planar image is a latitude and longitude image), and then the two-dimensional planar image is subjected to other processing.
  • The left side of FIG. 2 is a spherical panoramic image, and the spherical panoramic image is mapped to obtain the latitude and longitude map on the right side of FIG. 2.
  • The specific process of converting a spherical panoramic image into a 2D latitude and longitude map is: for any point P on the spherical image, the longitude coordinate of P is mapped to the horizontal coordinate of the rectangle, and the latitude coordinate of P is mapped to the vertical coordinate of the rectangle; mapping the other points of the spherical image to the rectangular area in the same way yields a 2:1 2D latitude and longitude map.
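The longitude-to-horizontal and latitude-to-vertical mapping above can be sketched in a few lines. The function name and coordinate conventions (longitude in [-180, 180], latitude in [-90, 90], north at the top of the map) are illustrative assumptions; the patent only specifies the 2:1 equirectangular layout.

```python
def sphere_to_equirect(lon_deg, lat_deg, width):
    """Map a point P on the sphere (longitude/latitude in degrees) to pixel
    coordinates in a 2:1 equirectangular (latitude and longitude) map."""
    height = width // 2                       # 2:1 aspect ratio
    x = (lon_deg + 180.0) / 360.0 * width     # longitude -> horizontal coordinate
    y = (90.0 - lat_deg) / 180.0 * height     # latitude  -> vertical coordinate
    return x, y

# The point on the equator at longitude 0 lands at the centre of the map:
print(sphere_to_equirect(0.0, 0.0, 360))  # (180.0, 90.0)
```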
  • When encoding the latitude and longitude map, it is generally divided into n independent regions, and the image in each region is encoded separately; different coding modes may be adopted for different regions. Specifically, the latitude and longitude map in FIG. 2 can be divided into nine independent regions (A, B, C, D, E, F, G, H, I) as shown in FIG. 3. The nine regions are then encoded separately, and the media representations of the nine regions are shown in FIG. 4.
  • The images of different regions may correspond to different viewing angles; for example, when the user's viewing angle changes, the image viewed by the user switches from the image displayed in the D region to the image displayed in the E region, and then to the image displayed in the F region.
  • Suppose the user first views the images corresponding to the first three segments (S1, S2, and S3) of the D region, and then the fourth and fifth segments (S4 and S5) of the E region.
  • If the code rates or resolutions of the images corresponding to the D region and the E region differ, the user clearly perceives the difference between the images of the two regions when the view switches from the image corresponding to the D region to the image corresponding to the E region; and when the user is paying attention to the image at the boundary between the D region and the E region, the user experience is seriously affected.
  • Similarly, if the code streams or resolutions of the images corresponding to S5 of the E region and S6 of the F region differ, the user clearly perceives the difference in the images when the view switches from the image corresponding to the E region to the image corresponding to the F region.
  • The embodiment of the present application provides an image processing method: when encoding a two-dimensional planar image, the image is divided into a plurality of regions with overlapping regions between adjacent regions, and the images of the multiple regions are then encoded separately to obtain encoded data; different encoding modes and encoding parameters can be used when encoding different regions of the image.
  • By decoding, the decoding end can obtain the pixel values of the pixels of the images of the plurality of regions of the two-dimensional planar image, and then determine the final pixel values of the overlapping region according to the pixel values that the plurality of regions yield there. That is, the pixel values of the pixels of the adjacent regions are considered jointly when determining the pixel values of the pixels of the overlapping region, so that switching between the images corresponding to adjacent regions during playback is smoother and the display effect is improved.
  • FIG. 5 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • the image processing method shown in FIG. 5 can be performed by a decoding end device, and the image processing method includes:
  • The two-dimensional planar image may be a latitude and longitude map as shown in FIG. 2, or a two-dimensional planar image obtained by mapping a spherical panoramic image onto a polyhedron; for example, it may be the image obtained by mapping the spherical panoramic image onto a regular hexahedron and then unfolding the hexahedron.
  • the decoding end device may acquire the encoded data of the two-dimensional planar image from the encoding end, the cloud, or other storage device.
  • the first region and the adjacent region of the first region may be any two adjacent regions of the plurality of regions in the two-dimensional image. It should be understood that other regions may be included in the plurality of regions, and overlapping regions may also exist between the regions.
  • the plurality of regions are N regions, and an overlapping region exists between any two adjacent regions of the N regions.
  • In this way, when the image displayed by the terminal switches from the image corresponding to one region to the image corresponding to another adjacent region, the transition is relatively natural, improving the user experience.
  • The target pixel value of a pixel of the overlapping region is determined according to the pixel values of the pixels of the first region and of the adjacent region at the overlapping region.
  • the A area is the first area
  • the B area is the adjacent area of the first area
  • The A area and the B area both include the area C; that is, the C area is the overlapping region of the A area and the B area.
  • Suppose the pixel values of the 100 pixels of the C region obtained when decoding the A region are the first type of pixel values, and the pixel values of the 100 pixels of the C region obtained when decoding the B region are the second type of pixel values. If the encoding end uses different encoding methods for the images of the A area and the B area, the first type of pixel values and the second type of pixel values obtained at the decoding end are likely to differ.
  • The decoding end may determine the target pixel values of the 100 pixels of the C region according to the first type of pixel values and the second type of pixel values, for example by smoothing them so that the target pixel values of the 100 pixels of the C region lie between the first type of pixel values and the second type of pixel values. In this way, when the image displayed by the terminal switches from the image of the first area to the image of the adjacent area, a smooth transition is achieved, the display effect of the played image is improved, and the user experience is improved.
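One plausible form of the smoothing described above (an assumption here, not a method fixed by the patent) is a linear ramp across the overlap: near the A side of the C region the result stays close to A's decoded values, and near the B side it stays close to B's, so the transition is gradual rather than abrupt.

```python
def ramp_blend(row_a, row_b):
    """Blend one row of overlap pixels decoded from region A with the
    co-located row decoded from region B. A's weight ramps linearly from
    1 at A's side of the strip to 0 at B's side, so every output value
    lies between the two decoded values."""
    n = len(row_a)
    out = []
    for i, (a, b) in enumerate(zip(row_a, row_b)):
        w = 1.0 - i / (n - 1) if n > 1 else 0.5   # A's weight falls across the strip
        out.append(w * a + (1.0 - w) * b)
    return out

# Three overlap pixels, decoded as 100 from A and 90 from B:
print(ramp_blend([100, 100, 100], [90, 90, 90]))  # [100.0, 95.0, 90.0]
```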
  • the decoding end device may be a terminal device or a device specially used for decoding.
  • When the decoding end device is a terminal device, the image may be displayed directly after decoding; when the decoding end device is a device specially used for decoding, the decoded information may be stored or transmitted to the terminal after decoding is completed.
  • The final pixel value of the overlapping area is determined according to the pixel values that the different regions yield for the overlapping area, so that the pixel values change relatively smoothly in the transition between adjacent regions, improving the display effect of the images corresponding to adjacent regions at switching time and improving the user experience.
  • Before acquiring the encoded data of the multiple regions of the two-dimensional planar image, the image processing method of the embodiment of the present application further includes:
  • Sending first indication information to the encoding end, where the first indication information is used to instruct the encoding end to keep the overlapping area between the first area and the adjacent area when dividing the two-dimensional planar image into regions.
  • The decoding end may first determine the division manner for the region division of the two-dimensional planar image; the division manner specifically includes into how many regions the two-dimensional planar image is divided and which regions have overlapping regions between them. The decoding end then sends the first indication information to the encoding end, so that the encoding end divides the two-dimensional planar image according to the division manner determined by the decoding end.
  • the foregoing first indication information is further used to indicate a size of the overlapping area or a position of the overlapping area relative to the first area. That is to say, the first indication information not only indicates which areas of the two-dimensional plane image have overlapping areas, but also indicates the size or position of the overlapping area.
  • In this way, the encoding end can either divide the two-dimensional planar image into multiple regions according to the instruction of the decoding end, or determine by itself how to divide the two-dimensional planar image into multiple regions.
  • the method of the embodiment of the present application further includes:
  • the decoding end determines, according to the second indication information, that there is an overlapping area between the first area and the adjacent area.
  • determining a target pixel value of a pixel point of the overlapping area according to a pixel value of the pixel point of the first area in the overlapping area and a pixel value of the pixel point of the adjacent area in the overlapping area including: The pixel value of the pixel of the first region in the overlap region and the pixel value of the pixel of the adjacent region in the overlap region are weighted to obtain a target pixel value of the pixel of the overlap region.
  • Suppose the overlapping area of the first area and the adjacent area has a total of 9 pixels (only 9 are used here for convenience of description; the number of pixels in an overlapping area is actually much larger than 9).
  • The pixel values of the nine pixels of the overlapping area decoded from the encoded data corresponding to the first region are: 100, 100, 100, 110, 120, 110, 100, 120, 120; the pixel values of the nine pixels of the overlapping area decoded from the encoded data corresponding to the adjacent region are: 90, 90, 90, 90, 90, 90, 100, 100, 100.
  • Smoothing the pixel values from the first region with those from the adjacent region, the final pixel values of the nine pixels of the overlapping area are: (100+90)/2, (100+90)/2, (100+90)/2, (110+90)/2, (120+90)/2, (110+90)/2, (100+100)/2, (120+100)/2, (120+100)/2, i.e. 95, 95, 95, 100, 105, 100, 100, 110, 110.
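The equal-weight smoothing above is just an element-wise average of the two decoded copies of the overlap, which can be checked directly:

```python
# The nine overlap pixels as decoded from each region (values from the example).
first = [100, 100, 100, 110, 120, 110, 100, 120, 120]   # decoded via the first region
adjacent = [90, 90, 90, 90, 90, 90, 100, 100, 100]      # decoded via the adjacent region

# Equal weights (1/2, 1/2): each target value is the mean of the two copies.
target = [(a + b) / 2 for a, b in zip(first, adjacent)]
print(target)  # [95.0, 95.0, 95.0, 100.0, 105.0, 100.0, 100.0, 110.0, 110.0]
```

Other weights (e.g. favouring the higher-quality region) would fit the same "weighted to obtain a target pixel value" wording; the 1/2–1/2 split simply matches the worked example.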
  • the image processing method of the embodiment of the present application further includes:
  • When the first difference is less than the first preset threshold, the pixel value of the pixel of the first area or of the adjacent area at the overlapping area is determined as the target pixel value, where the first difference is the difference between the resolution of the image of the first area and the resolution of the image of the adjacent area, or the difference between the code rate of the encoded data of the first area and the code rate of the encoded data of the adjacent area.
  • Before determining the target pixel value, the method of the embodiment of the present application further includes: determining the first difference between the resolution of the image of the first region and the resolution of the image of the adjacent region;
  • When the first difference is small, the pixel values of the pixels of the first region at the overlapping region tend to be close to the pixel values of the pixels of the adjacent region at the overlapping region.
  • In that case, directly using the pixel values of the first region or of the adjacent region at the overlapping region as the pixel values of the pixels of the overlapping region improves the decoding efficiency.
  • the image processing method of the embodiment of the present application further includes:
  • Determining a target pixel value of the pixel of the overlapping area according to the pixel value of the pixel of the first area in the overlapping area and the pixel value of the pixel of the adjacent area in the overlapping area, including: when the second difference is smaller than the second preset At the threshold value, the pixel value of the pixel of the first region or the adjacent region at the overlapping region is determined as the target pixel value of the pixel of the overlapping region.
  • the pixel value of the pixel of the first area or the adjacent area in the overlapping area may be directly determined as the final pixel value of the pixel of the overlapping area.
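The threshold logic of the two preceding paragraphs can be sketched as below. The function name, the choice of the first region's values when the difference is small, and the fall-back to equal-weight blending are illustrative assumptions consistent with the text.

```python
def pick_overlap_values(vals_first, vals_adj, metric_first, metric_adj, threshold):
    """Choose target pixel values for the overlap. `metric_*` may be the
    resolution or the code rate of each region's encoded data. If the
    difference is below the preset threshold, reuse one region's decoded
    values directly (cheaper); otherwise blend the two decoded copies."""
    if abs(metric_first - metric_adj) < threshold:
        return list(vals_first)                     # values are close; skip blending
    return [(a + b) / 2 for a, b in zip(vals_first, vals_adj)]

# Same resolution on both sides: the first region's values are used as-is.
print(pick_overlap_values([100, 120], [90, 100], 1080, 1080, 1))  # [100, 120]
```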
  • the overlapping area is located between the first area and the adjacent area, the first area is located in a horizontal direction of the adjacent area, or the first area is located in a vertical direction of the adjacent area. It should be understood that the first region may also be in a direction at an angle to the adjacent region.
  • the A area is the first area
  • the B area is the adjacent area
  • the A area and the B area are the areas adjacent in the horizontal direction
  • The A area and the B area form the overlapping area in the horizontal direction.
  • the area A is the first area
  • the D area is the adjacent area
  • The A area and the D area are the areas adjacent in the vertical direction.
  • The A area and the D area form an overlapping area in the vertical direction.
  • one region may form an overlapping region with an adjacent region in the horizontal direction, or may form an overlapping region with an adjacent region in the vertical direction.
  • the A area is the first area
  • the B area and the D area are the adjacent areas of the A area.
  • An overlap region is formed in the horizontal direction between the A region and the B region, and the A region and the D region form an overlap region in the vertical direction.
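Given the regions as rectangles, the overlap that a region forms with a horizontal or vertical neighbour is just their intersection. A small sketch (the rectangle representation and function name are assumptions for illustration):

```python
def overlap_rect(r1, r2):
    """Intersection of two (left, top, right, bottom) rectangles with
    exclusive right/bottom edges, or None if they do not overlap. For a
    region A and its right-hand neighbour B this is the vertical strip they
    share; for A and the region below it, the horizontal strip."""
    left = max(r1[0], r2[0])
    top = max(r1[1], r2[1])
    right = min(r1[2], r2[2])
    bottom = min(r1[3], r2[3])
    if left < right and top < bottom:
        return (left, top, right, bottom)
    return None

a = (0, 0, 110, 100)    # region A, extended to the right by the overlap
b = (90, 0, 200, 100)   # region B, extended to the left by the overlap
print(overlap_rect(a, b))  # (90, 0, 110, 100) -- a 20-pixel-wide shared strip
```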
  • the size of the overlapping area is preset.
• The size of the overlapping area may be preset by the encoder or by the user; for example, the overlapping area may be set to an area of size K × L, where K is 200 pixels and L is 100 pixels.
  • the size of the overlapping area is determined according to the size of the two-dimensional planar image.
• The size of the overlapping area is positively correlated with the size of the two-dimensional planar image: the larger the two-dimensional planar image, the larger the overlapping area.
• The size of the two-dimensional planar image can be multiplied by a certain ratio to determine the size of the overlapping area.
• the size of the two-dimensional planar image is X × Y (X pixels in the horizontal direction and Y pixels in the vertical direction)
• the size of each region of the two-dimensional planar image is M × N (M pixels in the horizontal direction and N pixels in the vertical direction)
• the size of the overlap region is K × L (K pixels in the horizontal direction and L pixels in the vertical direction).
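The proportional sizing described above can be sketched as follows; the 5% ratio is an illustrative assumption (the patent only requires positive correlation), chosen so that K = 200 and L = 100 fall out of a 4000 × 2000 image, matching the example sizes above:

```python
def overlap_size(image_w, image_h, ratio=0.05):
    """Overlap K x L grows linearly with the image size X x Y:
    K = X * ratio, L = Y * ratio. The ratio value is an assumption."""
    return int(image_w * ratio), int(image_h * ratio)

k, l = overlap_size(4000, 2000)   # -> K = 200, L = 100
```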
  • the size of the overlapping area is determined according to the size of the first area or the adjacent area.
  • the size of the overlapping area is positively correlated with the size of the divided area, and the larger the divided area, the larger the overlapping area.
• The size of the first area or the adjacent area can be multiplied by a certain ratio to determine the size of the overlapping area.
• the size of the first area or the adjacent area is M × N (M pixels in the horizontal direction and N pixels in the vertical direction).
• The size of the overlapping area is determined according to a range of viewing angles of the spherical panoramic image. That is to say, when mapping a spherical panoramic image to a planar image, how to divide the regions, which overlapping regions exist between the regions, and the size of the overlapping regions can all be determined according to the viewing angle range in which the user views the spherical panoramic video image.
• When dividing an image into regions, if there is an overlapping area between adjacent areas, the code rate increases when the image is processed. For example, if there is an overlapping area between the first area and the adjacent area, the image in the overlapping area is encoded twice when encoding the first area and the adjacent area, which increases the code rate, and the increased code rate may affect the playback of the image.
• The two-dimensional planar image further includes a second area; the first area is inside the second area, and the overlapping area is the area of the second area other than the first area. The second area is the area to which the image of the third region in the spherical panoramic image is mapped in the two-dimensional planar image, and the third region is the region of the spherical panoramic image whose image lies within the first viewing angle range.
  • the third area is centered on the center of the fourth area in the spherical panoramic image, and is obtained according to the preset first viewing angle range.
  • the first viewing angle range is an angle value
  • the angle value is a divisor of 360 degrees.
  • the angle value can be 60 degrees, 30 degrees, and the like.
  • the third area is first determined according to the first angle of view, and then the third area is mapped to the second area, and the overlapping area of the first area is finally determined according to the second area and the first area.
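A minimal sketch of the divisor constraint on the angle value, assuming the angle tiles the sphere horizontally into a whole number of view regions (the function name and the region-count interpretation are illustrative, not from the patent):

```python
def horizontal_region_count(view_angle_deg):
    """The first viewing angle value must divide 360 degrees evenly so
    that the sphere splits into a whole number of view regions."""
    if 360 % view_angle_deg != 0:
        raise ValueError("view angle must be a divisor of 360 degrees")
    return 360 // view_angle_deg

regions_60 = horizontal_region_count(60)   # 6 view regions around the sphere
regions_30 = horizontal_region_count(30)   # 12 view regions
```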
  • the two-dimensional planar image is divided into nine regions, and there are overlapping regions between each of the two adjacent regions.
• By setting overlapping regions of different sizes, it is found that the code rate of the two-dimensional planar image changes with the size of the overlapping region, as shown in Table 1.
  • the embodiment of the present application can improve the display effect of the image and improve the user experience without significantly increasing the code rate corresponding to the two-dimensional plane image.
• The image processing method of the embodiment of the present application is described in detail above from the perspective of the decoding end.
• The entire flow of the image processing method of the embodiment of the present application is described in detail below from the perspective of the encoding end.
  • the encoding process corresponds to the decoding process.
  • the content already existing at the decoding end is appropriately omitted.
  • FIG. 10 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • the image processing method shown in FIG. 10 can be performed by an encoding device, and the image processing method includes:
• the two-dimensional planar image is an image obtained by mapping a spherical panoramic image.
• When the above image processing method is performed by the encoding end, the encoding end also encodes the images of the other regions among the plurality of regions, finally obtains the encoded data of each region image of the entire two-dimensional planar image, and stores the encoded data or sends it to the decoding end.
  • the decoding end obtains the above encoded data, the encoded data can be decoded.
  • the specific decoding process is shown in the image processing method shown in FIG. 5, and the description is not repeated here.
• The two-dimensional planar image is divided so that an overlapping area exists between adjacent areas. Compared with the prior art, in which the divided areas do not overlap, this allows the decoding end to determine the final pixel value of the overlapping area from the pixel values that the adjacent areas contribute to the overlapping area, so that the change of pixel values in the transition between adjacent areas is relatively gentle, thereby improving the display effect when switching between images of adjacent areas and improving the user experience.
  • the image processing method of the embodiment of the present application further includes:
• The first indication information is used to indicate that the overlapping area exists between the first region and its adjacent region obtained when the two-dimensional planar image is divided into regions.
  • the foregoing first indication information is further used to indicate a size of the overlapping area or a position of the overlapping area relative to the first area.
  • the decoding end can conveniently determine that there is an overlapping area between the first area of the plurality of areas of the two-dimensional image and the adjacent area of the first area.
  • the image processing method of the embodiment of the present application further includes:
• The overlapping area is located between the first area and the adjacent area; the first area is located in the horizontal direction of the adjacent area, or the first area is located in the vertical direction of the adjacent area. It should be understood that the first region may also lie in a direction at an angle to the adjacent region.
  • the size of the overlap region is determined according to the size of the two-dimensional planar image.
  • the size of the overlapping area is determined according to the size of the first area or the adjacent area.
  • the two-dimensional planar image further includes a second area, the first area is inside the second area, the overlapping area is an area other than the first area in the second area, and the second area is a spherical panorama
  • the image of the third region in the image is mapped to the region of the two-dimensional planar image, and the third region is the region in which the image within the first viewing angle range of the spherical panoramic image is located.
  • the third area is centered on the center of the fourth area in the spherical panoramic image, and is obtained according to the preset first viewing angle range.
  • the first viewing angle range is an angle value
  • the angle value is a divisor of 360 degrees.
• The image processing method of the embodiments of the present application has been described in detail above from the perspectives of the decoding end and the encoding end respectively.
• The image processing method of the embodiment of the present application is described in detail below with reference to FIG. 11 and FIG. 12.
  • the decoding end decodes the encoded data to obtain a decoded image.
  • the decoding end may acquire a plurality of encoded data of the two-dimensional planar image from the encoding end or the cloud, and decoding the encoded data may generate a reconstructed image, where the reconstructed image is a two-dimensional planar image composed of a plurality of regions.
  • Decompression methods such as H.264 and H.265 can be used for decoding.
• At the same time as, or before, acquiring the encoded data, the decoding end obtains the overlapping area information generated by the encoding end (the overlapping information is used to indicate how the encoding end divides the two-dimensional planar image into regions, which regions have overlapping areas, and the size and position of each overlapping area), so that after decoding the images of the plurality of areas, the decoding end can determine the position and size of the overlapping areas of the reconstructed image according to the overlapping information, and determine the final pixel value of the pixels of each overlapping area according to the pixel values of the pixels of the different areas in which the overlapping area is located.
• The decoding end obtains reconstructed images of the A, B, C, and D regions from the encoded data corresponding to those regions of the two-dimensional planar image, and determines, according to the overlapping information, that overlapping areas exist between the A area and the C area and between the B area and the D area. The decoding end can then determine the pixel values of the pixels of the overlapping area between the A area and the C area according to the pixel values of the pixels of the A area and the C area; similarly, it can determine the pixel values of the pixels of the overlapping area between the B area and the D area by the same method.
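One plausible way for the decoding end to compute the overlap pixels from both regions' copies is a linear cross-fade. The sketch below (pure Python, hypothetical names) is one instance of the weighting processing described in this application, not the patent's mandated formula:

```python
def blend_overlap(row_a, row_c):
    """Linearly cross-fade across the overlap width: the weight slides
    from 1.0 (pure region A) at the A-side edge down to 0.0 (pure
    region C) at the C-side edge, so pixel values change gently across
    the seam between the two regions."""
    w = len(row_a)
    out = []
    for i in range(w):
        wa = 1.0 - i / (w - 1)              # weight of region A at column i
        out.append(row_a[i] * wa + row_c[i] * (1.0 - wa))
    return out

# one row of a 5-pixel-wide overlap, as decoded from region A and region C
blended = blend_overlap([100.0] * 5, [200.0] * 5)
```

At the edges the blended row equals each region's own value, so there is no abrupt jump where the overlap meets either region.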
  • the images of the multiple regions are spliced into a panoramic image according to the regional location.
  • the images of the regions A, B, C, and D are spliced into a panoramic image according to the distribution positions of the regions.
• The panoramic image obtained in step 303 is displayed on the terminal device, or converted into a spherical image and displayed on the terminal device.
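The splicing of the region images into a panorama can be sketched for a 2×2 grid; the A/B/C/D arrangement below is an assumed layout for illustration, since the patent does not fix the arrangement:

```python
def stitch_2x2(a, b, c, d):
    """Splice four decoded region images (each a list of pixel rows)
    into one panorama by grid position: A top-left, B top-right,
    C bottom-left, D bottom-right (assumed layout)."""
    top = [ra + rb for ra, rb in zip(a, b)]        # join rows side by side
    bottom = [rc + rd for rc, rd in zip(c, d)]
    return top + bottom                            # stack the two halves

# four 2x2 tiles filled with the constants 1..4
tiles = [[[v] * 2 for _ in range(2)] for v in (1, 2, 3, 4)]
pano = stitch_2x2(*tiles)
```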
  • the encoding end converts the spherical panoramic image into a two-dimensional planar image.
  • the spherical panoramic image may be a 360-degree panoramic video image
  • the two-dimensional planar image may be a 2D latitude and longitude image, or a two-dimensional planar image of a polyhedral format obtained by mapping the spherical panoramic image to a polyhedron.
  • the two-dimensional planar image of the polyhedral format may be obtained by first mapping a spherical panoramic image into a regular polyhedron (for example, a regular hexahedron), and then expanding the regular hexahedron to obtain a two-dimensional planar image in a polyhedral format.
  • the encoding end divides the two-dimensional planar image into a plurality of regions including overlapping regions.
• The plurality of regions may have any shape, for example, a square, a rectangle, a circle, a diamond, or another irregular shape; overlapping areas may exist between some of the regions, or between every pair of adjacent regions.
• The encoding end can determine how to divide the regions and which regions have overlapping areas. Alternatively, the manner of dividing is determined by the decoding end, and the encoding end divides the regions according to the indication information sent by the decoding end.
• After dividing the regions, the encoding end may generate overlapping information, where the overlapping information is used to indicate the position and size of the overlapping region, and the encoding end may send the overlapping information to the decoding end together with the encoded data.
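A hypothetical serialization of this overlapping information as JSON (the patent does not specify any bitstream or metadata syntax; the field names here are illustrative assumptions):

```python
import json

def make_overlap_info(region_pairs, overlap_w, overlap_h):
    """Encode which region pairs overlap, plus the overlap size in
    pixels, as a JSON string the encoder could send alongside the
    encoded data."""
    return json.dumps({
        "pairs": region_pairs,                      # e.g. A overlaps C
        "overlap": {"width": overlap_w, "height": overlap_h},
    })

info = make_overlap_info([["A", "C"], ["B", "D"]], 200, 100)
decoded = json.loads(info)   # the decoding end parses it back
```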
  • the encoding end encodes an image of multiple regions.
  • H.264, H.265 and other compression methods can be used.
  • the encoding end may transmit the encoded data to the decoding end, or may store the encoded data in the cloud or other storage device.
• The two-dimensional planar image processed by the image processing method of the embodiment of the present application may also be obtained by mapping the images of different curved surface regions of the spherical panoramic image to the plane according to the curvature of the spherical panoramic image (for the specific mapping process, reference may be made to the patent application number 201610886263.6). Then, the boundary of a certain area of the two-dimensional planar image is expanded according to the preset viewing angle, so that the expanded area overlaps with other adjacent areas, thereby determining the overlapping area. The process of determining the overlapping area is described in detail below with reference to FIG. 12:
• OA is the line of sight along which the spherical image is viewed from the point O.
• The horizontal viewing angle range is θ and the vertical viewing angle range is φ. The image area F' corresponding to the angle of view on the spherical surface is obtained according to the line of sight OA and the viewing angle range (the area to which the dotted line area in the left diagram of FIG. 12 is mapped on the spherical surface).
• The values of θ and φ are chosen so that the area F' covers the area E', that is, the area F' contains the area E'.
• θ and φ can be chosen as angles such as 30°, 45°, and 60° that divide 360° evenly.
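The choice of θ and φ so that F' covers E' might be sketched as follows; the margin value and the snap-to-divisor rule are illustrative assumptions, not details given in this application:

```python
def expanded_view_angles(region_h_deg, region_v_deg, margin_deg):
    """Pick theta/phi slightly larger than the region's own angular
    extent so the mapped area F' fully covers E', then snap each angle
    up to the next divisor of 360 degrees."""
    divisors = [d for d in range(1, 361) if 360 % d == 0]

    def snap(angle):
        # smallest divisor of 360 that is >= the requested angle
        return min(d for d in divisors if d >= angle)

    theta = snap(region_h_deg + margin_deg)
    phi = snap(region_v_deg + margin_deg)
    return theta, phi

theta, phi = expanded_view_angles(40, 25, 5)   # -> 45 and 30 degrees
```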
  • the encoding end encodes images of multiple regions.
  • FIG. 13 is a schematic diagram of an architecture of an application of a coded image processing method according to an embodiment of the present application.
  • the architecture encompasses the entire process of encoding, decoding, and display.
  • the architecture mainly includes:
• Panoramic camera: used for capturing images in 360 degrees and stitching the captured images into a panoramic image or video.
  • the image stitching here can be implemented by a panoramic camera or by a media server.
• Media server: encodes or transcodes the images captured or stitched by the panoramic camera, and transmits the encoded data to the terminal through the network; in addition, the media server can select the image to be transmitted and the quality of the transmitted image according to the user perspective fed back by the terminal.
  • the media server here may be a media source server, a transmission server, or a transcoding server, and the media server may be on the network side;
• Terminal: the terminal here can be VR glasses, a mobile phone, a tablet, a TV, a computer, or any other electronic device that can be connected to the network.
• The encoding and decoding process of the image can be understood as the processing of images in a video, and a video can be understood as a sequence of images acquired at different times; the image processed by the image processing method of the embodiment of the present application may be a single image in a video or a sequence of images constituting a video.
  • FIGS. 14 to 17 can implement the respective steps of the image processing method described in FIGS. 1 to 13, and the duplicated description is appropriately omitted for the sake of brevity.
  • FIG. 14 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • the image processing apparatus 300 shown in FIG. 14 includes:
  • An acquiring module 310 configured to acquire encoded data of a first region of a two-dimensional planar image and an adjacent region adjacent to the first region, where the two-dimensional planar image is an image obtained by mapping a spherical panoramic image, where An overlapping area exists between the first area and the adjacent area;
  • a first determining module 320 configured to determine, according to the encoded data of the image of the first region, a pixel value of a pixel of the image of the first region;
  • a second determining module 330 configured to determine, according to the encoded data of the image of the adjacent area, a pixel value of a pixel of the image of the adjacent area;
  • a third determining module 340 configured to determine, according to a pixel value of a pixel point of the first area in the overlapping area and a pixel value of a pixel point of the adjacent area in the overlapping area, a pixel of the overlapping area Point target pixel value.
• The final pixel value of the overlapping area is determined according to the pixel values of the pixels of the overlapping area in the different areas, so that the change of pixel values in the transition between adjacent areas can be made relatively smooth, thereby improving the display effect when switching between images corresponding to adjacent areas and improving the user experience.
  • the image processing apparatus further includes:
  • the sending module 350 is configured to send first indication information to the encoding end before acquiring the encoded data of the multiple regions of the two-dimensional planar image, where the first indication information is used to indicate that the encoding end is in the When the two-dimensional planar image divides the region, the overlap region exists between the divided first region and the adjacent region.
  • the first indication information is further used to indicate a size of the overlapping area or a location of the overlapping area relative to the first area.
  • the image processing apparatus further includes:
  • the receiving module 360 is configured to receive second indication information from the encoding end, where the second indication information is used to indicate that the overlapping area exists between the first area and the adjacent area.
  • the third determining module 340 is specifically configured to:
• The third determining module 340 is specifically configured to: when the first difference is smaller than the first preset threshold, determine the pixel value of the pixels of the first area or the adjacent area in the overlapping area as the target pixel value, where the first difference is a difference between the resolution of the image of the first area and the resolution of the image of the adjacent area, or a difference between the code rate of the encoded data of the first area and the code rate of the encoded data of the adjacent area.
  • the overlapping area is located between the first area and the adjacent area, the first area is located in a horizontal direction of the adjacent area, or the first area Located in the vertical direction of the adjacent area.
  • the size of the overlapping area is determined according to a size of the two-dimensional planar image.
  • the size of the overlapping area is determined according to a size of the first area or an adjacent area.
  • the two-dimensional planar image further includes a second area, the first area is inside the second area, and the overlapping area is the first area except the first area An area outside the area, the second area is an area in which the image of the third area in the spherical panoramic image is mapped to the area of the two-dimensional planar image, and the third area is an image in the first viewing angle range of the spherical panoramic image The area where you are.
  • the first viewing angle range is an angle value
  • the angle value is a divisor of 360 degrees.
  • FIG. 15 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • the image processing apparatus 400 shown in FIG. 15 includes:
  • a dividing module 410 configured to divide the two-dimensional planar image into a plurality of regions, where the plurality of regions include a first region and an adjacent region adjacent to the first region, the first region and the adjacent There is an overlapping area between the regions, and the two-dimensional planar image is an image obtained by mapping a spherical panoramic image;
  • a first encoding module 420 configured to encode an image of the first area to obtain encoded data of the first area
• The second encoding module 430 is configured to encode an image of the adjacent area to obtain encoded data of the adjacent area.
• The two-dimensional planar image is divided so that an overlapping area exists between adjacent areas. Compared with the prior art, in which the divided areas do not overlap, this allows the decoding end to determine the final pixel value of the overlapping area from the pixel values that the adjacent areas contribute to the overlapping area, so that the change of pixel values in the transition between adjacent areas is relatively gentle, thereby improving the display effect when switching between images of adjacent areas and improving the user experience.
  • the image processing apparatus further includes:
  • the receiving module 440 is configured to receive first indication information from the decoding end, where the first indication information is used to indicate that the first area and the adjacent area are divided when the area is divided into the two-dimensional plane image. The overlapping area exists between them.
  • the first indication information is further used to indicate a size of the overlapping area or a location of the overlapping area relative to the first area.
  • the image processing apparatus further includes:
  • the sending module 450 is configured to send the second indication information to the decoding end, where the second indication information is used to indicate that the overlapping area exists between the first area and the adjacent area.
  • the overlapping area is located between the first area and the adjacent area, the first area is located in a horizontal direction of the adjacent area, or the first area Located in the vertical direction of the adjacent area.
  • the size of the overlapping area is determined according to a size of the two-dimensional planar image.
  • the size of the overlapping area is determined according to a size of the first area or an adjacent area.
  • the two-dimensional planar image further includes a second area, the first area is inside the second area, and the overlapping area is the first area except the first area An area outside the area, the second area is an area in which the image of the third area in the spherical panoramic image is mapped to the area of the two-dimensional planar image, and the third area is an image in the first viewing angle range of the spherical panoramic image The area where you are.
  • the first viewing angle range is an angle value
  • the angle value is a divisor of 360 degrees.
  • FIG. 16 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
  • the image processing apparatus 500 shown in FIG. 16 includes:
  • the memory 510 is configured to store a program.
• The processor 520 is configured to execute a program stored in the memory 510. When the program is executed, the processor 520 is specifically configured to:
  • Determining a target pixel value of a pixel point of the overlapping area according to a pixel value of a pixel point of the first area in the overlapping area and a pixel value of a pixel point of the adjacent area in the overlapping area.
• The final pixel value of the overlapping area is determined according to the pixel values of the pixels of the overlapping area in the different areas, so that the change of pixel values in the transition between adjacent areas can be made relatively smooth, thereby improving the display effect when switching between images corresponding to adjacent areas and improving the user experience.
  • the image processing apparatus 500 further includes:
  • the transceiver 530 is configured to send first indication information to the encoding end before acquiring the encoded data of the multiple regions of the two-dimensional planar image, where the first indication information is used to indicate that the encoding end is in the pair When the planar image is divided into regions, the overlapping region exists between the divided first region and the adjacent region.
  • the first indication information is further used to indicate a size of the overlapping area or a location of the overlapping area relative to the first area.
  • the transceiver 530 is configured to: receive second indication information from the encoding end, where the second indication information is used to indicate that the first area and the adjacent area exist The overlapping area.
• The processor 520 is specifically configured to perform weighting processing on the pixel values of the pixels of the first area in the overlapping area and the pixel values of the pixels of the adjacent area in the overlapping area to obtain the target pixel value of the pixels of the overlapping area.
• When the first difference is smaller than the first preset threshold, the pixel value of the pixels of the first area or the adjacent area in the overlapping area is determined as the target pixel value, where the first difference is a difference between the resolution of the image of the first area and the resolution of the image of the adjacent area, or a difference between the code rate of the encoded data of the first area and the code rate of the encoded data of the adjacent area.
  • the size of the overlapping area is determined according to a size of the two-dimensional planar image.
  • the size of the overlapping area is determined according to a size of the first area or an adjacent area.
  • the two-dimensional planar image further includes a second area, the first area is inside the second area, and the overlapping area is the first area except the first area An area outside the area, the second area is an area in which the image of the third area in the spherical panoramic image is mapped to the area of the two-dimensional planar image, and the third area is an image in the first viewing angle range of the spherical panoramic image The area where you are.
  • the first viewing angle range is an angle value
  • the angle value is a divisor of 360 degrees.
  • FIG. 17 is a schematic block diagram of an image processing apparatus of an embodiment of the present application.
  • the image processing apparatus 600 shown in FIG. 17 includes:
  • the memory 610 is configured to store a program.
  • the processor 620 is configured to execute a program stored in the memory 610.
  • the processor 620 is specifically configured to:
  • the two-dimensional planar image is an image obtained by mapping a spherical panoramic image
• The two-dimensional planar image is divided so that an overlapping area exists between adjacent areas. Compared with the prior art, in which the divided areas do not overlap, this allows the decoding end to determine the final pixel value of the overlapping area from the pixel values that the adjacent areas contribute to the overlapping area, so that the change of pixel values in the transition between adjacent areas is relatively gentle, thereby improving the display effect when switching between images of adjacent areas and improving the user experience.
  • the image processing apparatus 600 further includes:
  • the transceiver 630 is configured to send first indication information to the encoding end before acquiring the encoded data of the multiple regions of the two-dimensional planar image, where the first indication information is used to indicate that the encoding end is in the When the two-dimensional planar image divides the region, the overlap region exists between the divided first region and the adjacent region.
  • the first indication information is further used to indicate a size of the overlapping area or a location of the overlapping area relative to the first area.
  • the transceiver 630 is configured to receive second indication information from an encoding end, where the second indication information is used to indicate that the first area and the adjacent area are Overlapping area.
• The processor 620 is specifically configured to perform weighting processing on the pixel values of the pixels of the first area in the overlapping area and the pixel values of the pixels of the adjacent area in the overlapping area to obtain the target pixel value of the pixels of the overlapping area.
• The processor 620 is specifically configured to: when the first difference is smaller than the first preset threshold, determine the pixel value of the pixels of the first area or the adjacent area in the overlapping area as the target pixel value, where the first difference is a difference between the resolution of the image of the first area and the resolution of the image of the adjacent area, or a difference between the code rate of the encoded data of the first area and the code rate of the encoded data of the adjacent area.
  • the overlapping area is located between the first area and the adjacent area, the first area is located in a horizontal direction of the adjacent area, or the first area Located in the vertical direction of the adjacent area.
  • the size of the overlapping area is determined according to a size of the two-dimensional planar image.
  • the size of the overlapping area is determined according to a size of the first area or an adjacent area.
  • the two-dimensional planar image further includes a second area, the first area is inside the second area, and the overlapping area is the first area except the first area An area outside the area, the second area is an area in which the image of the third area in the spherical panoramic image is mapped to the area of the two-dimensional planar image, and the third area is an image in the first viewing angle range of the spherical panoramic image The area where you are.
  • the first viewing angle range is an angle value
  • the angle value is a divisor of 360 degrees.
• The techniques of the present application can be broadly implemented by a variety of apparatuses or devices, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chipset).
• Various components, modules, or units are described in this application to emphasize functional aspects of an apparatus configured to perform the disclosed techniques, but they do not necessarily need to be implemented by different hardware units. Rather, as described above, the various units may be combined in a codec hardware unit, or provided by a collection of interoperable hardware units (including one or more processors as described above) in combination with suitable software and/or firmware.
  • The terms “system” and “network” are used interchangeably herein. The term “and/or” herein merely describes an association between associated objects and indicates that three relationships may exist: for example, A and/or B may indicate that only A exists, that both A and B exist, or that only B exists. In addition, the character “/” herein generally indicates an “or” relationship between the associated objects.
  • B corresponding to A means that B is associated with A, and B can be determined according to A.
  • determining B from A does not mean that B is determined based only on A; B may also be determined based on A and/or other information.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division into units is merely a logical functional division.
  • in actual implementation there may be other manners of division; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium.
  • the part of the technical solutions of the present application that is essential or that contributes to the prior art, or a part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of this application provide an image processing method and apparatus. The image processing method includes: obtaining encoded data of a first region of a two-dimensional planar image and of an adjacent region neighboring the first region, where the two-dimensional planar image is an image obtained by mapping a spherical panoramic image, and an overlapping region exists between the first region and the adjacent region; determining pixel values of pixels of the image of the first region according to the encoded data of the image of the first region; determining pixel values of pixels of the image of the adjacent region according to the encoded data of the image of the adjacent region; and determining target pixel values of pixels of the overlapping region according to the pixel values of the pixels of the first region in the overlapping region and the pixel values of the pixels of the adjacent region in the overlapping region. The embodiments of this application can improve the image display effect.

Description

图像处理方法和装置
本申请要求于2016年10月13日提交中国专利局、申请号为201610896459.3、申请名称为“图像处理方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理领域,并且更具体地,涉及一种图像处理方法和装置。
背景技术
由于球形全景图像无法方便的存储和表示,通常需要将球形全景图像转化为二维平面图像再进行编码等处理。现有技术在对二维平面图像进行编码时是将二维平面图像划分为多个相互独立的区域,然后分别对这些区域的图像进行编码得到编码数据,最后将编码数据存储起来或者传输到解码端。
当解码端接收到编码数据后,通过解码可以得到二维平面图像的多个区域的像素点的像素值,并通过终端将二维平面图像显示出来。如果二维平面图像中的相邻的第一区域和第二区域的图像的质量不同,那么当终端显示的图像从第一区域的图像切换到第二区域的图像时,用户会明显的感觉到第一区域图像和第二区域图像的差异,图像显示效果较差,影响用户体验。
发明内容
本申请提供一种图像处理方法和装置,以提高图像显示效果。
第一方面,提供了一种图像处理方法,该方法包括:获取二维平面图像的第一区域和与所述第一区域相邻的相邻区域的编码数据,所述二维平面图像由球形全景图像映射得到的图像,其中,所述第一区域和所述相邻区域之间存在重叠区域;根据所述第一区域的图像的编码数据确定所述第一区域的图像的像素点的像素值;根据所述相邻区域的图像的编码数据确定所述相邻区域的图像的像素点的像素值;根据所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值。
根据不同区域在重叠区域的像素点的像素值来确定重叠区域的最终像素值,能够使得相邻区域之间过渡时像素值的变化比较平缓,从而提高了相邻区域对应的图像在切换时的显示效果,提高了用户体验。
结合第一方面,在第一方面第一种实现方式中,在获取所述二维平面图像的多个区域的编码数据之前,所述方法还包括:向编码端发送第一指示信息,所述第一指示信息用于指示所述编码端在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。当上述方法由解码端设备执行时,解码端可以确定如何将二维平面图像划分为多个区域,并将划分区域的信息通知编码端,也就是说在解码端也可以确定划分区域的方式,这样对图像的处理更加灵活。
结合第一方面的第一种实现方式,在第一方面的第二种实现方式中,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
使得编码端不仅能够根据第一指示信息确定如何将二维平面图像划分为多个区域,而且还可以确定重叠区域的大小和位置,便于编码端对图像进行图像处理。
结合第一方面,在第一方面的第三种实现方式中,所述方法还包括:接收来自编码端的第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
结合第一方面,以及第一方面的第一种至第三种实现方式中的任意一种,在第一方面的第四种实现方式中,根据所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值,包括:对所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值进行加权处理,得到所述重叠区域的像素点的目标像素值。
通过相邻区域在重叠区域的像素点的像素值来确定重叠区域的像素点的最终像素值,使得重叠区域的像素点的像素值能够在相邻区域之间实现平滑的过渡,提高了图像的显示效果。
结合第一方面,以及第一方面的第一种至第三种实现方式中的任意一种,在第一方面的第五种实现方式中,根据所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值,包括:当第一差值小于第一预设阈值时,将所述第一区域或者相邻区域在所述重叠区域的像素点的像素值确定为所述目标像素值,所述第一差值为第一区域的图像的分辨率与所述相邻区域的图像的分辨率之间的差值,或者所述第一区域的编码数据的码率与所述相邻区域的编码数据的码率之间的差值。
当相邻区域图像的分辨率比较接近时,可以直接确定重叠区域的像素点的像素值,提高了对图像的处理效率。
结合第一方面,以及第一方面的第一种至第五种实现方式中的任意一种,在第一方面的第六种实现方式中,所述重叠区域位于所述第一区域和所述相邻区域之间,所述第一区域位于所述相邻区域的水平方向上,或者所述第一区域位于所述相邻区域的竖直方向上。
结合第一方面,以及第一方面的第一种至第六种实现方式中的任意一种,在第一方面的第七种实现方式中,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
结合第一方面,以及第一方面的第一种至第六种实现方式中的任意一种,在第一方面的第八种实现方式中,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
结合第一方面,以及第一方面的第一种至第五种实现方式中的任意一种,在第一方面的第九种实现方式中,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内对应的图像所在的区域。
结合第一方面的第九种实现方式,在第一方面的第十种实现方式中,所述第一视角范围为角度值,所述角度值为360度的约数。
第二方面,提供了一种图像处理方法,该方法包括:将二维平面图像划分为多个区域,所述多个区域包含第一区域和与所述第一区域相邻的相邻区域,所述第一区域和所述相邻区域之间存在重叠区域,所述二维平面图像是球形全景图像映射得到的图像;对所述第一区域的图像进行编码,得到所述第一区域的编码数据;对所述相邻区域的图像进行编码,得到所述相邻区域的编码数据。
在划分区域时,将二维平面图像划分成相邻区域之间存在重叠区域的图像,与现有技术中划分的区域没有重叠区域相比,能够使得解码端根据相邻区域在重叠区域的像素点的像素值来确定重叠区域的最终像素值,使得相邻区域之间过渡时像素值的变化比较平缓,从而提高了相邻区域对应的图像在切换时的显示效果,提高了用户体验。
结合第二方面,在第二方面的第一种实现方式中,在所述将二维平面图像划分为多个区域之前,所述方法还包括:接收来自解码端的第一指示信息,所述第一指示信息用于指示在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。
结合第二方面的第一种实现方式,在第二方面的第二种实现方式中,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
结合第二方面,在第二方面的第三种实现方式中,所述方法还包括:向所述解码端发送第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
结合第二方面,以及第二方面的第一种至第三种实现方式中的任意一种,在第二方面的第四种实现方式中,所述重叠区域位于所述第一区域和所述相邻区域之间,所述第一区域位于所述相邻区域的水平方向上,或者所述第一区域位于所述相邻区域的竖直方向上。
结合第二方面,以及第二方面的第一种至第四种实现方式中的任意一种,在第二方面的第五种实现方式中,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
结合第二方面,以及第二方面的第一种至第四种实现方式中的任意一种,在第二方面的第六种实现方式中,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
结合第二方面,以及第二方面的第一种至第三种实现方式中的任意一种,在第二方面的第七种实现方式中,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内的图像所在的区域。
结合第二方面的第七种实现方式,在第二方面的第八种实现方式中,所述第一视角范围为角度值,所述角度值为360度的约数。
第三方面,提供一种图像处理装置,所述图像处理装置包括用于执行第一方面的方法的模块。
第四方面,提供一种图像处理装置,所述图像处理装置包括用于执行第二方面的方法的模块。
第五方面,提供一种图像处理装置,包括存储器和处理器,所述存储器用于存储程序,所述处理器用于执行程序,当所述程序被执行时,所述处理器用于执行所述第一方面中的方法。
第六方面,提供一种图像处理装置,包括存储器和处理器,所述存储器用于存储程序,所述处理器用于执行程序,当所述程序被执行时,所述处理器用于执行所述第二方面中的方法。
第七方面,提供一种计算机可读介质,所述计算机可读介质存储用于设备执行的程序代码,所述程序代码包括用于执行第一方面中的方法的指令。
第八方面,提供一种计算机可读介质,所述计算机可读介质存储用于设备执行的程序代码,所述程序代码包括用于执行第二方面中的方法的指令。
在某些实现方式中,所述多个区域还包括其它区域,所述其它区域之间存在所述重叠区域。
在某些实现方式中,所述多个区域中的任意两个相邻区域之间均存在所述重叠区域。
在某些实现方式中,所述第一区域和所述相邻区域的大小相同。
在某些实现方式中,所述方法还包括:确定所述第一区域的图像对应的码率与所述相邻区域的图像对应的码率的第二差值;
所述根据所述第一区域的图像在所述重叠区域的像素点的像素值以及所述相邻区域的图像在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值,包括:
当所述第二差值小于第二预设阈值时,将所述第一区域或者相邻区域的图像在所述重叠区域的像素点的像素值确定为所述重叠区域的像素点的目标像素值。
在某些实现方式中,所述重叠区域的大小是固定的。也就是说,该重叠区域的大小可以是在编解码之前就已经确定好了的固定区域大小。
在某些实现方式中,所述多个区域中的任意一个区域的形状为正方形、矩形、圆形、梯形中以及弧形区域中的至少一种。应理解,所述多个区域中的任意一个区域的形状还可以为其它不规则的形状。
在某些实现方式中,所述二维平面图像为2D经纬图。
附图说明
图1是视频文件对应的码流示意图。
图2是球形全景图像映射到经纬图的示意图。
图3是二维平面图像的子区域的示意图。
图4是二维平面图像的不同子区域对应的编码数据的示意图。
图5是本申请实施例的图像处理方法的示意性流程图。
图6是二维平面图像的第一区域和相邻区域的示意图。
图7是二维平面图像的第一区域和相邻区域的示意图。
图8是二维平面图像的第一区域和相邻区域的示意图。
图9是二维平面图像的第一区域和相邻区域的示意图。
图10是本申请实施例的图像处理方法的示意性流程图。
图11是二维图像的A、B、C、D区域的示意图。
图12是球形全景图像映射到平面图像的示意图。
图13是本申请实施例的图像处理方法应用的系统架构的示意性框图。
图14是本申请实施例的图像处理装置的示意性框图。
图15是本申请实施例的图像处理装置的示意性框图。
图16是本申请实施例的图像处理装置的示意性框图。
图17是本申请实施例的图像处理装置的示意性框图。
具体实施方式
下面将结合附图,对本申请中的技术方案进行描述。
为了更好地理解本申请实施例的图像处理方法和图像处理装置,下面先结合图1至图4对现有技术中视频编解码的相关内容进行简单的介绍。
在对图像进行编码处理时通常会制作不同版本的码流,码流也被称为媒体表示(representation),不同媒体表示的码率以及分辨率等编码参数一般不相同,每个媒体表示可以分割成多个小的文件,这些小的文件一般称为分段(segment)。如图1所示,编码设备在对某视频文件编码后得到了三个媒体表示R1、R2和R3,其中R1是码率为4MBPS(每秒兆比特)的高清视频,R2是码率为2MBPS的标清视频,R3是码率为1MBPS的标清视频,每个媒体表示均包括8个分段。每个媒体表示的分段可以依次保存在一个文件中,也可以分开保存在不同的文件中。在播放该视频时用户可以选择任意一个媒体表示来播放,例如,如果用户选择R1,那么播放的就是4MBPS的高清视频,如果用户选择R2,那么播放的就是2MBPS的标清视频(显然,选择R1的播放效果要好于选择R2的播放效果)。此外,用户还可以选择R1、R2以及R3中的部分分段来播放,如图1所示,阴影部分是客户请求播放的分段,在播放该视频文件时,首先播放R3的前三个分段,然后播放R1的第四个至第六个分段,最后再播放R2的第七个和第八个分段,也就是说,用户开始请求播放的是1MBPS的标清视频,接下来请求播放的是4MBPS的高清视频,最后请求播放的是2MBPS的标清视频。当播放的视频在不同的分辨率之间切换时,由于视频质量的差异较大,用户能够明显感觉到视频质量的变化,并且,如果用户当前正在关注某个区域的图像,视频质量突然产生很大的变化,那么将会严重影响用户的主观体验。
随着虚拟现实(Virtual Reality,VR)设备的兴起,开始出现了球形全景图像,球形全景图像是一种360度的球形图像,它超出了人类眼睛的正常视觉范围。在对球形全景图像进行编码的过程中通常先将全景图像先转换为二维平面图像(二维平面图像较为常见的形式是经纬图),然后再对二维平面图像进行其它处理。如图2所示,图2左侧是一个球形全景图像,对该球形全景图像进行映射可以得到图2右侧的经纬图。将球形全景图像转化为2D经纬图的具体过程为:对于球面图像上的任意一点P,在映射图像时,将P点的经度坐标映射到矩形的水平坐标,将P点的维度坐标映射到矩形的垂直坐标,按照同样的方式,将球面图像上的其它点都映射到矩形区域中就得到了一张2:1的2D经纬图。
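上述经度、纬度到矩形坐标的映射可以用如下示意性的Python代码表示(这只是按上文坐标约定给出的一个最小示例,函数名为说明而假设,并非本申请或任何编码器的实现):

```python
def sphere_to_equirect(lon_deg, lat_deg, width, height):
    """将球面上一点P的经度、纬度映射到2:1经纬图的像素坐标。

    lon_deg: 经度,取值范围 [-180, 180);lat_deg: 纬度,取值范围 [-90, 90]。
    width, height: 经纬图的宽和高,满足 width == 2 * height。
    """
    assert width == 2 * height, "2D经纬图的宽高比应为2:1"
    x = (lon_deg + 180.0) / 360.0 * width    # 经度 -> 矩形的水平坐标
    y = (90.0 - lat_deg) / 180.0 * height    # 纬度 -> 矩形的垂直坐标(北极映射到顶部)
    return x, y

# 例如:经度0度、纬度0度的点映射到图像中心
print(sphere_to_equirect(0.0, 0.0, 1024, 512))  # -> (512.0, 256.0)
```

按同样的方式把球面上的每一点都映射到矩形区域,即得到正文所述的2:1的2D经纬图。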
在对经纬图进行编码时,一般会将该经纬图划分为n个独立的区域,然后对每个区域中的图像进行单独的编码,对于不同的区域可能采用不同的编码方式。具体来说,可以将图2中的经纬图划分成如图3所示的9个独立区域(A、B、C、D、E、F、G、H、I), 接下来,对这九个区域分别进行编码,得到这九个区域的媒体表示如图4所示。
终端在播放图2所示的球形全景图像时,不同区域的图像可能对应不同的视角,例如,当用户视角发生变化,使得用户看到的图像从D区域显示的图像切换到E区域显示的图像,然后再切换到F区域显示的图像,如图4所示,用户看到的是D区域的前三个分段(S1、S2和S3)对应的图像,E区域的第四个和第五个分段(S4和S5)对应的图像,以及F区域的第六个至第八个分段(S6、S7和S8)对应的图像。如果D区域中的S3与E区域的S4的码流或者对应的图像的分辨率不同,那么用户在观看到D区域对应的图像切换到E区域对应的图像时,会明显地感觉到两个区域对应的图像的差异,并且,当用户正在关注D区域和E区域交界的图像时,会严重影响用户体验。同样,如果E区域中的S5与F区域的S6的码流或者对应的图像的分辨率不同时,那么用户在观看到E区域对应的图像切换到F区域对应的图像时也会明显地感觉到图像的差异。
因此,本申请实施例提供了一种图像处理方法,在对二维平面图像进行编码时将二维平面图像划分成相邻区域间包含重叠区域的多个区域,然后再对多个区域的图像分别进行编码得到编码数据,并且在对不同区域图像进行编码时可以采用不同的编码方式和编码参数。解码端在对编码数据进行解码后,可以得到二维平面图像的多个区域的图像的像素点的像素值,然后再根据多个区域的像素点的像素值来确定重叠区域的像素点的最终像素值,也就是说,在确定重叠区域的像素点的像素值时综合考虑了相邻区域的像素点的像素值,这样在播放图像时就能使得相邻区域对应的图像在切换时更平滑,提高了显示效果。
下面结合图5至图13对本申请实施例的图像处理方法进行详细的介绍。
图5是本申请实施例的图像处理方法的示意性流程图。图5所示的图像处理方法可以由解码端设备执行,该图像处理方法包括:
110、获取二维平面图像的第一区域和与所述第一区域相邻的相邻区域的编码数据,所述二维平面图像由球形全景图像映射得到的图像,其中,所述第一区域和所述相邻区域之间存在重叠区域;
该二维平面图像可以是如图2所示的经纬图,也可以是球形全景图像映射到多面体上得到的二维平面图像,例如,该二维平面图像可以是球形全景图像映射到正六面体上,并将该正六面体展开后得到的图像。
当上述图像处理方法由解码端设备执行时,解码端设备可以从编码端、云端或者其它存储设备获取二维平面图像的编码数据。
120、根据第一区域的图像的编码数据确定所述第一区域的图像的像素点的像素值。
130、根据相邻区域的图像的编码数据确定相邻区域的图像的像素点的像素值。
上述第一区域以及第一区域的相邻区域可以是上述二维图像中的多个区域中的任意相邻的两个区域。应理解,上述多个区域中还可以包含其它区域,这些区域之间也可以存在重叠区域。
优选地,上述多个区域为N个区域,N个区域中任意相邻的两个区域之间均存在重叠区域。
通过在相邻区域之间设置重叠区域,使得终端显示的图像从一个区域对应的图像切换到另一个相邻区域对应的图像时,过渡比较自然,提高用户体验。
140、根据第一区域在重叠区域的像素点的像素值以及相邻区域在所述重叠区域的像素点的像素值,确定重叠区域的像素点的目标像素值。
例如,如图6所示,A区域为第一区域,B区域为第一区域的相邻区域,A区域和B区域均包含区域C,也就是说,C区域为A区域和B区域的重叠区域。假设C区域中共有100个像素点,对A区域进行解码时得到的C区域的100个像素点的像素值为第一类像素值,对B区域进行解码时得到的C区域的100个像素点的像素值为第二类像素值,如果编码端对A区域和B区域中的图像采用的编码方式不同,那么很可能导致解码端解码后得到第一类像素值与第二类像素值不同。这时,解码端可以根据第一类像素值和第二类像素值来确定C区域中的100个像素点的目标像素值,例如,将第一类像素值和第二类像素值进行平滑滤波处理,使得C区域中的100个像素点的目标像素值介于第一类像素值和第二类像素值之间,这样,当终端显示的图像从第一区域的图像切换到相邻区域的图像时,能够实现平滑过渡,提高播放图像的显示效果,提高用户体验。
上述解码端设备可以是终端设备,也可以是专门用于解码的设备,当解码端设备是终端设备时,在解码后可以直接显示图像,当解码端设备是专门用于解码的设备时,解码端设备在完成解码后可以将解码信息存储起来或者传输给终端。
本申请实施例中,根据不同区域在重叠区域的像素点的像素值来确定重叠区域的最终像素值,能够使得相邻区域之间过渡时像素值的变化比较平缓,从而提高了相邻区域对应的图像在切换时的显示效果,提高了用户体验。
可选地,作为一个实施例,在获取二维平面图像的多个区域的编码数据之前,本申请实施例的图像处理方法还包括:
向编码端发送第一指示信息,该第一指示信息用于指示编码端在对二维平面图像划分区域时,划分得到的第一区域和相邻区域之间存在上述重叠区域。
也就是说,解码端可以先确定对二维平面图像进行区域划分的划分方式,该划分方式具体包括将二维平面图像划分为几个区域,哪些区域之间存在重叠区域。然后,解码端向编码端发送第一指示信息,使得编码端按照解码端确定的划分方式对二维平面图像进行划分。
可选地,上述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。也就是说第一指示信息不仅指示了二维平面图像的哪些区域之间存在重叠区域,而且还指示了重叠区域的大小或者位置。
应理解,编码端不仅可以根据解码端的指示将二维平面图像划分为多个区域,也可以自行确定如何将二维平面图像划分为多个区域。
可选地,作为一个实施例,本申请实施例的方法还包括:
接收编码端发送的第二指示信息,该第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
解码端根据上述第二指示信息确定第一区域与相邻区域之间存在重叠区域。
可选地,作为一个实施例,根据第一区域在重叠区域的像素点的像素值以及相邻区域在重叠区域的像素点的像素值,确定重叠区域的像素点的目标像素值,包括:对第一区域在重叠区域的像素点的像素值以及相邻区域在重叠区域的像素点的像素值进行加权处理,得到重叠区域的像素点的目标像素值。
例如,第一区域和相邻区域中的重叠区域共有9个像素点(这里为了描述方便仅以9 个像素点为例,实际上重叠区域的像素点的个数要远远大于9个)。其中,对第一区域对应的编码数据进行解码后得到第一区域在重叠区域中的9个像素点的像素值分别为:100,100,100,110,120,110,100,120,120;对相邻区域对应的编码数据进行解码后得到相邻区域在重叠区域中的9个像素点的像素值分别为:90,90,90,90,90,90,100,100,100。对第一区域中的这些像素点的像素值和相邻区域中的这些像素点的像素值进行平滑处理,得到重叠区域的这9个像素点的最终像素值分别为:(100+90)/2,(100+90)/2,(100+90)/2,(110+90)/2,(120+90)/2,(110+90)/2,(100+100)/2,(100+120)/2,(100+100)/2。
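上述加权处理可以用如下示意性的Python代码表示(仅为按正文数值给出的最小示例,权重的选取、滤波方式等在实际实现中可以不同):

```python
def blend_overlap(px_first, px_adjacent, w_first=0.5):
    """对第一区域和相邻区域在重叠区域的同一批像素点的像素值做加权处理。

    px_first, px_adjacent: 两个区域解码后在重叠区域逐点对应的像素值序列。
    w_first: 第一区域的权重;相邻区域的权重为 1 - w_first。
    """
    return [w_first * a + (1.0 - w_first) * b
            for a, b in zip(px_first, px_adjacent)]

first = [100, 100, 100, 110, 120, 110, 100, 120, 120]
adjacent = [90, 90, 90, 90, 90, 90, 100, 100, 100]
# 权重各取0.5时,等价于正文中 (100+90)/2 等的平均运算
print(blend_overlap(first, adjacent))
# -> [95.0, 95.0, 95.0, 100.0, 105.0, 100.0, 100.0, 110.0, 110.0]
```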
可选地,作为一个实施例,本申请实施例的图像处理方法还包括:
根据第一区域在重叠区域的像素点的像素值以及相邻区域在重叠区域的像素点的像素值,确定重叠区域的像素点的目标像素值,包括:
当第一差值小于第一预设阈值时,将第一区域或者相邻区域在重叠区域的像素点的像素值确定为目标像素值,第一差值为第一区域的图像的分辨率与相邻区域的图像的分辨率之间的差值,或者第一区域的编码数据的码率与相邻区域的编码数据的码率之间的差值。
应理解,在将第一区域或者相邻区域在重叠区域的像素点的像素值确定为目标像素值之前,本申请实施例的方法还包括:确定第一区域的图像的分辨率与相邻区域的图像的分辨率的第一差值;
当第一区域的图像的分辨率与相邻区域图像的分辨率比较接近时,第一区域在重叠区域的像素点的像素值往往与相邻区域在重叠区域的像素点的像素值也比较接近,这时直接将第一区域或者相邻区域在重叠区域的像素值作为重叠区域的像素点的像素值,能够提高解码效率。
可选地,本申请实施例的图像处理方法还包括:
确定第一区域的图像对应的码率与相邻区域的图像对应的码率的第二差值;
根据第一区域在重叠区域的像素点的像素值以及相邻区域在重叠区域的像素点的像素值,确定重叠区域的像素点的目标像素值,包括:当第二差值小于第二预设阈值时,将第一区域或者相邻区域在重叠区域的像素点的像素值确定为重叠区域的像素点的目标像素值。
除了比较第一区域的图像的分辨率与相邻区域的图像的分辨率的差异外,还可以比较第一区域的图像对应的码率与相邻区域的图像对应的码率的差异,当分辨率或者码率差异较小时都可以直接将第一区域或者相邻区域在重叠区域的像素点的像素值确定为重叠区域的像素点的最终像素值。
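上述“差值小于阈值时直接取一侧像素值,否则做平滑处理”的逻辑可以概括为如下示意性的Python代码(阈值与加权方式均为示例假设):

```python
def target_pixels(px_first, px_adjacent, quality_first, quality_adjacent, threshold):
    """确定重叠区域像素点的目标像素值。

    quality_* 既可以是两个区域图像的分辨率,也可以是两个区域编码数据的码率。
    差值小于阈值时直接取第一区域的像素值,否则取两侧的平均值做平滑过渡。
    """
    if abs(quality_first - quality_adjacent) < threshold:
        return list(px_first)  # 差异较小:直接采用一侧的像素值,省去加权计算
    return [(a + b) / 2.0 for a, b in zip(px_first, px_adjacent)]
```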
可选地,上述重叠区域位于第一区域和相邻区域之间,第一区域位于相邻区域的水平方向上,或者第一区域位于相邻区域的竖直方向上。应理解,第一区域还可以是位于与相邻区域成一定夹角的方向上。
下面结合图7至图9对第一区域以及第一区域的相邻区域的重叠区域进行详细的说明。
在图7中,A区域为第一区域,B区域为相邻区域,A区域和B区域为在水平方向上相邻的区域,A和B在水平方向形成重叠区域。在图8中,区域A为第一区域,D区域为相邻区域,A区域和D区域为在垂直方向上相邻的区域,A区域和D区域在垂直方向上形成重叠区域。
应理解,在二维平面图像中,一个区域既可以与水平方向上的相邻区域形成重叠区域,也可以与在垂直方向上的相邻区域形成重叠区域。
在图9中,A区域为第一区域,B区域和D区域为A区域的相邻区域。A区域和B区域之间在水平方向上形成重叠区域,A区域和D区域在垂直方向上形成重叠区域。
可选地,作为一个实施例,上述重叠区域的大小是预设的。具体地,重叠区域的大小可以是编码器或者是用户预先设置的,例如,可以将重叠区域设置为K×L大小的区域,其中,K为200像素,L为100像素。
可选地,作为一个实施例,上述重叠区域的大小是根据所述二维平面图像的大小确定的。
具体地,重叠区域的大小与二维平面图像的大小正相关,二维平面图像越大重叠区域也就越大。在确定重叠区域的大小时,可以将二维平面图像乘以一定的比例确定重叠区域的大小。例如,二维平面图像的大小为X×Y(水平方向为X像素,垂直方向为Y像素),二维平面图像的每个区域的大小为M×N(水平方向为M像素,垂直方向为N像素),重叠区域的大小为K×L(水平方向为K像素,垂直方向为L像素)。那么,当重叠区域为水平重叠区域时,K=1/10*X,L=N;而当重叠区域为垂直重叠区域时,K=M,L=1/9*Y。
可选地,作为一个实施例,上述重叠区域的大小是根据第一区域或者相邻区域的大小确定的。
具体地,重叠区域的大小与划分区域的大小正相关,划分区域越大重叠区域也就越大。在确定重叠区域的大小时,可以将二维平面图像乘以一定的比例确定重叠区域的大小。例如,第一区域或者相邻区域的大小为M×N(水平方向为M像素,垂直方向为N像素)。那么,当重叠区域为水平重叠区域时,K=1/5*M,L=N;而当重叠区域为垂直重叠区域时,K=M,L=1/4*N。
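按上述两种比例关系计算重叠区域大小K×L的过程,可以用如下示意性的Python代码表示(比例系数取正文中给出的数值,整除取整方式为示例假设):

```python
def overlap_size(x, y, m, n, horizontal, by_image):
    """按正文给出的比例计算重叠区域大小 K×L(单位:像素)。

    x, y: 二维平面图像的宽和高;m, n: 单个划分区域的宽和高。
    horizontal: True 表示水平重叠区域,False 表示垂直重叠区域。
    by_image: True 时按二维平面图像大小确定,False 时按划分区域大小确定。
    """
    if horizontal:                        # 水平重叠区域:K 按比例,L 取区域高度
        k = x // 10 if by_image else m // 5
        return k, n
    l = y // 9 if by_image else n // 4    # 垂直重叠区域:K 取区域宽度,L 按比例
    return m, l

print(overlap_size(3000, 1800, 1000, 600, True, True))    # -> (300, 600)
print(overlap_size(3000, 1800, 1000, 600, False, False))  # -> (1000, 150)
```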
可选地,作为一个实施例,上述重叠区域的大小是根据球形全景图像的视角范围确定的。也就是说,在将球形全景图像映射到平面图像时,可以根据用户观看球形全景视频图像的视角范围来确定如何划分区域,以及哪些区域之间存在重叠区域,重叠区域的大小是多少。
在对图像进行区域划分时,如果相邻区域之间存在重叠区域时,会导致对图像进行处理时码率会有一定的增加。例如,第一区域和相邻区域之间存在重叠区域,那么在对第一区域和相邻区域进行编码时会重复对重叠区域中的图像进行编码,因此会导致码率的增加,而码率的增加可能会影响图像的播放效果。
可选地,作为一个实施例,二维平面图像还包括第二区域,第一区域在第二区域内部,重叠区域是第二区域中除第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到二维平面图像的区域,第三区域是球形全景图像中第一视角范围内对应的图像所在的区域。
上述第三区域以上述第一区域映射到球形全景图像中的第四区域的中心为中心,并根据预设的第一视角范围得到。
可选地,作为一个实施例,上述第一视角范围为角度值,所述角度值为360度的约数。例如角度值可以为60度,30度等。
这里是先根据第一视角确定第三区域,然后再将第三区域映射到第二区域,根据第二区域和第一区域最终再确定第一区域的重叠区域。
以图9为例,二维平面图像被划分成9个区域,这9个区域中,每两个相邻的区域之间均存在重叠区域。通过设置不同大小的重叠区域,得出不同重叠区域下二维平面图像的码率以及码率随着不同大小重叠区域的变化情况如表1所示。
表1 重叠区域大小与码率以及码率变化的关系
(表1的具体数值以图片形式给出,无法以文本恢复;表中数据反映不同大小的重叠区域所对应的码率及码率增幅。)
在表1中,随着重叠区域与图像子区域的比例逐渐增加(重叠区域的面积逐渐增加),二维平面图像的码率增加的比例并不明显。也就是说本申请实施例可以在不明显增加二维平面图像对应的码率的情况下,提高图像的显示效果,提升用户体验。
上文从解码端的角度对本申请实施例的图像处理方法进行了详细的描述,下面从编码端的角度对本申请实施例的图像处理方法的整个流程进行详细的介绍。应理解,编码过程与解码过程是对应的,为了简洁,在介绍编码端的图像处理方法时,适当省略在解码端部分已经描述过的内容。
图10是本申请实施例的图像处理方法的示意性流程图。图10所示的图像处理方法可以由编码端设备执行,该图像处理方法包括:
210、将二维平面图像划分为多个区域,该多个区域包含第一区域以及与第一区域相邻的相邻区域,所述第一区域和相邻区域之间存在重叠区域,该二维平面图像是球形全景图像映射得到的图像。
220、对第一区域的图像进行编码,得到第一区域的编码数据。
230、对相邻区域的图像进行编码,得到相邻区域的编码数据。
应理解,当上述图像处理方法由编码端执行时,编码端还会对多个区域中的其他区域的图像进行编码,并最终得到整个二维平面图像的各个区域图像的编码数据,并将该编码数据存储起来或者发送给解码端。当解码端获得上述编码数据后就可以对这些编码数据进行解码,具体解码的过程如图5所示的图像处理方法所示,这里不再重复描述。
本申请实施例中,在划分区域时,将二维平面图像划分成相邻区域之间存在重叠区域的图像,与现有技术中划分的区域没有重叠区域相比,能够使得解码端根据相邻区域在重叠区域的像素点的像素值来确定重叠区域的最终像素值,使得相邻区域之间过渡时像素值的变化比较平缓,从而提高了相邻区域对应的图像在切换时的显示效果,提高了用户体验。
可选地,作为一个实施例,本申请实施例的图像处理方法还包括:
接收来自解码端的第一指示信息,第一指示信息用于指示在对二维平面图像划分区域时,划分得到的第一区域和相邻区域之间存在所述重叠区域。
可选地,上述第一指示信息还用于指示重叠区域的大小或重叠区域相对于第一区域的位置。
通过发送第一指示信息,使得解码端能够方便地确定二维图像的多个区域中的第一区域与第一区域的相邻区域之间存在重叠区域。
可选地,作为一个实施例,本申请实施例的图像处理方法还包括:
向解码端发送第二指示信息,第二指示信息用于指示第一区域和所述相邻区域之间存在所述重叠区域。
可选地,作为一个实施例,上述重叠区域位于第一区域和相邻区域之间,第一区域位于相邻区域的水平方向上,或者第一区域位于相邻区域的竖直方向上。应理解,第一区域还可以是位于与相邻区域成一定夹角的方向上。
可选地,作为一个实施例,重叠区域的大小是根据二维平面图像的大小确定的。
可选地,作为一个实施例,重叠区域的大小是根据第一区域或者相邻区域的大小确定的。
可选地,作为一个实施例,二维平面图像还包括第二区域,第一区域在第二区域内部,重叠区域是第二区域中除第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到二维平面图像的区域,第三区域是球形全景图像中第一视角范围内的图像所在的区域。
上述第三区域以上述第一区域映射到球形全景图像中的第四区域的中心为中心,并根据预设的第一视角范围得到。
可选地,作为一个实施例,上述第一视角范围为角度值,所述角度值为360度的约数。
上文分别从解码端和编码端的角度对本申请实施例的图像处理方法进行了详细的描述,下面结合图11和图12以具体的实例对本申请实施例的图像处理方法进行详细的介绍。
实例一:
对二维平面图像进行解码的具体过程如下:
301、解码端对编码数据进行解码,得到解码图像。
解码端可以从编码端或者云端获取二维平面图像的多个编码数据,对编码数据进行解码可以生成重建图像,该重建图像是由多个区域组成的二维平面图像。在解码时可以采用H.264、H.265等解压缩方法。
302、对多个区域的图像进行处理。
具体地,解码端在获取编码数据时的同时或者之前获取编码端生成的重叠区信息(该重叠信息用于指示编码端将二维平面图像划分为几个区域,哪几个区域之间存在重叠区域,重叠区域的大小和位置等),这样,在对多个区域的图像进行解码后,解码端就可以根据重叠信息确定重建图像的重叠区域的位置和大小,并根据重叠区域所在的不同区域的像素点的像素值来确定重叠区域的像素点的最终像素值。
例如,如图11所示,解码端根据二维平面图像A、B、C、D区域对应的编码数据得到这些区域的重建图像。并根据重叠信息确定A区域和C区域之间以及B区域和D区域之间均存在重叠区域。那么,解码端可以根据A区域和C区域的像素点的像素值确定A区域和C区域之间的重叠区域的像素点的像素值,同样地,解码端也可以采用相同的方法确定B区域和D区域之间的重叠区域的像素点的像素值。
303、将多个区域的图像按照区域位置拼接成全景图像。
以图11中的二维平面图像为例,将A、B、C、D区域的图像按照这些区域图像的分布位置,将这些区域的图像拼接成全景图像。
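步骤303的按位置拼接可以用如下示意性的Python代码表示(以二维列表表示像素,且假设各区域的重叠区域已按前述方法写入目标像素值;函数名与参数均为说明而假设):

```python
def stitch(regions, grid_w, region_w, region_h):
    """将按行优先顺序排列的各区域像素按其在图像中的行列位置拼成一幅完整图像。

    regions: 区域列表,每个区域是 region_h 行、region_w 列的二维像素列表。
    grid_w: 每行包含的区域个数。
    """
    grid_h = len(regions) // grid_w
    canvas = [[0] * (grid_w * region_w) for _ in range(grid_h * region_h)]
    for idx, region in enumerate(regions):
        row0 = (idx // grid_w) * region_h   # 该区域在全图中的起始行
        col0 = (idx % grid_w) * region_w    # 该区域在全图中的起始列
        for r in range(region_h):
            for c in range(region_w):
                canvas[row0 + r][col0 + c] = region[r][c]
    return canvas

# 四个 1×1 的区域 A、B、C、D 按 2×2 网格拼接
print(stitch([[[1]], [[2]], [[3]], [[4]]], grid_w=2, region_w=1, region_h=1))
# -> [[1, 2], [3, 4]]
```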
304、在终端设备上显示步骤303得到的全景图像,或者将该全景图像转化为球面图像并在终端设备上显示。
实例二:
对二维平面图像进行编码的具体过程如下:
401、编码端将球形全景图像转换为二维平面图像。
具体地,上述球形全景图像可以是360度的全景视频图像,二维平面图像可以为2D经纬图,或者是将上述球形全景图像映射到多面体中得到的多面体格式的二维平面图像。
该多面体格式的二维平面图像可以是先将球形全景图像映射到一个正多面体(例如,正六面体)中,然后将正六面体展开就可以得到多面体格式的二维平面图像。
402、编码端将二维平面图像划分为包含重叠区域的多个区域。
上述多个区域的形状可以为任何区域,例如,正方形、矩形、圆形、菱形或者其它不规则的形状等,此外,可以是部分区域之间有重叠区域,也可以是相邻的区域之间均有重叠区域。在划分区域时,编码端可以自行确定如何进行区域的划分,哪些区域之间存在重叠区域等。或者,区域的划分方式由解码端确定,编码端根据解码端发送的指示信息来确定如何进行区域的划分。
在将二维平面图像划分为多个区域之后,可以生成重叠信息,该重叠信息用于指示重叠区域的位置和大小等,编码端可以在向解码端发送编码数据的同时将该重叠信息也发送给解码端。
403、编码端对多个区域的图像进行编码。
编码时可以采用H.264,H.265等压缩方法。
404、得到编码数据。
在得到编码数据后,编码端可以将编码数据传输给解码端,也可以将编码数据存储在云端或者其它存储装置中。
实例三:
本申请实施例的图像处理方法处理的二维平面图像还可以是根据球面全景图像的曲率,将球面全景图像中的不同曲面区域的图像分别映射到平面中,得到二维平面图像(映射过程具体可以参考专利申请号为201610886263.6的方案)。然后根据预设视角对二维平面图像的某个区域的边界进行扩充,使得扩充后的区域与其他相邻区域重叠,从而确定重叠区域,下面结合图12对确定重叠区域的过程进行详细的描述:
501、获取由球面全景图像映射得到的二维平面图像,将二维平面图像划分为多个子区域,确定需要进行扩充的区域E(这里以E区域为例进行说明,实际上二维平面图像中可以有多个需要扩充的区域)。
502、在球面图像中确定与区域E对应的曲面区域E’。
503、找到曲面区域E’的中心点A,将A与球心O连线,可以认为OA为从O点引出的观看球面图像的观看视线。假设存在一个预定视角范围,水平视角范围为θ,垂直视角范围为φ,根据视线OA以及视角范围获得球面上该视角范围对应的图像区域F’(图12左图中的虚线区域映射到球面的区域)。θ和φ的取值要满足区域F’覆盖区域E’,也就是说区域F’包含区域E’。
优选地,在满足区域F’包含区域E’的情况下,θ和φ可以选择30°、45°以及60°等能够整除360°的角度(即360°的约数)。
504、根据区域G和区域E确定E区域的重叠区域,其中G区域是图12左图中与区域F’对应的区域。由于区域G包含区域E,因此,可以将区域G中除区域E之外的其它区域作为重叠区域。
505、按照类似的方式,确定二维图像中的其它区域的子图像的重叠区域。
506、编码端对多个区域的图像进行编码。
507、得到编码数据。
图13是本申请实施例的图像处理方法应用的系统架构的示意图。该架构包含了编码、解码以及显示的整个过程。该架构主要包括:
全景相机:用于360度采集图像并将采集到的图像拼接为全景图像或视频。这里的图像拼接既可以由全景相机来实现,也可以由媒体服务器来实现。
媒体服务器:对全景相机采集或者拼接成的图像进行编码或转码的操作,同时将编码后的数据通过网络传输到终端;此外,媒体服务器还可以根据终端反馈的用户视角,选择需要传输的图像以及要求传输的图像的质量。这里的媒体服务器可以是媒体源服务器,传输服务器,也可以是转码服务器等,媒体服务器可以在网络侧;
终端:这里的终端可以是VR眼镜,手机,平板,电视,电脑等可以连上网络的电子设备。
应理解,在本申请实施例中,对图像的编码和解码处理可以理解为对视频中的图像进行的处理,视频可以理解为在不同时刻采集得到的图像的图像序列,本申请实施例的图像处理方法所处理的图像可以是视频中的单幅图像,也可以是构成视频的图像序列。
上文结合图1至图13,详细的描述了本申请实施例的图像处理方法,下面将结合图14至图17,描述本申请实施例的图像处理装置。
应理解,图14至图17描述的图像处理装置能够实现图1至图13中描述的图像处理方法的各个步骤,为了简洁,适当省略重复的描述。
图14是本申请实施例的图像处理装置的示意性框图。图14所示的图像处理装置300包括:
获取模块310,用于获取二维平面图像的第一区域和与所述第一区域相邻的相邻区域的编码数据,所述二维平面图像由球形全景图像映射得到的图像,其中,所述第一区域和所述相邻区域之间存在重叠区域;
第一确定模块320,用于根据所述第一区域的图像的编码数据确定所述第一区域的图像的像素点的像素值;
第二确定模块330,用于根据所述相邻区域的图像的编码数据确定所述相邻区域的图像的像素点的像素值;
第三确定模块340,用于根据所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值。
本申请实施例中,根据不同区域在重叠区域的像素点的像素值来确定重叠区域的最终像素值,能够使得相邻区域之间过渡时像素值的变化比较平缓,从而提高了相邻区域对应的图像在切换时的显示效果,提高了用户体验。
可选地,作为一个实施例,所述图像处理装置还包括:
发送模块350,用于在获取所述二维平面图像的多个区域的编码数据之前,向编码端发送第一指示信息,所述第一指示信息用于指示所述编码端在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。
可选地,作为一个实施例,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
可选地,作为一个实施例,所述图像处理装置还包括:
接收模块360,用于接收来自编码端的第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
可选地,作为一个实施例,所述第三确定模块340具体用于:
对所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值进行加权和处理,得到所述重叠区域的像素点的目标像素值。
可选地,作为一个实施例,所述第三确定模块340具体用于:当第一差值小于第一预设阈值时,将所述第一区域或者相邻区域在所述重叠区域的像素点的像素值确定为所述目标像素值,所述第一差值为第一区域的图像的分辨率与所述相邻区域的图像的分辨率之间的差值,或者所述第一区域的编码数据的码率与所述相邻区域的编码数据的码率之间的差值。
可选地,作为一个实施例,所述重叠区域位于所述第一区域和所述相邻区域之间,所述第一区域位于所述相邻区域的水平方向上,或者所述第一区域位于所述相邻区域的竖直方向上。
可选地,作为一个实施例,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
可选地,作为一个实施例,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
可选地,作为一个实施例,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内的图像所在的区域。
可选地,作为一个实施例,上述第一视角范围为角度值,所述角度值为360度的约数。
图15是本申请实施例的图像处理装置的示意性框图。图15所示的图像处理装置400包括:
划分模块410,用于将二维平面图像划分为多个区域,所述多个区域包含第一区域和与所述第一区域相邻的相邻区域,所述第一区域和所述相邻区域之间存在重叠区域,所述二维平面图像是球形全景图像映射得到的图像;
第一编码模块420,用于对所述第一区域的图像进行编码,得到所述第一区域的编码数据;
第二编码模块430,用于对所述相邻区域的图像进行编码,得到所述相邻区域的编码数据。
本申请实施例中,在划分区域时,将二维平面图像划分成相邻区域之间存在重叠区域的图像,与现有技术中划分的区域没有重叠区域相比,能够使得解码端根据相邻区域在重叠区域的像素点的像素值来确定重叠区域的最终像素值,使得相邻区域之间过渡时像素值的变化比较平缓,从而提高了相邻区域对应的图像在切换时的显示效果,提高了用户体验。
可选地,作为一个实施例,所述图像处理装置还包括:
接收模块440,用于接收来自解码端的第一指示信息,所述第一指示信息用于指示在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。
可选地,作为一个实施例,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
可选地,作为一个实施例,所述图像处理装置还包括:
发送模块450,用于向所述解码端发送第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
可选地,作为一个实施例,所述重叠区域位于所述第一区域和所述相邻区域之间,所述第一区域位于所述相邻区域的水平方向上,或者所述第一区域位于所述相邻区域的竖直方向上。
可选地,作为一个实施例,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
可选地,作为一个实施例,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
可选地,作为一个实施例,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内的图像所在的区域。
可选地,作为一个实施例,上述第一视角范围为角度值,所述角度值为360度的约数。
图16是本申请实施例的图像处理装置的示意性框图。图16所示的图像处理装置500包括:
存储器510,用于存储程序。
处理器520,用于执行存储器510中存储的程序,当所述程序被执行时,所述处理器520具体用于:
获取二维平面图像的第一区域和与所述第一区域相邻的相邻区域的编码数据,所述二维平面图像由球形全景图像映射得到的图像,其中,所述第一区域和所述相邻区域之间存在重叠区域;
根据所述第一区域的图像的编码数据确定所述第一区域的图像的像素点的像素值;
根据所述相邻区域的图像的编码数据确定所述相邻区域的图像的像素点的像素值;
根据所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值。
本申请实施例中,根据不同区域在重叠区域的像素点的像素值来确定重叠区域的最终像素值,能够使得相邻区域之间过渡时像素值的变化比较平缓,从而提高了相邻区域对应的图像在切换时的显示效果,提高了用户体验。
可选地,作为一个实施例,所述图像处理装置500还包括:
收发器530,用于在获取所述二维平面图像的多个区域的编码数据之前,向编码端发送第一指示信息,所述第一指示信息用于指示所述编码端在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。
可选地,作为一个实施例,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
可选地,作为一个实施例,所述收发器530用于:接收来自编码端的第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
可选地,作为一个实施例,所述处理器520具体用于:对所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值进行加权和处理,得到所述重叠区域的像素点的目标像素值。
可选地,作为一个实施例,当第一差值小于第一预设阈值时,将所述第一区域或者相邻区域在所述重叠区域的像素点的像素值确定为所述目标像素值,所述第一差值为第一区域的图像的分辨率与所述相邻区域的图像的分辨率之间的差值,或者所述第一区域的编码数据的码率与所述相邻区域的编码数据的码率之间的差值。
可选地,作为一个实施例,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
可选地,作为一个实施例,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
可选地,作为一个实施例,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内的图像所在的区域。
可选地,作为一个实施例,上述第一视角范围为角度值,所述角度值为360度的约数。
图17是本申请实施例的图像处理装置的示意性框图。图17所示的图像处理装置600包括:
存储器610,用于存储程序。
处理器620,用于执行存储器610中存储的程序,当所述程序被执行时,所述处理器620具体用于:
将二维平面图像划分为多个区域,所述多个区域包含第一区域和与所述第一区域相邻的相邻区域,所述第一区域和所述相邻区域之间存在重叠区域,所述二维平面图像是球形全景图像映射得到的图像;
对所述第一区域的图像进行编码,得到所述第一区域的编码数据;
对所述相邻区域的图像进行编码,得到所述相邻区域的编码数据。
本申请实施例中,在划分区域时,将二维平面图像划分成相邻区域之间存在重叠区域的图像,与现有技术中划分的区域没有重叠区域相比,能够使得解码端根据相邻区域在重叠区域的像素点的像素值来确定重叠区域的最终像素值,使得相邻区域之间过渡时像素值的变化比较平缓,从而提高了相邻区域对应的图像在切换时的显示效果,提高了用户体验。
可选地,作为一个实施例,所述图像处理装置600还包括:
收发器630,用于在获取所述二维平面图像的多个区域的编码数据之前,向编码端发送第一指示信息,所述第一指示信息用于指示所述编码端在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。
可选地,作为一个实施例,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
可选地,作为一个实施例,所述收发器630用于接收来自编码端的第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
可选地,作为一个实施例,所述处理器620具体用于:对所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值进行加权和处理,得到所述重叠区域的像素点的目标像素值。
可选地,作为一个实施例,所述处理器620具体用于:当第一差值小于第一预设阈值时,将所述第一区域或者相邻区域在所述重叠区域的像素点的像素值确定为所述目标像素值,所述第一差值为第一区域的图像的分辨率与所述相邻区域的图像的分辨率之间的差值,或者所述第一区域的编码数据的码率与所述相邻区域的编码数据的码率之间的差值。
可选地,作为一个实施例,所述重叠区域位于所述第一区域和所述相邻区域之间,所述第一区域位于所述相邻区域的水平方向上,或者所述第一区域位于所述相邻区域的竖直方向上。
可选地,作为一个实施例,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
可选地,作为一个实施例,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
可选地,作为一个实施例,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内的图像所在的区域。
可选地,作为一个实施例,上述第一视角范围为角度值,所述角度值为360度的约数。
本申请的技术可以广泛地由多种装置或设备来实施,所述装置或设备包含无线手持机、集成电路(IC)或IC集合(例如,芯片组)。在本申请中描述各种组件、模块或单元以强调经配置以执行所揭示技术的装置的功能方面,但未必要求通过不同硬件单元来实现。确切地说,如上文所描述,各种单元可组合于编解码器硬件单元中,或通过交互操作性硬件单元(包含如上文所描述的一个或多个处理器)的集合结合合适软件及/或固件来提供。
应理解,说明书通篇中提到的“一个实施方式”或“一实施方式”意味着与实施方式有关的特定特征、结构或特性包括在本申请的至少一个实施方式中。因此,在整个说明书各处出现的“在一个实施方式中”或“在一实施方式中”未必一定指相同的实施方式。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施方式中。
在本申请的各种实施方式中,应理解,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施方式的实施过程构成任何限定。
另外,本文中术语“系统”和“网络”在本文中常可互换使用。应理解,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
在本申请所提供的实施方式中,应理解,“与A相应的B”表示B与A相关联,根据A可以确定B。但还应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信息确定B。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (40)

  1. 一种图像处理方法,其特征在于,包括:
    获取二维平面图像的第一区域和与所述第一区域相邻的相邻区域的编码数据,所述二维平面图像由球形全景图像映射得到的图像,其中,所述第一区域和所述相邻区域之间存在重叠区域;
    根据所述第一区域的图像的编码数据确定所述第一区域的图像的像素点的像素值;
    根据所述相邻区域的图像的编码数据确定所述相邻区域的图像的像素点的像素值;
    根据所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值。
  2. 如权利要求1所述的方法,其特征在于,在获取所述二维平面图像的多个区域的编码数据之前,所述方法还包括:
    向编码端发送第一指示信息,所述第一指示信息用于指示所述编码端在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。
  3. 如权利要求2所述的方法,其特征在于,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
  4. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    接收来自编码端的第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
  5. 如权利要求1-4中任一项所述的方法,其特征在于,根据所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值,包括:
    对所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值进行加权处理,得到所述重叠区域的像素点的目标像素值。
  6. 如权利要求1-4中任一项所述的方法,其特征在于,所述方法还包括:
    所述根据所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值,包括:
    当第一差值小于第一预设阈值时,将所述第一区域或者相邻区域在所述重叠区域的像素点的像素值确定为所述目标像素值,所述第一差值为第一区域的图像的分辨率与所述相邻区域的图像的分辨率之间的差值,或者所述第一区域的编码数据的码率与所述相邻区域的编码数据的码率之间的差值。
  7. 如权利要求1-6中任一项所述的方法,其特征在于,所述重叠区域位于所述第一区域和所述相邻区域之间,所述第一区域位于所述相邻区域的水平方向上,或者所述第一区域位于所述相邻区域的竖直方向上。
  8. 如权利要求1-7中任一项所述的方法,其特征在于,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
  9. 如权利要求1-7中任一项所述的方法,其特征在于,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
  10. 如权利要求1-6中任一项所述的方法,其特征在于,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内对应的图像所在的区域。
  11. 如权利要求10所述的方法,其特征在于,所述第一视角范围为角度值,所述角度值为360度的约数。
  12. 一种图像处理方法,其特征在于,包括:
    将二维平面图像划分为多个区域,所述多个区域包含第一区域和与所述第一区域相邻的相邻区域,所述第一区域和所述相邻区域之间存在重叠区域,所述二维平面图像是球形全景图像映射得到的图像;
    对所述第一区域的图像进行编码,得到所述第一区域的编码数据;
    对所述相邻区域的图像进行编码,得到所述相邻区域的编码数据。
  13. 如权利要求12所述的方法,其特征在于,在所述将二维平面图像划分为多个区域之前,所述方法还包括:
    接收来自解码端的第一指示信息,所述第一指示信息用于指示在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。
  14. 如权利要求13所述的方法,其特征在于,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
  15. 如权利要求12所述的方法,其特征在于,所述方法还包括:
    向所述解码端发送第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
  16. 如权利要求12-15中任一项所述的方法,其特征在于,所述重叠区域位于所述第一区域和所述相邻区域之间,所述第一区域位于所述相邻区域的水平方向上,或者所述第一区域位于所述相邻区域的竖直方向上。
  17. 如权利要求12-16中任一项所述的方法,其特征在于,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
  18. 如权利要求12-16中任一项所述的方法,其特征在于,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
  19. 如权利要求12-15中任一项所述的方法,其特征在于,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内的图像所在的区域。
  20. 如权利要求19所述的方法,其特征在于,所述第一视角范围为角度值,所述角度值为360度的约数。
  21. 一种图像处理装置,其特征在于,包括:
    获取模块,用于获取二维平面图像的第一区域和与所述第一区域相邻的相邻区域的编码数据,所述二维平面图像由球形全景图像映射得到的图像,其中,所述第一区域和所述相邻区域之间存在重叠区域;
    第一确定模块,用于根据所述第一区域的图像的编码数据确定所述第一区域的图像的像素点的像素值;
    第二确定模块,用于根据所述相邻区域的图像的编码数据确定所述相邻区域的图像的像素点的像素值;
    第三确定模块,用于根据所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值,确定所述重叠区域的像素点的目标像素值。
  22. 如权利要求21所述的图像处理装置,其特征在于,所述图像处理装置还包括:
    发送模块,用于在获取所述二维平面图像的多个区域的编码数据之前,向编码端发送第一指示信息,所述第一指示信息用于指示所述编码端在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。
  23. 如权利要求22所述的图像处理装置,其特征在于,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
  24. 如权利要求21所述的图像处理装置,其特征在于,所述图像处理装置还包括:
    接收模块,用于接收来自编码端的第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
  25. 如权利要求21-24中任一项所述的图像处理装置,其特征在于,所述第三确定模块具体用于:
    对所述第一区域在所述重叠区域的像素点的像素值以及所述相邻区域在所述重叠区域的像素点的像素值进行加权和处理,得到所述重叠区域的像素点的目标像素值。
  26. 如权利要求21-24中任一项所述的图像处理装置,其特征在于,所述第三确定模块具体用于:
    当第一差值小于第一预设阈值时,将所述第一区域或者相邻区域在所述重叠区域的像素点的像素值确定为所述目标像素值,所述第一差值为第一区域的图像的分辨率与所述相邻区域的图像的分辨率之间的差值,或者所述第一区域的编码数据的码率与所述相邻区域的编码数据的码率之间的差值。
  27. 如权利要求21-26中任一项所述的图像处理装置,其特征在于,所述重叠区域位于所述第一区域和所述相邻区域之间,所述第一区域位于所述相邻区域的水平方向上,或者所述第一区域位于所述相邻区域的竖直方向上。
  28. 如权利要求21-27中任一项所述的图像处理装置,其特征在于,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
  29. 如权利要求21-27中任一项所述的图像处理装置,其特征在于,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
  30. 如权利要求21-26中任一项所述的图像处理装置,其特征在于,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内的图像所在的区域。
  31. 如权利要求30所述的图像处理装置,其特征在于,所述第一视角范围为角度值,所述角度值为360度的约数。
  32. 一种图像处理装置,其特征在于,包括:
    划分模块,用于将二维平面图像划分为多个区域,所述多个区域包含第一区域和与所述第一区域相邻的相邻区域,所述第一区域和所述相邻区域之间存在重叠区域,所述二维平面图像是球形全景图像映射得到的图像;
    第一编码模块,用于对所述第一区域的图像进行编码,得到所述第一区域的编码数据;
    第二编码模块,用于对所述相邻区域的图像进行编码,得到所述相邻区域的编码数据。
  33. 如权利要求32所述的图像处理装置,其特征在于,所述图像处理装置还包括:
    接收模块,用于接收来自解码端的第一指示信息,所述第一指示信息用于指示在对所述二维平面图像划分区域时,划分得到的所述第一区域和所述相邻区域之间存在所述重叠区域。
  34. 如权利要求33所述的图像处理装置,其特征在于,所述第一指示信息还用于指示所述重叠区域的大小或所述重叠区域相对于所述第一区域的位置。
  35. 如权利要求32所述的图像处理装置,其特征在于,所述图像处理装置还包括:
    发送模块,用于向所述解码端发送第二指示信息,所述第二指示信息用于指示所述第一区域和所述相邻区域之间存在所述重叠区域。
  36. 如权利要求32-35中任一项所述的图像处理装置,其特征在于,所述重叠区域位于所述第一区域和所述相邻区域之间,所述第一区域位于所述相邻区域的水平方向上,或者所述第一区域位于所述相邻区域的竖直方向上。
  37. 如权利要求32-36中任一项所述的图像处理装置,其特征在于,所述重叠区域的大小是根据所述二维平面图像的大小确定的。
  38. 如权利要求32-36中任一项所述的图像处理装置,其特征在于,所述重叠区域的大小是根据所述第一区域或者相邻区域的大小确定的。
  39. 如权利要求32-35中任一项所述的图像处理装置,其特征在于,所述二维平面图像还包括第二区域,所述第一区域在所述第二区域内部,所述重叠区域是所述第二区域中除所述第一区域之外的区域,第二区域是球形全景图像中的第三区域的图像映射到所述二维平面图像的区域,所述第三区域是所述球形全景图像中第一视角范围内的图像所在的区域。
  40. 如权利要求39所述的图像处理装置,其特征在于,所述第一视角范围为角度值,所述角度值为360度的约数。
PCT/CN2017/105090 2016-10-13 2017-09-30 图像处理方法和装置 WO2018068680A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17860369.2A EP3518177A4 (en) 2016-10-13 2017-09-30 METHOD AND DEVICE FOR IMAGE PROCESSING
US16/383,022 US11138460B2 (en) 2016-10-13 2019-04-12 Image processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610896459.3A CN107945101B (zh) 2016-10-13 2016-10-13 图像处理方法和装置
CN201610896459.3 2016-10-13

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/383,022 Continuation US11138460B2 (en) 2016-10-13 2019-04-12 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2018068680A1 true WO2018068680A1 (zh) 2018-04-19

Family

ID=61905172

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/105090 WO2018068680A1 (zh) 2016-10-13 2017-09-30 图像处理方法和装置

Country Status (4)

Country Link
US (1) US11138460B2 (zh)
EP (1) EP3518177A4 (zh)
CN (1) CN107945101B (zh)
WO (1) WO2018068680A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102598082B1 (ko) * 2016-10-28 2023-11-03 삼성전자주식회사 영상 표시 장치, 모바일 장치 및 그 동작방법
CN109725864B (zh) * 2018-12-24 2022-05-17 广州励丰文化科技股份有限公司 一种基于edid自定义分辨率的方法及系统
CN109767382B (zh) * 2019-01-21 2023-02-28 上海联影智能医疗科技有限公司 图像重建方法、装置、计算机设备和存储介质
WO2021120188A1 (en) * 2019-12-20 2021-06-24 Qualcomm Incorporated Image fusion
CN111885310A (zh) * 2020-08-31 2020-11-03 深圳市圆周率软件科技有限责任公司 一种全景数据处理方法、处理设备和播放设备
CN112543345B (zh) * 2020-12-02 2023-01-06 深圳创维新世界科技有限公司 图像处理方法、发送端、接收端以及计算机可读存储介质
CN115861050A (zh) * 2022-08-29 2023-03-28 如你所视(北京)科技有限公司 用于生成全景图像的方法、装置、设备和存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201523430U (zh) * 2009-06-23 2010-07-07 长峰科技工业集团公司 Panoramic video surveillance system
CN102013110A (zh) * 2010-11-23 2011-04-13 李建成 Three-dimensional panoramic image generation method and system
CN103227914A (zh) * 2013-05-17 2013-07-31 天津芬奇动视文化传播有限公司 Multimedia edge-blending technology application
US20140152658A1 (en) * 2012-12-05 2014-06-05 Samsung Electronics Co., Ltd. Image processing apparatus and method for generating 3d image thereof
CN105791882A (zh) * 2016-03-22 2016-07-20 腾讯科技(深圳)有限公司 Video encoding method and apparatus
CN106023074A (zh) * 2016-05-06 2016-10-12 安徽伟合电子科技有限公司 Video image stitching method using partitioned acquisition

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4754492A (en) * 1985-06-03 1988-06-28 Picturetel Corporation Method and system for adapting a digitized signal processing system for block processing with minimal blocking artifacts
WO1995015530A1 (en) * 1993-11-30 1995-06-08 Polaroid Corporation Image coding by use of discrete cosine transforms
US5657073A (en) 1995-06-01 1997-08-12 Panoramic Viewing Systems, Inc. Seamless multi-camera panoramic imaging with distortion correction and selectable field of view
ES2211685T3 (es) 1996-01-29 2004-07-16 Matsushita Electric Industrial Co., Ltd. Decoder with means for supplementing a digital image with a picture element.
US6057847A (en) * 1996-12-20 2000-05-02 Jenkins; Barry System and method of image generation and encoding using primitive reprojection
US6111582A (en) * 1996-12-20 2000-08-29 Jenkins; Barry L. System and method of image generation and encoding using primitive reprojection
US6486908B1 (en) * 1998-05-27 2002-11-26 Industrial Technology Research Institute Image-based method and system for building spherical panoramas
US7015954B1 (en) 1999-08-09 2006-03-21 Fuji Xerox Co., Ltd. Automatic video system using multiple cameras
US6978051B2 (en) * 2000-03-06 2005-12-20 Sony Corporation System and method for capturing adjacent images by utilizing a panorama mode
JP2003141562A (ja) * 2001-10-29 2003-05-16 Sony Corp Image processing apparatus and image processing method for non-planar images, storage medium, and computer program
JP4297111B2 (ja) * 2005-12-14 2009-07-15 ソニー株式会社 Imaging apparatus, image processing method, and program therefor
US20070200926A1 (en) * 2006-02-28 2007-08-30 Chianglin Yi T Apparatus and method for generating panorama images
CN101895693A (zh) * 2010-06-07 2010-11-24 北京高森明晨信息科技有限公司 Method and apparatus for generating panoramic images
US9007432B2 (en) * 2010-12-16 2015-04-14 The Massachusetts Institute Of Technology Imaging systems and methods for immersive surveillance
CN102903090B (zh) * 2012-01-20 2015-11-25 李文松 Panoramic stereoscopic image synthesis method, apparatus, system, and browsing device
EP2645713A1 (en) * 2012-03-30 2013-10-02 Alcatel Lucent Method and apparatus for encoding a selected spatial portion of a video stream
JP2013228896A (ja) * 2012-04-26 2013-11-07 Sony Corp Image processing apparatus, image processing method, and program
JP6494294B2 (ja) * 2014-05-15 2019-04-03 キヤノン株式会社 Image processing apparatus and imaging system
US20170026659A1 (en) * 2015-10-13 2017-01-26 Mediatek Inc. Partial Decoding For Arbitrary View Angle And Line Buffer Reduction For Virtual Reality Video
CN105678693B (zh) * 2016-01-25 2019-05-14 成都易瞳科技有限公司 Panoramic video browsing and playback method
US9992502B2 (en) * 2016-01-29 2018-06-05 Gopro, Inc. Apparatus and methods for video compression using multi-resolution scalable coding
US10074161B2 (en) * 2016-04-08 2018-09-11 Adobe Systems Incorporated Sky editing based on image composition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3518177A4 *

Also Published As

Publication number Publication date
CN107945101A (zh) 2018-04-20
US20190236400A1 (en) 2019-08-01
US11138460B2 (en) 2021-10-05
EP3518177A4 (en) 2019-08-21
EP3518177A1 (en) 2019-07-31
CN107945101B (zh) 2021-01-29

Similar Documents

Publication Publication Date Title
WO2018068680A1 (zh) Image processing method and apparatus
US11257283B2 (en) Image reconstruction method, system, device and computer-readable storage medium
EP3669333B1 (en) Sequential encoding and decoding of volumetric video
CN109644279B (zh) Method and system for signaling 360-degree video information
CN109565610B (zh) Method, apparatus, and storage medium for processing omnidirectional video
KR102013403B1 (ko) Spherical image streaming
US10467775B1 (en) Identifying pixel locations using a transformation function
CN112204993B (zh) Adaptive panoramic video streaming using overlapping partitioned segments
WO2018095087A1 (zh) Deblocking filtering method and terminal
EP3656126A1 (en) Methods, devices and stream for encoding and decoding volumetric video
WO2019012067A1 (en) METHODS, DEVICES AND STREAMS FOR VOLUMETRIC VIDEO ENCODING AND DECODING
TW201832186A (zh) Image mapping and processing method, apparatus, and machine-readable medium
WO2018044917A1 (en) Selective culling of multi-dimensional data sets
KR20190029735A (ko) System and method for improving efficiency in curved-view video encoding/decoding
WO2020063547A1 (zh) Spherical image processing method, apparatus, and server
US10958950B2 (en) Method, apparatus and stream of formatting an immersive video for legacy and immersive rendering devices
CN111669564B (zh) Image reconstruction method, system, device, and computer-readable storage medium
WO2022022348A1 (zh) Video compression method, decompression method, apparatus, electronic device, and storage medium
CN111667438B (zh) Video reconstruction method, system, device, and computer-readable storage medium
TWI681662B (zh) Method and apparatus for reducing artifacts in projection-based frames
EP3646286A1 (en) Apparatus and method for decoding and coding panoramic video
WO2022073796A1 (en) A method and apparatus for adapting a volumetric video to client devices
JP2022541908A (ja) Method and apparatus for delivering volumetric video content
WO2019127100A1 (zh) Video encoding method, apparatus, and computer system
WO2023280266A1 (zh) Fisheye image compression, fisheye video stream compression, and panoramic video generation methods

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17860369

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017860369

Country of ref document: EP

Effective date: 20190425