WO2019214594A1 - Image processing method, related device, and computer storage medium - Google Patents

Image processing method, related device, and computer storage medium

Info

Publication number
WO2019214594A1
WO2019214594A1 (application PCT/CN2019/085787)
Authority
WO
WIPO (PCT)
Prior art keywords
reference pixel
image
coordinate position
coordinate
pixel points
Prior art date
Application number
PCT/CN2019/085787
Other languages
English (en)
French (fr)
Inventor
宋翼
邸佩云
张赛萍
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2019214594A1
Priority to US17/090,394 (US11416965B2)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method, a related device, and a computer storage medium.
  • Image interpolation is a traditional technique for scaling images in modern digital image processing; common algorithms include the nearest-neighbor interpolation algorithm, the bilinear interpolation algorithm, and so on.
  • However, these image interpolation algorithms were all designed for planar images, for which they perform well; they are not suitable for non-planar (curved surface) images, such as large-view images, e.g., 360° (panoramic) images.
  • the embodiments of the present invention disclose an image processing method, a related device, and a computer storage medium, which address the loss of image interpolation performance and efficiency that occurs in the prior art when a planar image algorithm is applied to a non-planar image.
  • an embodiment of the present invention provides an image interpolation method, where the method includes:
  • the target image is a curved image or a planar image having a spherical image format.
  • the source image and the target image differ in spherical image format, and/or differ in image resolution. Specifically, when the source image and the target image are both planar images having a spherical image format, their spherical image formats are different.
  • the m reference pixel points are obtained by sampling in the longitude direction and/or the latitude direction around the first coordinate position;
  • some of the m reference pixel points share the same ordinate or latitude coordinate, and/or some of the m reference pixel points share the same abscissa or longitude coordinate.
  • the coordinate positions of the m reference pixel points are not all the same; that is, no two of the m reference pixel points have exactly the same coordinate position.
  • the source image is a planar image having a spherical image format, the first coordinate position being a position of a point composed of an abscissa and an ordinate in a plane coordinate system;
  • the longitude direction is determined according to the position mapping relationship between the geographic coordinate system and the plane coordinate system of the source image, such that the latitude value corresponding to a coordinate position of the source image does not change along the longitude direction; the latitude direction is determined according to the same position mapping relationship, such that the longitude value corresponding to a coordinate position of the source image does not change along the latitude direction.
  • the longitude direction is a direction in which the latitude coordinate of the geographic coordinate system remains unchanged, and is determined in the source image according to the position mapping relationship between the geographic coordinate system and the plane coordinate system of the source image.
  • the latitude direction is a direction in which the longitude coordinate remains unchanged in the geographic coordinate system, and is determined in the source image according to the position mapping relationship between the geographic coordinate system and the plane coordinate system of the source image.
  • the spherical distance between the coordinate position of any one of the m reference pixel points and the first coordinate position includes a first spherical distance and a second spherical distance, the first spherical distance being the spherical distance in the longitude direction between the coordinate position of that reference pixel point and the first coordinate position, and the second spherical distance being the spherical distance in the latitude direction between the coordinate position of that reference pixel point and the first coordinate position;
  • Determining, according to a spherical distance between the coordinate positions of the m reference pixel points and the first coordinate position, an interpolation weight of each of the m reference pixel points to the pixel to be interpolated comprises:
  • determining a unit distance, the unit distance including a first unit distance and a second unit distance, where the first unit distance is the distance between the first reference pixel point and the second reference pixel point in the longitude direction, and
  • the second unit distance is the distance between the third reference pixel point and the fourth reference pixel point in the latitude direction;
  • the first reference pixel point and the second reference pixel point are the two reference pixel points, among the m reference pixel points, nearest in the longitude direction to the first coordinate position (specifically, to the longitude coordinate corresponding to the first coordinate position).
  • the first reference pixel point and the second reference pixel point correspond to the same latitude coordinate.
  • the third reference pixel point and the fourth reference pixel point are the two reference pixel points, among the m reference pixel points, nearest in the latitude direction to the first coordinate position (specifically, to the latitude coordinate corresponding to the first coordinate position).
  • the third reference pixel point and the fourth reference pixel point correspond to the same longitude coordinate.
  • the first unit distance Ud_λ can be calculated by the following formula:
  • the coordinate position of the first reference pixel A is (λ_A, φ_A), the coordinate position of the second reference pixel B is (λ_B, φ_B), and the first coordinate position is (λ, φ).
  • λ is the longitude coordinate and φ is the latitude coordinate.
  • the second unit distance Ud_φ can be calculated by the following formula:
  • the coordinate position of the third reference pixel point C is (λ_C, φ_C), and the coordinate position of the fourth reference pixel point D is (λ_D, φ_D).
  • λ is the longitude coordinate and φ is the latitude coordinate.
  • the first spherical distance and the second spherical distance can be calculated according to the following formulas:
  • the coordinate position of any of the reference pixel points is (λ_ij, φ_ij), and the first coordinate position is (λ, φ).
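  • The formula images themselves are not reproduced in this text. A plausible reconstruction, assuming the unit distances and spherical distances are arc lengths on a sphere of radius R measured along the parallel at the latitude φ of the first coordinate position (longitude direction) and along a meridian (latitude direction), is:

$$Ud_{\lambda} = R\,\cos\varphi\,\lvert \lambda_A - \lambda_B \rvert, \qquad Ud_{\varphi} = R\,\lvert \varphi_C - \varphi_D \rvert$$

$$d_{\lambda} = R\,\cos\varphi\,\lvert \lambda_{ij} - \lambda \rvert, \qquad d_{\varphi} = R\,\lvert \varphi_{ij} - \varphi \rvert$$

Here d_λ and d_φ denote the first and second spherical distances; these symbol names are introduced here for convenience and do not appear in the patent text.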
  • determining the interpolation weights of the m reference pixel points to the pixel to be interpolated includes:
  • determining the interpolation weight of each of the m reference pixel points to the pixel to be interpolated according to a first weight component and a second weight component of each of the m reference pixel points with respect to the pixel to be interpolated.
  • determining the first weight components of the m reference pixel points according to the first unit distance and the first spherical distance between the respective coordinate positions of the m reference pixel points and the first coordinate position includes:
  • the first weight component of any one of the m reference pixel points to the pixel to be interpolated can be calculated by the following formula:
  • determining the second weight components of the m reference pixel points according to the second unit distance and the second spherical distance between the respective coordinate positions of the m reference pixel points and the first coordinate position includes:
  • the second weight component of any one of the m reference pixel points to the pixel to be interpolated can be calculated by the following formula:
  • the interpolation weight L(λ_ij, φ_ij) of any one of the m reference pixel points to the pixel to be interpolated may be calculated by using the following formula:
  • the pixel value P_o of the pixel to be interpolated may be obtained by the following formula:
  • P_o is the pixel value of the pixel to be interpolated.
  • P_ij is the pixel value of any of the m reference pixel points.
  • L(λ_ij, φ_ij) is the interpolation weight of that reference pixel point to the pixel to be interpolated.
  • a is the number of reference pixel points obtained by sampling in the longitude direction.
  • b is the number of reference pixel points obtained by sampling in the latitude direction.
  • the longitude direction is a direction in which the longitude coordinate value changes the fastest, or the longitude direction is a direction in which the latitude coordinate value remains unchanged.
  • the latitude direction is the direction in which the latitude coordinate value changes the fastest, or the latitude direction is the direction in which the longitude coordinate value remains unchanged.
  • the coordinate position is the position of a point composed of an abscissa and an ordinate in the plane coordinate system of the planar image, or the position of a point composed of a longitude coordinate and a latitude coordinate in the geographic coordinate system of the curved image.
  • an embodiment of the present invention provides a terminal device, including a processing unit, where:
  • the processing unit is configured to determine, according to a coordinate position of the pixel to be interpolated in the target image, a first coordinate position corresponding to the pixel to be interpolated in the source image, where the source image is a curved image to be converted, or a planar image having a spherical image format to be converted, the target image being the image after the source image conversion;
  • the processing unit is further configured to determine, according to the first coordinate position, m reference pixel points, where the m reference pixel points are located in the source image, where m is a positive integer;
  • the processing unit is further configured to determine, according to the spherical distance between the coordinate position of each of the m reference pixel points and the first coordinate position, the interpolation weight of each of the m reference pixel points to the pixel to be interpolated;
  • the processing unit is further configured to determine the pixel value of the pixel to be interpolated according to the pixel value corresponding to each of the m reference pixel points and the interpolation weight of each of the m reference pixel points to the pixel to be interpolated, so as to obtain the target image.
  • the spherical distance between the coordinate position of any one of the m reference pixel points and the first coordinate position includes a first spherical distance and a second spherical distance, the first spherical distance being the spherical distance in the longitude direction between the coordinate position of that reference pixel point and the first coordinate position, and the second spherical distance being the spherical distance in the latitude direction between the coordinate position of that reference pixel point and the first coordinate position;
  • the processing unit is specifically configured to determine a unit distance, where the unit distance includes a first unit distance and a second unit distance, the first unit distance being the distance between the first reference pixel point and the second reference pixel point in the longitude direction, and the second unit distance being the distance between the third reference pixel point and the fourth reference pixel point in the latitude direction;
  • the processing unit is configured to determine, according to the unit distance and the spherical distances between the coordinate positions of the m reference pixel points and the first coordinate position, the interpolation weights of the m reference pixel points to the pixel to be interpolated.
  • the terminal device further includes a communication unit, configured to transfer images, for example to acquire the source image or to send the target image.
  • an embodiment of the present invention provides a terminal device, including a memory and a processor coupled to the memory; the memory is configured to store instructions, and the processor is configured to execute the instructions; the method described in the first aspect above is performed when the processor executes the instructions.
  • the terminal device further includes a display coupled to the processor, the display for displaying an image (specifically a target image or a source image) under the control of the processor.
  • the terminal device further includes a communication interface, the communication interface is in communication with the processor, and the communication interface is configured to perform with other devices (such as a network device, etc.) under the control of the processor. Communication.
  • a computer readable storage medium storing program code is provided.
  • the program code includes instructions for performing the method described in the first aspect above.
  • the degradation of image interpolation performance and efficiency caused in the prior art by applying a planar image interpolation algorithm to a non-planar (curved surface) image can be avoided, thereby effectively improving the performance and efficiency of non-planar image interpolation.
  • FIG. 1A is a schematic diagram of a spherical image according to an embodiment of the present invention.
  • FIG. 1B is a schematic diagram of a planar image having a spherical image format according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 3A and FIG. 3B are schematic diagrams of two reference pixel points provided by an embodiment of the present invention.
  • FIG. 4A and FIG. 4B are schematic diagrams of two reference pixel regions provided by an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of another image processing method according to an embodiment of the present invention.
  • FIG. 6A and FIG. 6B are schematic diagrams of two other reference pixel points provided by an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of another image processing method according to an embodiment of the present invention.
  • FIG. 8A and FIG. 8B are schematic diagrams of two other reference pixel points provided by an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of another reference pixel point provided by an embodiment of the present invention.
  • FIG. 10A and FIG. 10B are schematic diagrams of two other reference pixel points provided by an embodiment of the present invention.
  • FIG. 11A and FIG. 11B are schematic structural diagrams of two types of terminal devices according to an embodiment of the present invention.
  • Panoramic video: also known as 360-degree panoramic video or 360 video, video shot in 360 degrees with multiple cameras. While watching, users can adjust their viewing angle.
  • the frame image constituting the panoramic video may be referred to as a panoramic image or a 360° image.
  • Large-view video: video that covers a wide range of viewing angles, such as 360°, 720°, and the like. Accordingly, a frame image of a large-view video may be referred to as a large-view image.
  • Interpolation: constructing new data points based on known discrete data points.
  • Integer pixel point: a pixel in the image whose coordinate position is an integer in the reference coordinate system.
  • Sub-pixel point: a pixel point in the image whose coordinate position is a non-integer in the reference coordinate system.
  • Pixel: one of the small squares or dots into which an image is divided.
  • In this application, "pixel point" is used as a general term covering both integer pixel points and sub-pixel points; where a specific kind is required, it is stated explicitly below.
  • Reference pixel point: also known as an interpolation reference pixel, a pixel used to generate the pixel to be interpolated during image pixel interpolation.
  • the reference pixel point is generally selected in a specified area closest to the pixel point to be interpolated, which will be described in detail below.
  • Plane Cartesian coordinate system: also known as a plane coordinate system or Cartesian coordinate system, a coordinate system composed of two number axes that are perpendicular to each other and share a common origin in the same plane. The two axes are placed horizontally and vertically; the vertical axis is usually called the y axis and the horizontal axis the x axis. Accordingly, the position (coordinate position) of a point in the plane coordinate system can be represented by its abscissa in the x direction and its ordinate in the y direction.
  • Geographic coordinate system: the spherical coordinate system that indicates the position of a ground point by longitude and latitude.
  • In this system, latitude lines run horizontally (east-west) and longitude lines run vertically (north-south).
  • the position (coordinate position) of the point in the geographic coordinate system can be expressed by the longitude coordinate (longitude value or longitude coordinate value) of the point in the longitude direction and the latitude coordinate (latitude value or latitude coordinate value) in the latitude direction.
  • the longitude direction refers to the direction in which the longitude coordinate value changes the fastest, and may also refer to the direction in which the latitude coordinate value remains unchanged.
  • the latitude direction refers to the direction in which the latitude coordinate value changes the fastest, and can also refer to the direction in which the longitude coordinate value remains unchanged.
  • Great circle: also known as a big circle, defined as the intersection of a sphere with a plane that passes through the sphere's center point. Note 1: a great circle is also known as an orthodrome or Riemannian circle. Note 2: the center of a great circle coincides with the center of the sphere.
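  • Since the interpolation weights below are built from spherical distances, the standard great-circle (haversine) distance between two geographic coordinates is useful background. The following is a minimal sketch; the function name and the unit-radius default are illustrative, not from the patent:

```python
import math

def great_circle_distance(lam1, phi1, lam2, phi2, radius=1.0):
    """Great-circle distance between (lam1, phi1) and (lam2, phi2),
    with longitudes/latitudes in radians, via the haversine formula."""
    h = (math.sin((phi2 - phi1) / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin((lam2 - lam1) / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(h))
```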
  • Planar image: an image in a plane coordinate system, that is, an image whose parts all lie in the same plane.
  • Curved image: also known as a non-planar image, an image whose parts do not all lie in a single plane.
  • For example, a 360° image (panoramic image) is one kind of curved image, also referred to as a spherical image.
  • a schematic diagram of a spherical image is shown in FIG. 1A.
  • Spherical image format: a storage or transmission format of an image, detailed later in this application. Illustratively, a schematic diagram of a planar image having a spherical image format is shown in FIG. 1B; the black area in FIG. 1B can be understood as the image region onto which part of the curved image is mapped in the planar image.
  • the inventor of the present application found in the course of this application that, because the angle of view covered by a large-view video (image) is so large, it is essentially a curved image (that is, a non-planar image); for example, a panoramic image is in essence a spherical panoramic image.
  • Deformation occurs when processing large-view images. For example, when a large-view (non-planar) image is converted/mapped into a planar image, or a planar image is mapped to a large-view image, different degrees of deformation arise, which changes the correlation (or spacing) between adjacent pixels of the mapped image. If an existing image interpolation algorithm is then applied, the performance and efficiency of image interpolation are greatly reduced.
  • FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • the method shown in Figure 2 includes the following implementation steps:
  • Step S102: The terminal device determines, according to the coordinate position of the pixel to be interpolated in the target image, the first coordinate position corresponding to the pixel to be interpolated in the source image, where the source image is a curved surface image to be converted or a planar image having a spherical image format, and the target image is the image after the source image is converted.
  • the coordinate position of the pixel in the image may be a coordinate position in the geographic coordinate system, that is, the coordinate position is specifically composed of longitude coordinates and latitude coordinates.
  • the coordinate position of the pixel in the image may be a coordinate position in the plane coordinate system, that is, the coordinate position is specifically composed of the abscissa and the ordinate.
  • the spherical image format will be described in detail below.
  • Step S104 The terminal device determines, according to the first coordinate position, m reference pixel points, where the m reference pixel points are located in the source image, where m is a positive integer.
  • the terminal device may select m reference pixel points for the pixel to be interpolated in the source image according to the first coordinate position corresponding to the pixel to be interpolated in the source image.
  • the reference pixel is used for subsequent calculation of the pixel value of the point to be interpolated, and the selection of the reference pixel will be described in detail below, and details are not described herein again.
  • Step S106 The terminal device determines, according to a spherical distance between each of the coordinate positions of the m reference pixel points and the first coordinate position, an interpolation weight of each of the m reference pixel points to the pixel to be interpolated.
  • the coordinate position corresponding to the calculation of the spherical distance needs to be the coordinate position in the geographic coordinate system.
  • the coordinate position in the geographic coordinate system can be used to calculate the spherical distance of each of the two coordinate positions in the longitude direction and the latitude direction, which will be described in detail below.
  • Step S108: The terminal device determines the pixel value of the pixel to be interpolated according to the pixel value corresponding to each of the m reference pixel points and the interpolation weights of the m reference pixel points to the pixel to be interpolated, thereby obtaining the target image.
  • the steps S106-S108 may be performed repeatedly to calculate the pixel value of each pixel to be interpolated in the target image, so that the target image can be obtained.
  • In step S102, for any pixel point to be generated (that is, a pixel to be interpolated), the first coordinate position of the pixel to be interpolated in the source image may be determined according to the coordinate position of the pixel to be interpolated in the target image.
  • The source image is a curved surface image to be converted, or a planar image having a spherical image format to be converted.
  • the target image is an image generated by converting the source image.
  • the terminal device may use a preset image mapping relationship to map the coordinate position of the pixel to be interpolated in the target image into the source image, to obtain the first coordinate position corresponding to the pixel to be interpolated in the source image.
  • the image mapping relationship refers to a position mapping relationship between a target image and a source image, that is, an association relationship between a source image and a target image at a position of the same point.
  • the terminal device can directly calculate the coordinate position corresponding to the pixel to be interpolated in the source image according to the coordinate position of the pixel to be interpolated in the target image and the image mapping relationship.
  • the image mapping relationship may specifically refer to a position mapping relationship between a plane coordinate system of the target image and a plane coordinate system of the source image, or a position mapping between the plane coordinate system of the target image and the geographic coordinate system of the source image. Relationship, this application does not elaborate. If the image mapping relationship refers to a positional mapping relationship between the plane coordinate system of the target image and the plane coordinate system of the source image, the calculated first coordinate position is a coordinate position in the plane coordinate system.
  • the source image and the target image may be planar images having a spherical image format.
  • the image mapping relationship refers to a positional mapping relationship between the plane coordinate system of the target image and the geographic coordinate system of the source image
  • the calculated first coordinate position is a coordinate position in the geographic coordinate system.
  • the source image is a curved image and the target image is a planar image having a spherical image format.
  • the terminal device may first convert the coordinate position of the pixel to be interpolated in the plane coordinate system of the target image into a coordinate position in the geographic coordinate system according to the preset first coordinate mapping relationship. Further, according to the preset image mapping relationship, the coordinate position of the pixel to be interpolated in the geographic coordinate system in the target image is mapped into the source image to obtain the first coordinate corresponding to the pixel to be interpolated in the source image. position.
  • the image mapping relationship herein may refer to a positional relationship between a geographic coordinate system of the target image and a plane coordinate system of the source image, or may also refer to a position between the geographic coordinate system of the target image and the geographic coordinate system of the source image. relationship. If the image mapping relationship refers to a positional relationship between a geographic coordinate system of the target image and a plane coordinate system of the source image, the first coordinate position obtained by the corresponding calculation is a coordinate position in the plane coordinate system. Accordingly, the source image and the target image may be planar images having a spherical image format.
  • the image mapping relationship refers to a position mapping relationship between the geographic coordinate system of the target image and the geographic coordinate system of the source image
  • the first coordinate position obtained by the corresponding calculation is the coordinate position in the geographic coordinate system.
  • the source image is a curved image
  • the target image is a planar image having a spherical image format.
  • the first coordinate mapping relationship refers to a position mapping relationship between a plane coordinate system and a geographic coordinate system, that is, an association relationship between the same point in a plane coordinate system and a geographic coordinate system.
  • the association relationship may be customized on the user side or the system side, and is not described in detail in this application.
  • the first coordinate mapping relationship and the image mapping relationship may each be represented by a corresponding mapping function.
  • the mapping function corresponding to the first coordinate mapping relationship may be f1
  • the mapping function corresponding to the image mapping relationship may be f2, etc., which is not described in detail in this application.
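  • As an illustration of how f1 and f2 compose, consider the sketch below. Both function bodies are assumptions for illustration: f1 is shown for an ERP-style target layout, and f2, which depends on the source image's spherical format, is left abstract.

```python
import math

def f1(x_t, y_t, W1, H1):
    """Hypothetical first coordinate mapping f1: target-plane (x, y) ->
    geographic (longitude, latitude), shown for an ERP-style layout."""
    lam = (x_t / W1 - 0.5) * 2 * math.pi   # longitude in [-pi, pi]
    phi = (0.5 - y_t / H1) * math.pi       # latitude in [-pi/2, pi/2]
    return lam, phi

def f2(lam, phi):
    """Hypothetical image mapping f2: geographic coordinates of the target ->
    first coordinate position in the source image (format specific)."""
    raise NotImplementedError("depends on the source image's spherical format")

def to_source(x_t, y_t, W1, H1):
    """Compose f1 and f2 to obtain the first coordinate position."""
    return f2(*f1(x_t, y_t, W1, H1))
```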
  • the terminal device may use a preset image mapping relationship to map the coordinate position of the pixel to be interpolated in the target image to the source image, to obtain the first coordinate position corresponding to the pixel to be interpolated in the source image.
  • the terminal device can directly calculate the first coordinate position of the pixel to be interpolated in the source image according to the coordinate position of the pixel to be interpolated in the target image and the image mapping relationship.
  • the image mapping relationship herein may specifically refer to a mapping relationship between a geographic coordinate system of the target image and a plane coordinate system of the source image, and then the first coordinate position obtained by the corresponding calculation is a coordinate position in the plane coordinate system.
  • the source image may be a planar image having a spherical image format, and the target image is a curved image.
  • the image mapping relationship herein may specifically refer to a mapping relationship between the geographic coordinate system of the target image and the geographic coordinate system of the source image, and then the first coordinate position obtained by the corresponding calculation is the coordinate position in the geographic coordinate system.
  • Both the source image and the target image can be curved images.
  • the terminal device may first convert the coordinate position of the pixel to be interpolated in the geographic coordinate system of the target image into a coordinate position in the plane coordinate system according to the preset second coordinate mapping relationship. Further, the preset image mapping relationship is used to map the coordinate position of the pixel to be interpolated in the plane coordinate system in the target image to the source image, so as to obtain the first coordinate position corresponding to the pixel to be interpolated in the source image.
  • the image mapping relationship herein may specifically refer to a position mapping relationship between a plane coordinate system of the target image and a plane coordinate system of the source image, and the first coordinate position obtained by the corresponding calculation is a coordinate position in the plane coordinate system.
  • the source image may be a planar image having a spherical image format, and the target image is a curved image.
  • the image mapping relationship herein may specifically refer to a mapping relationship between the plane coordinate system of the target image and the geographic coordinate system of the source image, and then the first coordinate position obtained by the corresponding calculation is the coordinate position in the geographic coordinate system. Both the source image and the target image can be curved images.
  • the second coordinate mapping relationship refers to a position mapping relationship between a geographic coordinate system and a plane coordinate system, that is, an association relationship between the coordinate points of the same point in the geographic coordinate system and the plane coordinate system, which is not detailed in this application.
  • the first mapping relationship refers to a mapping from a coordinate position in a plane coordinate system to a coordinate position in a geographic coordinate system, and
  • the second mapping relationship refers to a mapping from a coordinate position in a geographic coordinate system to a coordinate position in a plane coordinate system;
  • neither mapping is detailed here.
  • the terminal device may further convert the first coordinate position in the plane coordinate system into a coordinate position in the geographic coordinate system by using the preset coordinate mapping relationship, for use in the calculation of S106.
  • the image resolution corresponding to each of the source image and the target image may be different.
  • the resolution of the target image is higher than the resolution of the source image.
  • image interpolation may refer to recovering lost information in an image in the process of generating a high resolution image from a low resolution image.
  • the algorithm used in the image interpolation process is referred to as an image interpolation algorithm, which will be described in detail below.
  • the spherical image formats corresponding to the source image and the target image may be different.
  • the image interpolation of the present application can be applied to image conversion in different spherical image formats. That is, using the image interpolation algorithm to perform image interpolation on the source image (specifically, the pixel points in the source image) in the first spherical image format, the target image in the second spherical image format can be generated/obtained.
  • the first spherical image format and the second spherical image format are different.
  • the spherical image format may refer to a format in which a device stores or transmits an image, which may include, but is not limited to, ERP (Equi-Rectangular Projection, also known as latitude-longitude mapping), CMP (Cube Map Projection), CPP (Craster Parabolic Projection), ACP (Adjusted Cube map Projection), COHP (Compact Octahedron Projection), CISP (Compact Icosahedral Projection), and other spherical image formats; this application is not limited in this regard.
  • the terminal device may select m reference pixel points around the first coordinate position, so as to calculate information about the pixel to be interpolated (such as its pixel value) from related information (such as the coordinate positions and pixel values) of the m reference pixel points. The m reference pixel points are all located in the source image.
  • the terminal device may obtain the m reference pixel points by sampling in the longitude direction and/or the latitude direction around the first coordinate position.
  • some of the m reference pixel points share the same ordinate or latitude coordinate, and/or some of the m reference pixel points share the same abscissa or longitude coordinate.
  • the terminal device may sample evenly around the first coordinate position along the longitude direction and/or the latitude direction to obtain the m reference pixel points.
  • the terminal device first needs to determine the longitude direction and/or the latitude direction according to the position mapping relationship between the coordinate system of the target image (specifically, a geographic coordinate system) and the plane coordinate system of the source image. Further, the terminal device samples uniformly along the longitude direction and/or the latitude direction around the first coordinate position to obtain the m reference pixel points.
  • the m reference pixel points are all pixel points in the source image.
  • the latitude coordinates (i.e., latitude values) corresponding to the coordinate positions of the source image in the longitude direction are unchanged. Accordingly, the longitude coordinate (ie, the longitude value) corresponding to the coordinate position of the source image in the latitude direction does not change.
  • the terminal device may uniformly sample along the longitude direction and the latitude direction according to the first coordinate position to obtain a*b (ie, m) reference pixel points.
  • a reference pixel point may be uniformly sampled along the longitude direction, and then b reference pixels may be uniformly sampled in the latitude direction for each of the a reference pixel points. That is, the a reference sample points are uniformly distributed in the longitude direction, and the corresponding latitude coordinates (latitude values) of the a reference pixel points are the same.
  • the b reference pixel points are also uniformly distributed in the latitude direction, and the corresponding longitude coordinates (longitude values) of the b reference pixel points are the same.
  • the color of the pixel is represented by brightness and chrominance during image conversion. Therefore, when selecting reference pixel points for the pixel to be interpolated, it can be considered from two dimensions of luminance and chrominance.
  • the latitude coordinates corresponding to each row of reference pixel points in the longitude direction are the same, and the longitude coordinates corresponding to each column of reference pixel points in the latitude direction are the same.
  • selecting the reference pixels in the source image may specifically be: selecting the nearest a2*b2 pixel points along the longitude direction and the latitude direction around the first coordinate position as the reference pixel points.
  • a1, a2, b1, and b2 may be constants set on the user side or the system side; they may be the same or different, and this application does not limit them.
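  • A minimal sketch of selecting an a*b grid of integer reference pixels around the first coordinate position (x, y) in the source plane; the function name and the clamp-to-border policy are illustrative assumptions:

```python
import math

def select_reference_grid(x, y, a, b, width, height):
    """Select an a x b grid of integer pixels nearest to the first
    coordinate position (x, y), clamped to the image bounds."""
    xs = [min(max(math.floor(x) - a // 2 + 1 + i, 0), width - 1) for i in range(a)]
    ys = [min(max(math.floor(y) - b // 2 + 1 + j, 0), height - 1) for j in range(b)]
    return [(xi, yj) for yj in ys for xi in xs]
```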
  • the terminal device may select a corresponding reference pixel region around the first coordinate position, where the first coordinate position is located in the reference pixel region.
  • m reference pixel points are selected from the reference pixel regions, that is, the reference pixel regions include m reference pixel points. The m reference pixel points are used to subsequently generate pixel points to be interpolated in the target image.
  • the selection of the reference pixel area specifically has the following possible implementation manners.
  • the terminal device selects the nearest designated area, centered on the first coordinate position, as the reference pixel area.
  • the designated area may be customized for the user side or the system side, and the size, shape, and the like of the designated area are not limited.
  • the designated area may be, for example, a circle of a specified radius or a region of a specified side length centered on the pixel to be interpolated, and the like.
  • the terminal device may select, as the reference pixel region, an area formed by intersecting two warp lines and two weft lines around the first coordinate position.
  • A schematic diagram of reference pixel region selection is shown in FIG. 4A: a region composed of two meridians and two latitude lines, centered on the first coordinate position, is selected as the reference pixel region.
  • the terminal device may select, as the reference pixel region, an area formed by the intersection of two sets of large circles around the first coordinate position.
  • each group of large circles includes two large circles, and the two large circles in each group share the same center.
  • Another schematic diagram of reference pixel region selection is shown in FIG. 4B: a region formed by the intersection of four large circles sharing the sphere's center, centered on the first coordinate position, is selected as the reference pixel region. Alternatively, such a region formed by the intersection of four large circles sharing the sphere's center may be selected at random around the first coordinate position as the reference pixel region.
  • the terminal device may select m (that is, a*b) reference pixel points from the reference pixel area, so as to subsequently calculate the information (such as the pixel value) of the pixel to be interpolated from the information (such as the coordinate positions and pixel values) of the reference pixel points.
  • the terminal device may uniformly sample from the reference pixel region according to the first coordinate position corresponding to the pixel to be interpolated in the source image to obtain m reference pixel points.
  • For details on how the m reference pixel points are obtained, refer to the related description in the foregoing embodiment; details are not repeated here.
  • the terminal device selects/determines the m reference pixel points for the pixel to be interpolated; there is also another implementation: the terminal device may select n pixel points around the pixel to be interpolated based on the coordinate position of the pixel to be interpolated in the target image, the n pixel points being located in the target image, where n is a positive integer greater than or equal to m. Further, the coordinate positions of the n pixel points corresponding to the source image are determined according to the coordinate positions of the n pixel points in the target image. Then, the m reference pixel points are determined according to the coordinate positions corresponding to the n pixel points in the source image.
  • the terminal device selects a method for selecting n pixel points around the pixel to be interpolated, which is not limited in this application.
  • the terminal device may randomly select n pixel points around the pixel to be interpolated.
  • n pixels are uniformly sampled in a fixed step around the pixel to be interpolated to obtain n pixel points.
  • assuming the coordinate position of the pixel to be interpolated is (x, y),
  • the coordinate position of a selected pixel may be (x + k_0·Δx, y + k_0·Δy), where
  • k_0 is a custom value, such as +1, -1, +2, -2, and so on;
  • Δx is the increment in the x direction (or a fixed sampling step size);
  • Δy is the increment in the y direction (or a fixed sampling step size).
  • the terminal device may map the coordinate positions corresponding to the n pixel points in the target image to the source image according to a preset image mapping relationship, to obtain coordinates of the n pixel points corresponding to the source image. position. Further, the terminal device may select m reference pixel points from the pixel points in the source image corresponding to the n pixel points according to a setting rule.
  • the setting rule is customized on the user side or the system side. For example, the terminal device may apply a set function operation, such as a rounding-down floor operation or a rounding-up ceil operation, to the coordinate positions corresponding to the n pixel points in the source image, to correspondingly obtain the m reference pixel points and their coordinate positions in the source image.
  • assuming a certain pixel point of the n pixel points corresponds to the coordinate position (x_1, y_1) in the source image,
  • a reference pixel point can be obtained correspondingly, and
  • the coordinate position of the reference pixel point is (floor(x_1), floor(y_1)).
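  • A minimal sketch of this derivation, assuming a caller-supplied target-to-source mapping function (map_to_source is a hypothetical name) and the floor setting rule described above:

```python
import math

def derive_reference_pixels(neighbors, map_to_source):
    """Map each target-image neighbor into the source image and snap it to
    an integer pixel with floor, de-duplicating the results."""
    refs = set()
    for (xt, yt) in neighbors:
        x1, y1 = map_to_source(xt, yt)              # position in the source image
        refs.add((math.floor(x1), math.floor(y1)))  # floor-based setting rule
    return sorted(refs)
```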
  • the reference pixel points referred to in this application may be integer pixels.
  • the pixel to be interpolated may be a sub-pixel point or an integer pixel point.
  • the terminal device may determine, according to the spherical distance between the coordinate positions of the m reference pixel points and the first coordinate position, the interpolation of the m reference pixel points to the pixel to be interpolated. Weights.
  • the terminal device may first determine a unit distance according to the respective coordinate positions of the m reference pixel points; and further calculate, according to the unit distance and the spherical distances between the coordinate positions of the m reference pixel points and the first coordinate position, the interpolation weights of the m reference pixel points to the pixel to be interpolated.
  • the interpolation weights for the unit distance and the reference pixel point to the pixel to be interpolated will be described in detail below in the present application.
  • the unit distance includes a first unit distance and a second unit distance.
  • the first unit distance is a distance between a first reference pixel point and a second reference pixel point in the longitude direction.
  • the first reference pixel point and the second reference pixel point may be the two reference pixel points, among the m reference pixel points, nearest in the longitude direction to the first coordinate position (specifically, to the longitude coordinate corresponding to the first coordinate position).
  • the latitude coordinates corresponding to the first reference pixel point and the second reference pixel point may be the same or different.
  • the second unit distance is a distance between the third reference pixel point and the fourth reference pixel point in the latitude direction.
  • the third reference pixel point and the fourth reference pixel point may be the two reference pixel points, among the m reference pixel points, nearest in the latitude direction to the first coordinate position (specifically, to the latitude coordinate corresponding to the first coordinate position).
  • the longitude coordinates corresponding to the third reference pixel point and the fourth reference pixel point may be the same or different.
  • the first reference pixel, the second reference pixel, the third reference pixel, and the fourth reference pixel may be the same or different, and the present application is not limited thereto.
  • the spherical distance includes a first spherical distance and a second spherical distance.
  • the first spherical distance is a spherical distance between a coordinate position of any of the reference pixel points in the longitude direction and the first coordinate position.
  • the second spherical distance is a spherical distance between a coordinate position of any of the reference pixel points in the latitude direction and the first coordinate position.
  • the terminal device may weight and sum the pixel values corresponding to the m reference pixel points with the interpolation weights of the m reference pixel points to the pixel to be interpolated, thereby obtaining the pixel value of the pixel to be interpolated.
  • the terminal device may calculate the pixel value of the pixel to be interpolated by using the following formula (1):
  • P_o is the pixel value of the pixel to be interpolated.
  • P_ij is the pixel value of any one of the m reference pixel points (e.g., a target reference pixel point).
  • L(λ_ij, φ_ij) is the interpolation weight of that reference pixel point (target reference pixel point) to the pixel to be interpolated.
  • a is the number of reference pixel points obtained by sampling in the longitude direction.
  • b is the number of reference pixel points obtained by sampling in the latitude direction.
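  • Formula (1) is not reproduced in this text, but from the surrounding description (a weighted sum over the a*b reference pixels) it plausibly has the form P_o = Σ_{i=1..a} Σ_{j=1..b} L(λ_ij, φ_ij) · P_ij. A minimal sketch, with the weights computed elsewhere:

```python
def interpolate_pixel(ref_values, ref_weights):
    """Weighted sum over the a*b reference pixels: ref_values[i][j] is P_ij
    and ref_weights[i][j] is L(lambda_ij, phi_ij)."""
    return sum(w * p
               for vals_row, w_row in zip(ref_values, ref_weights)
               for p, w in zip(vals_row, w_row))
```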
  • a related embodiment involved in determining the interpolation weights of the m reference pixel points for the pixel to be interpolated in S106 is set forth below. For details, refer to FIG. 5, which includes the following implementation steps:
  • Step S202: The terminal device calculates the first unit distance in the longitude direction according to the coordinate positions of the two reference pixel points, among the m reference pixel points, that are closest to the first coordinate position in the longitude direction.
  • two reference pixel points A and B closest to the pixel to be interpolated, that is, the first reference pixel point and the second reference pixel point described above, may be selected from the m reference pixel points. Further, the first unit distance is calculated based on the respective coordinate positions of the reference pixel points A and B.
  • A schematic diagram of reference pixel points is shown in FIG. 6A, where
  • O is the first coordinate position corresponding to the pixel to be interpolated in the source image, and
  • A and B are the two reference pixel points closest to the first coordinate position O in the longitude direction;
  • the coordinate position of the reference pixel point A is (λ_A, φ_A);
  • the coordinate position of the reference pixel point B is (λ_B, φ_B);
  • the first coordinate position is (λ, φ).
  • the terminal device can calculate the first unit distance Ud_λ in the longitude direction by using the following formula (2):
  • R is the radius of the spherical surface corresponding to the source image;
  • λ is the longitude coordinate and φ is the latitude coordinate.
  • Step S204: The terminal device calculates the second unit distance in the latitude direction according to the coordinate positions of the two reference pixel points, among the m reference pixel points, that are closest to the first coordinate position in the latitude direction.
  • two reference pixel points C and D closest to the pixel to be interpolated, that is, the third reference pixel point and the fourth reference pixel point described above, may be selected from the m reference pixel points. Then, the second unit distance is calculated according to the respective coordinate positions of the reference pixel points C and D.
  • Another schematic diagram of reference pixel points is shown in FIG. 6B, where
  • O is the first coordinate position corresponding to the pixel to be interpolated in the source image, and
  • C and D are the two reference pixel points closest to the first coordinate position O in the latitude direction;
  • the coordinate position of the reference pixel point C is (λ_C, φ_C);
  • the coordinate position of the reference pixel point D is (λ_D, φ_D);
  • the first coordinate position is (λ, φ).
  • the terminal device can calculate the second unit distance Ud_φ in the latitude direction by using the following formula (3):
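  • Formulas (2) and (3) are not reproduced in this text. A minimal sketch, assuming the unit distances are arc lengths measured along the parallel at the latitude φ of the first coordinate position and along a meridian, respectively (this specific form is an assumption, not the patent's verbatim formula):

```python
import math

R = 1.0  # radius of the spherical surface corresponding to the source image

def unit_distance_longitude(lam_a, lam_b, phi):
    """Assumed form of formula (2): arc length between A and B along the
    parallel at the latitude phi of the first coordinate position."""
    return R * math.cos(phi) * abs(lam_a - lam_b)

def unit_distance_latitude(phi_c, phi_d):
    """Assumed form of formula (3): arc length between C and D along a
    meridian (independent of longitude)."""
    return R * abs(phi_c - phi_d)
```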
  • Step S206: The terminal device calculates, for each of the m reference pixel points, a first spherical distance between the reference pixel point's coordinate position and the first coordinate position in the longitude direction, and a second spherical distance between the reference pixel point's coordinate position and the first coordinate position in the latitude direction.
  • the terminal device calculates a spherical distance between each of the coordinate positions of the m reference pixel points and the first coordinate position from the longitude direction and the latitude direction, respectively. That is, the spherical distance may include a first spherical distance in the longitude direction and a second spherical distance in the latitudinal direction, as will be described in detail below.
  • Step S208: The terminal device determines, according to the first unit distance and the first spherical distances between the coordinate positions of the m reference pixel points and the first coordinate position, the first weight components of the m reference pixel points to the pixel to be interpolated.
  • the image interpolation algorithm may be customized for the user side or the system side, and may include, but is not limited to, a Lanczos interpolation algorithm, a bilinear interpolation algorithm, a cubic convolution interpolation algorithm, a nearest neighbor interpolation algorithm, a piecewise linear interpolation algorithm, Or other interpolation algorithms, etc., this application is not limited.
  • Step S210: The terminal device determines, according to the second unit distance and the second spherical distances between the coordinate positions of the m reference pixel points and the first coordinate position, the second weight components of the m reference pixel points to the pixel to be interpolated.
  • Step S212: The terminal device determines, according to the first weight component and the second weight component of each reference pixel point to the pixel to be interpolated, the interpolation weight of each of the m reference pixel points to the pixel to be interpolated.
  • the target reference pixel point is any one of the m reference pixel points.
  • assume that the coordinate position of the target reference pixel point in the source image is (λ_ij, φ_ij), and the first coordinate position is (λ, φ).
  • the terminal device may calculate, by using the following formula (4), the first spherical distance between the coordinate position of the target reference pixel point and the first coordinate position in the longitude direction, and the second spherical distance between the coordinate position of the target reference pixel point and the first coordinate position in the latitude direction.
  • the terminal device can use the first unit distance Ud_λ and the first spherical distance to calculate, by an image interpolation algorithm, the first weight component of the target reference pixel point in the longitude direction to the pixel to be interpolated; illustratively, it can be calculated by the following formula (5).
  • the terminal device can use the second unit distance Ud_φ and the second spherical distance to calculate, by the image interpolation algorithm, the second weight component of the target reference pixel point in the latitude direction to the pixel to be interpolated; illustratively, it can be calculated by the following formula (6).
  • the terminal device may calculate the interpolation weight L(λ_ij, φ_ij) of the target reference pixel point to the pixel to be interpolated according to the first weight component in the longitude direction and the second weight component in the latitude direction.
  • Specifically, the terminal device may process the first weight component and the second weight component according to a set operation rule to obtain the corresponding interpolation weight. The set operation rule is an operation rule custom-set on the user side or the system side, such as addition or multiplication. Illustratively, taking the multiplication operation as an example, the terminal device can obtain L(φ_ij, λ_ij) by using the following formula (7):

    L(φ_ij, λ_ij) = w_φ(ij) · w_λ(ij)        (7)
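The chain from distances to the final weight can be sketched as follows (a minimal sketch; `kernel` stands for whichever interpolation algorithm δ is configured, and evaluating it on the spherical distance normalized by the unit distance is an assumed reading of formulas (5) and (6)):

```python
def interpolation_weight(d_phi, d_lam, ud_phi, ud_lam, kernel):
    """Two-dimensional interpolation weight of one reference pixel point.

    kernel: a 1-D interpolation kernel delta, e.g. a Lanczos or cubic
    convolution kernel, evaluated on a normalized distance.
    """
    w_phi = kernel(d_phi / ud_phi)  # first weight component, formula (5)
    w_lam = kernel(d_lam / ud_lam)  # second weight component, formula (6)
    return w_phi * w_lam            # multiplication rule, formula (7)
```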
  • By implementing the embodiments of the present invention, the degradation of image interpolation performance and efficiency caused by performing image processing on non-planar (curved-surface) images with planar image interpolation algorithms in the prior art can be avoided, thereby effectively improving the performance and efficiency of non-planar image interpolation.
  • First embodiment: converting a source image in CPP format into a target image in ERP format
  • In this embodiment, referring to FIG. 7, the image interpolation method includes the following implementation steps:
  • S11: For any pixel to be interpolated in the target image, select m reference pixel points for it in the source image. In this embodiment, the coordinate position of the pixel to be interpolated in the target image is (m_0, n_0), which may be a coordinate position in the geographic coordinate system or a coordinate position in the planar coordinate system. It is assumed here that (m_0, n_0) is a coordinate position in the planar coordinate system, where m_0 is the abscissa and n_0 is the ordinate.
  • The terminal device can convert (m_0, n_0) into a coordinate position (φ_1, λ_1) in the geographic coordinate system according to a preset coordinate mapping relationship; that is, expressed in geographic coordinates, the coordinate position of the pixel to be interpolated in the target image is (φ_1, λ_1).
  • Illustratively, the following formula (8) gives the geographic coordinate position of the pixel to be interpolated in the target image:

    φ_1 = (m_0 + ε) · 2π / W_1 − π
    λ_1 = π/2 − (n_0 + ε) · π / H_1        (8)
  • where W_1 is the width of the target image and H_1 is its height; φ_1 is the longitude coordinate, with a value range of [−π, π]; λ_1 is the latitude coordinate, with a value range of [−π/2, π/2]; and ε is a custom constant representing the offset of the coordinate, with a value range of [0, 1). Typically, ε is 0 or 0.5.
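This pixel-to-geographic conversion can be sketched as follows (a minimal sketch of formula (8); the sign conventions, longitude increasing to the right and latitude decreasing downward, are assumptions consistent with the stated value ranges):

```python
import math

def erp_pixel_to_geographic(m0, n0, W1, H1, eps=0.5):
    """Map a pixel (m0, n0) of a W1 x H1 ERP target image to geographic
    coordinates (phi1, lam1), phi1 in [-pi, pi], lam1 in [-pi/2, pi/2]."""
    phi1 = (m0 + eps) * 2.0 * math.pi / W1 - math.pi   # longitude coordinate
    lam1 = math.pi / 2.0 - (n0 + eps) * math.pi / H1   # latitude coordinate
    return phi1, lam1
```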
  • Next, the terminal device maps the pixel to be interpolated (φ_1, λ_1) in the target image to a coordinate position in the source image according to the preset image mapping relationship; that is, it obtains the first coordinate position (x, y) corresponding to the pixel to be interpolated in the source image.
  • The image mapping relationship here refers to the position mapping relationship between the target image in ERP format and the source image in CPP format, that is, the association between the positions of the same point in the ERP target image and in the CPP source image.
  • The first coordinate position may be a coordinate position in the geographic coordinate system or a coordinate position in the planar coordinate system. It is assumed here to be a coordinate position in the planar coordinate system, where x is the abscissa and y is the ordinate.
  • Illustratively, the first coordinate position (x, y) corresponding to the pixel to be interpolated (φ_1, λ_1) in the source image is given by formula (9), where H is the height of the source image and W is the width of the source image.
  • the terminal device may uniformly sample along the longitude direction and/or the latitude direction around the first coordinate position, thereby obtaining m reference pixel points selected for the pixel to be interpolated.
  • Specifically, the terminal device can rearrange formula (9) to obtain formula (10); in this embodiment, W = 2H is assumed.
  • Since formula (9) shows that the ordinate y of the first coordinate position is related to the latitude coordinate λ_1, in this example the terminal device can calculate the latitude direction according to the position mapping relationship between the geographic coordinate system and the planar coordinate system of the source image. Specifically, the terminal device calculates the derivative of y with respect to x according to the position mapping relationship shown in formula (10); that is, the slope direction through the pixel to be interpolated (φ_1, λ_1) is the latitude direction, and formula (11) gives its calculation.
  • the terminal device uniformly samples around the first coordinate position along the slope direction (latitude direction) to obtain corresponding m reference pixel points.
  • The coordinate position (x_ij, y_ij) of each reference pixel obtained by uniform sampling is shown in formula (12); in particular, the row ordinates satisfy y_ij = floor(y) + Δy_i.
  • where dx_i represents the offset of the center abscissa of the i-th row of reference pixels relative to the abscissa of the pixel to be interpolated; dy_i represents the offset of the ordinate of the i-th row of reference pixels relative to the ordinate of the pixel to be interpolated; Δy_i represents the increment in the direction of the ordinate; and Δx_j represents the increment in the direction of the abscissa, with i ∈ {1, 2, ..., a}, j ∈ {1, 2, ..., b}, and a*b = m. Typically, for the luminance component of the image, a = b = 6 with Δy_i, Δx_j ∈ {−2, −1, 0, 1, 2, 3}; for the chrominance component, a = b = 4 with Δy_i, Δx_j ∈ {−1, 0, 1, 2}.
  • The floor function rounds down. Optionally, in this embodiment of the present application, the floor function may also be replaced by the ceil function, which rounds up; the round function rounds to the nearest integer value.
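The sampling of formula (12) can be sketched as follows (a minimal sketch; shifting each row's center abscissa by `slope_dx_dy * (y_ij - y)` is an assumed reading of the offset dx_i along the latitude direction of formula (11), and the default delta set corresponds to the luminance case):

```python
import math

def sample_reference_grid(x, y, slope_dx_dy, deltas=(-2, -1, 0, 1, 2, 3)):
    """Uniformly sample integer reference pixels around the first coordinate
    position (x, y), row by row along the latitude (slope) direction.

    slope_dx_dy: dx/dy of the constant-longitude curve through (x, y).
    Returns a list of rows of (x_ij, y_ij) integer positions.
    """
    grid = []
    for dy in deltas:
        y_ij = math.floor(y) + dy                 # y_ij = floor(y) + dy_i, formula (12)
        x_center = x + slope_dx_dy * (y_ij - y)   # assumed per-row centre shift dx_i
        grid.append([(math.floor(x_center) + dx, y_ij) for dx in deltas])
    return grid
```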
  • S12: Calculate the spherical distances between the first coordinate position and the coordinate position of each reference pixel point, in the longitude direction and in the latitude direction, respectively; the first coordinate position is the coordinate position corresponding to the pixel to be interpolated in the source image. Since these are spherical distances, the coordinate positions used in the calculation need to be coordinate positions in the geographic coordinate system: when the first coordinate position or the coordinate position of a reference pixel point is a coordinate position in the planar coordinate system, it is first converted into the corresponding coordinate position in the geographic coordinate system, and the spherical distances in the longitude direction and in the latitude direction are then calculated.
  • the terminal device may convert the first coordinate position (x, y) into a coordinate position ( ⁇ , ⁇ ) in the geographic coordinate system according to a preset coordinate mapping relationship.
  • Specifically, the following formula (13) gives the coordinate position in the geographic coordinate system corresponding to the first coordinate position (x, y) in the above formula (9).
  • Correspondingly, the terminal device can convert each reference pixel point (x_ij, y_ij) in the source image into a coordinate position (φ_ij, λ_ij) in the geographic coordinate system according to a preset coordinate mapping relationship, as shown in formula (14).
  • Further, the terminal device can calculate, according to the coordinate positions (φ_ij, λ_ij) of the reference pixel points and the first coordinate position (φ, λ) in the geographic coordinate system, the first spherical distance of each reference pixel point in the longitude direction and the second spherical distance of each reference pixel point in the latitude direction, as shown in formula (15).
  • S13: Calculate the unit distances in the longitude direction and in the latitude direction. Along the longitude direction, the terminal device may select, from the m reference pixel points, the two reference pixel points A and B closest to the first coordinate position, and further calculate the difference between the longitude coordinates of these two reference pixel points to obtain the first unit distance Ud_φ in the longitude direction.
  • Correspondingly, along the latitude direction, the two reference pixel points C and D closest to the first coordinate position are selected from the m reference pixel points, and the difference between the latitude coordinates of these two reference pixel points is calculated to obtain the second unit distance Ud_λ in the latitude direction, as shown in the following formula (16):

    Ud_φ = |φ_A − φ_B| · R · cos λ_A
    Ud_λ = |λ_C − λ_D| · R        (16)
  • where the coordinate position of the reference pixel point A is (φ_A, λ_A), that of the reference pixel point B is (φ_B, λ_B), that of the reference pixel point C is (φ_C, λ_C), and that of the reference pixel point D is (φ_D, λ_D); φ denotes the longitude coordinate, λ denotes the latitude coordinate, and R is the radius of the sphere corresponding to the source image (on the unit sphere, R = 1).
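A sketch of formula (16) follows (the nearest-two selection below simply sorts all reference points by coordinate difference, which is a simplification; on the sampling grid, A/B should come from adjacent columns and C/D from adjacent rows so that their longitude and latitude coordinates actually differ):

```python
import math

def unit_distances(ref_points, phi, lam, R=1.0):
    """Compute the first and second unit distances of formula (16).

    ref_points: list of (phi_ij, lam_ij) geographic coordinates of the
    m reference pixel points; (phi, lam): the first coordinate position.
    """
    by_lon = sorted(ref_points, key=lambda p: abs(p[0] - phi))  # closest longitudes -> A, B
    by_lat = sorted(ref_points, key=lambda p: abs(p[1] - lam))  # closest latitudes  -> C, D
    (phi_a, lam_a), (phi_b, _) = by_lon[0], by_lon[1]
    (_, lam_c), (_, lam_d) = by_lat[0], by_lat[1]
    ud_phi = abs(phi_a - phi_b) * R * math.cos(lam_a)  # first unit distance Ud_phi
    ud_lam = abs(lam_c - lam_d) * R                    # second unit distance Ud_lam
    return ud_phi, ud_lam
```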
  • For example, FIGS. 8A and 8B show the 6*6 and 4*4 reference pixel points selected for the luminance component and the chrominance component of the image, respectively.
  • As shown in FIG. 8A, the two reference pixel points closest to the first coordinate position O along the longitude direction are the pixel point in the third row and third column and the pixel point in the third row and fourth column, and the two reference pixel points closest to O along the latitude direction are the pixel point in the third row and third column and the pixel point in the fourth row and third column. The first unit distance Ud_φ in the longitude direction and the second unit distance Ud_λ in the latitude direction in FIG. 8A are then as shown in the following formula (17):

    Ud_φ = |φ_33 − φ_34| · R · cos λ_3
    Ud_λ = |λ_3 − λ_4| · R        (17)
  • Correspondingly, as shown in FIG. 8B, the two reference pixel points closest to the first coordinate position O along the longitude direction are the pixel point in the second row and second column and the pixel point in the second row and third column, and the two reference pixel points closest to O along the latitude direction are the pixel point in the second row and second column and the pixel point in the third row and second column. The corresponding unit distances are given by formula (18):

    Ud_φ = |φ_22 − φ_23| · R · cos λ_2
    Ud_λ = |λ_2 − λ_3| · R        (18)
  • S14: Calculate the interpolation weight of each reference pixel point for the pixel to be interpolated. The terminal device may calculate the first weight component of each reference pixel point for the pixel to be interpolated in the longitude direction according to the image interpolation algorithm, the calculated first unit distance in the longitude direction, and the first spherical distance.
  • Correspondingly, the terminal device may calculate the second weight component of each reference pixel point for the pixel to be interpolated in the latitude direction according to the image interpolation algorithm, the calculated second unit distance in the latitude direction, and the second spherical distance. Illustratively, taking the Lanczos algorithm as the image interpolation algorithm, formulas (19) and (20) give the weight components of a reference pixel point (φ_ij, λ_ij) for the pixel to be interpolated (φ, λ) along the longitude direction and the latitude direction, respectively, where c_1 and c_2 are the half-window sizes used when sampling the reference pixel points: c_1 = c_2 = 3 when sampling based on the luminance component of the image, and c_1 = c_2 = 2 when sampling based on the chrominance component.
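Since the formula images for (19) and (20) are not reproduced in this excerpt, the sketch below assumes the standard Lanczos window evaluated on the spherical distance normalized by the unit distance:

```python
import math

def lanczos(s, c):
    """Standard Lanczos kernel: sinc(s) * sinc(s / c) for |s| < c, else 0."""
    if abs(s) >= c:
        return 0.0
    if s == 0.0:
        return 1.0
    x = math.pi * s
    return (math.sin(x) / x) * (math.sin(x / c) * c / x)

def lanczos_weight_components(d_phi, d_lam, ud_phi, ud_lam, c=3):
    """First (longitude) and second (latitude) weight components;
    c = 3 for the luminance component, c = 2 for the chrominance component."""
    return lanczos(d_phi / ud_phi, c), lanczos(d_lam / ud_lam, c)
```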
  • After the weight components of a reference pixel point along the longitude direction and the latitude direction are obtained, the interpolation weight (that is, the two-dimensional interpolation weight) L(φ_ij, λ_ij) of the reference pixel point for the pixel to be interpolated can be calculated. Illustratively, formula (21) gives the calculation of L(φ_ij, λ_ij), which, as in formula (7), multiplies the two weight components.
  • S15: Calculate the pixel value of the pixel to be interpolated. The terminal device can calculate the pixel value of the pixel to be interpolated according to the pixel value of each reference pixel point and the interpolation weight of each reference pixel point for the pixel to be interpolated, as shown in the following formula (22):

    P_o = Σ_{i=1..a} Σ_{j=1..b} L(φ_ij, λ_ij) · P_ij        (22)
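A sketch of this weighted sum (whether the patent additionally renormalizes the weights to sum to 1, as is common with windowed kernels, is not visible in this excerpt, so the sketch keeps the plain sum):

```python
def interpolate_pixel_value(values, weights):
    """P_o = sum_i sum_j L(phi_ij, lam_ij) * P_ij over the a*b reference pixels.

    values[i][j]:  pixel value P_ij of reference pixel (i, j)
    weights[i][j]: interpolation weight L(phi_ij, lam_ij)
    """
    return sum(w * v
               for v_row, w_row in zip(values, weights)
               for v, w in zip(v_row, w_row))
```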
  • Second embodiment: converting a source image in CMP format into a target image in ERP format
  • In this embodiment, the image interpolation method may include the following implementation steps:
  • S21: For any pixel to be interpolated in the target image, select m reference pixel points for it in the source image. In this embodiment, it is assumed that the coordinate position of the pixel to be interpolated in the target image in the geographic coordinate system is (φ_1, λ_1).
  • the terminal device may map the coordinate position of the pixel to be interpolated in the target image to the coordinate position in the source image according to the preset image mapping relationship, that is, the pixel to be interpolated corresponds to the first coordinate position in the source image.
  • Illustratively, the first coordinate position (x, y) corresponding to the pixel to be interpolated (φ_1, λ_1) in the source image is given by formula (23), where H is the height of the source image and W is the width of the source image.
  • the terminal device may uniformly sample along the longitude direction and/or the latitude direction around the first coordinate position, thereby obtaining m reference pixel points selected for the pixel to be interpolated.
  • Specifically, the terminal device can rearrange formula (23) to obtain formula (24); in this embodiment, W = H is assumed.
  • Since formula (23) shows that the abscissa x of the first coordinate position is related to the longitude coordinate φ_1, in this example the terminal device can calculate the longitude direction according to the position mapping relationship between the geographic coordinate system and the planar coordinate system of the source image. Specifically, the terminal device calculates the derivative of y with respect to x according to the position mapping relationship shown in formula (24); that is, the slope direction through the pixel to be interpolated (φ_1, λ_1) is the longitude direction, and formula (25) gives the calculation of the slope.
  • the terminal device uniformly samples around the first coordinate position along the slope direction (longitude direction) to obtain corresponding m reference pixel points.
  • Formula (26) gives the coordinate position (x_ij, y_ij) of each reference pixel obtained by uniform sampling; in particular, the column abscissas satisfy x_ij = floor(x) + Δx_j. For the parameters involved in formula (26), refer to the description of formula (12).
  • S22: Calculate the spherical distances between the first coordinate position and the coordinate position of each reference pixel point, in the longitude direction and in the latitude direction, respectively. In this embodiment, the terminal device may convert the first coordinate position (x, y) into a coordinate position (φ, λ) in the geographic coordinate system according to a preset coordinate mapping relationship; the following formula (27) gives the coordinate position in the geographic coordinate system corresponding to the first coordinate position (x, y) in the above formula (23).
  • Correspondingly, the terminal device can convert each reference pixel point (x_ij, y_ij) in the source image into a coordinate position (φ_ij, λ_ij) in the geographic coordinate system, as shown in formula (28).
  • Further, the terminal device can calculate, according to the reference pixel points (φ_ij, λ_ij) and the first coordinate position (φ, λ) in the geographic coordinate system, the first spherical distance of each reference pixel point in the longitude direction and the second spherical distance in the latitude direction. The remaining steps S23–S25, calculating the unit distances, the interpolation weights, and the pixel value of the pixel to be interpolated, may refer to the related descriptions of the foregoing S13–S15; details are not described here again.
  • Third embodiment: converting a source image in low-resolution ERP format into a target image in high-resolution ERP format
  • the image interpolation method may include the following implementation steps:
  • S31: For any pixel to be interpolated in the target image, select m reference pixel points for it in the source image. In this embodiment, the coordinate position of the pixel to be interpolated in the target image is (m_0, n_0), which may be a coordinate position in the geographic coordinate system or a coordinate position in the planar coordinate system. It is assumed here that (m_0, n_0) is a coordinate position in the planar coordinate system, where m_0 is the abscissa and n_0 is the ordinate.
  • The terminal device can convert (m_0, n_0) into a coordinate position (φ_1, λ_1) in the geographic coordinate system according to a preset coordinate mapping relationship; that is, expressed in geographic coordinates, the coordinate position of the pixel to be interpolated in the target image is (φ_1, λ_1).
  • Illustratively, the following formula (29) gives the geographic coordinate position of the pixel to be interpolated in the target image; it takes the same form as formula (8):

    φ_1 = (m_0 + ε) · 2π / W_1 − π
    λ_1 = π/2 − (n_0 + ε) · π / H_1        (29)
  • where W_1 is the width of the target image and H_1 is its height; φ_1 is the longitude coordinate, with a value range of [−π, π]; λ_1 is the latitude coordinate, with a value range of [−π/2, π/2]; and ε is a custom constant representing the offset of the coordinate, with a value range of [0, 1). Typically, ε is 0 or 0.5.
  • Next, the terminal device maps the pixel to be interpolated (φ_1, λ_1) in the target image to a coordinate position in the source image according to the preset image mapping relationship; that is, it obtains the first coordinate position corresponding to the pixel to be interpolated in the source image.
  • The first coordinate position may be a coordinate position in the geographic coordinate system or a coordinate position in the planar coordinate system; it is assumed here to be a coordinate position in the planar coordinate system, where x is the abscissa and y is the ordinate.
  • Illustratively, the coordinate position (x, y) corresponding to the pixel to be interpolated (φ_1, λ_1) in the source image is given by formula (30), where H is the height of the source image and W is the width of the source image.
  • the terminal device may uniformly sample along the longitude direction and/or the latitude direction around the first coordinate position, thereby obtaining m reference pixel points selected for the pixel to be interpolated.
  • In this embodiment, since the source image and the target image are both images of the same spherical image format (ERP), when sampling the reference pixel points, the terminal device may sample uniformly directly along the horizontal direction (that is, the longitude direction) and the vertical direction (that is, the latitude direction) of the coordinate system to obtain the m reference pixel points. Illustratively, as shown in FIG. 9, for the luminance component of the image, uniform sampling yields 6*6 reference pixel points.
  • Specifically, the coordinate position (x_ij, y_ij) of each reference pixel obtained by uniform sampling is given by the following formula (31); for the parameters involved, refer to the description of formula (12):

    x_ij = floor(x) + Δx_j
    y_ij = floor(y) + Δy_i        (31)
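Because both images are in ERP format, the grid of formula (31) is axis-aligned; a minimal sketch:

```python
import math

def erp_reference_grid(x, y, deltas=(-2, -1, 0, 1, 2, 3)):
    """6*6 axis-aligned sampling around (x, y): x_ij = floor(x) + dx_j,
    y_ij = floor(y) + dy_i (luminance case; use (-1, 0, 1, 2) for chroma)."""
    return [[(math.floor(x) + dx, math.floor(y) + dy) for dx in deltas]
            for dy in deltas]
```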
  • S32: Calculate the spherical distances between the first coordinate position and the coordinate position of each reference pixel point, in the longitude direction and in the latitude direction, respectively. In this embodiment, the terminal device may convert the first coordinate position (x, y) in the planar coordinate system into the coordinate position (φ, λ) in the geographic coordinate system according to a preset coordinate mapping relationship; the following formula (32) gives the coordinate position in the geographic coordinate system corresponding to the first coordinate position (x, y) in the above formula (30).
  • Correspondingly, the terminal device can convert each reference pixel point (x_ij, y_ij) in the source image into a coordinate position (φ_ij, λ_ij) in the geographic coordinate system, as shown in formula (33).
  • Further, the terminal device can calculate, according to the coordinate positions (φ_ij, λ_ij) of the reference pixel points and the first coordinate position (φ, λ) in the geographic coordinate system, the first spherical distance of each reference pixel point in the longitude direction and the second spherical distance in the latitude direction.
  • For steps S33–S35, calculating the unit distances, the interpolation weights, and the pixel value of the pixel to be interpolated, refer to the related descriptions of the foregoing S13–S15; details are not described here again.
  • Fourth embodiment: converting a source image in CPP format into a target image in ERP format based on the bilinear interpolation algorithm
  • In this embodiment, the image interpolation method may include the following implementation steps:
  • S41: For any pixel to be interpolated in the target image, select m reference pixel points for it in the source image. For certain image interpolation algorithms, the reference pixel points to be selected correspond to the algorithm by default. For example, for the bilinear interpolation algorithm, the 2*2 pixel points around the first coordinate position are generally selected as reference pixel points; for the cubic convolution interpolation algorithm, the 4*4 pixel points around the first coordinate position are usually selected as reference pixel points. The first coordinate position is the coordinate position corresponding to the pixel to be interpolated in the source image.
  • In this embodiment, the bilinear interpolation algorithm is taken as an example, and a schematic diagram of reference pixel point selection is shown in FIG. 10A: 2*2 pixel points around the first coordinate position are selected as the reference pixel points (φ_ij, λ_ij), specifically the pixel points A, B, D, and E in the figure, where i = 1, 2 and j = 1, 2.
  • Steps S42 and S43, calculating the spherical distances and the unit distances, are as described above. In step S44, referring to FIG. 10A, the terminal device can obtain a pixel point C by weighting the reference pixel points A and B; the longitude coordinate of the pixel point C is the same as that of the first coordinate position, and the latitude coordinate of the pixel point C is the same as that of the pixel points A/B. Likewise, a pixel point F can be obtained by weighting the reference pixel points D and E; the longitude coordinate of the pixel point F is the same as that of the first coordinate position, and the latitude coordinate of the pixel point F is the same as that of the pixel points D/E.
  • Correspondingly, the terminal device can calculate the interpolation weights of the reference pixel points A, B, D, and E for the pixel to be interpolated in the longitude direction, as shown in formula (34).
  • where L_K represents the weight component of the pixel point K for the pixel to be interpolated, and φ_K represents the longitude coordinate of the pixel point K; here K may be A, B, C, D, E, or F.
  • Further, the terminal device can calculate the weight components of the reference pixel points A, B, D, and E for the pixel to be interpolated in the latitude direction; that is, the interpolation weights of the pixel points C and F for the pixel to be interpolated in the latitude direction are calculated, as shown in formula (35).
  • ⁇ k represents the latitude coordinate of the pixel point K.
  • K is C, F or the first coordinate position O.
  • S45: Calculate the pixel value of the pixel to be interpolated. Specifically, in this embodiment, the bilinear interpolation algorithm is used to calculate the pixel value of the pixel to be interpolated, as shown in the following formula (36):

    P_o = L_C(L_A·P_A + L_B·P_B) + L_F(L_D·P_D + L_E·P_E)        (36)
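A sketch of this spherical bilinear variant follows; the weight ratios below are a reconstruction of formulas (34)–(35) from standard bilinear interpolation over the longitude and latitude coordinates (an assumption, since the formula images are not reproduced in this excerpt), while the combination step is formula (36) as given:

```python
def bilinear_spherical(phi, lam, A, B, D, E):
    """Bilinearly interpolate the pixel at (phi, lam) from 2*2 reference
    pixels A, B (sharing one latitude) and D, E (sharing the other).

    Each argument A/B/D/E is a tuple (phi_k, lam_k, P_k).
    """
    (phi_a, lam_a, p_a), (phi_b, _, p_b) = A, B
    (phi_d, lam_d, p_d), (phi_e, _, p_e) = D, E
    # Longitude-direction weights of A, B, D, E (assumed form of formula (34))
    l_a = abs(phi_b - phi) / abs(phi_b - phi_a)
    l_b = abs(phi - phi_a) / abs(phi_b - phi_a)
    l_d = abs(phi_e - phi) / abs(phi_e - phi_d)
    l_e = abs(phi - phi_d) / abs(phi_e - phi_d)
    # Latitude-direction weights of the intermediate points C and F
    # (assumed form of formula (35); C lies on A/B's latitude, F on D/E's)
    l_c = abs(lam_d - lam) / abs(lam_d - lam_a)
    l_f = abs(lam - lam_a) / abs(lam_d - lam_a)
    # Formula (36)
    return l_c * (l_a * p_a + l_b * p_b) + l_f * (l_d * p_d + l_e * p_e)
```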
  • Fifth embodiment: converting a source image in CPP format into a target image in ERP format based on the cubic convolution interpolation algorithm
  • In this embodiment, the image interpolation method may include the following implementation steps:
  • S51: For any pixel to be interpolated in the target image, select m reference pixel points for it in the source image. The image interpolation algorithm determines the reference pixels to be selected: for the bilinear interpolation algorithm, the 2*2 pixel points around the first coordinate position are generally selected as reference pixel points, and for the cubic convolution interpolation algorithm, the 4*4 pixel points around the first coordinate position are usually selected. The first coordinate position is the coordinate position corresponding to the pixel to be interpolated in the source image.
  • In this embodiment, taking the cubic convolution interpolation algorithm as an example, FIG. 10B shows a schematic diagram of reference pixel point selection: 4*4 pixel points around the first coordinate position are selected as the reference pixel points (φ_ij, λ_ij) or (x_ij, y_ij), where i = 1, 2, 3, 4 and j = 1, 2, 3, 4.
  • In step S54, since the image interpolation algorithm used in this embodiment is the cubic convolution interpolation algorithm, the weight components of a reference pixel point (φ_ij, λ_ij) for the pixel to be interpolated along the longitude direction and the latitude direction are given by formulas (37) and (38), respectively, where α is a parameter in the cubic convolution interpolation algorithm, a constant set by the user side or the system side.
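Formulas (37)–(38) are not reproduced in this excerpt; the sketch below uses the standard cubic convolution kernel with free parameter α, which is an assumption about the kernel the patent evaluates on the normalized spherical distance:

```python
def cubic_convolution_kernel(s, alpha=-0.5):
    """Standard cubic convolution kernel with free parameter alpha,
    evaluated on a normalized distance s."""
    s = abs(s)
    if s <= 1.0:
        return (alpha + 2.0) * s ** 3 - (alpha + 3.0) * s ** 2 + 1.0
    if s < 2.0:
        return alpha * (s ** 3 - 5.0 * s ** 2 + 8.0 * s - 4.0)
    return 0.0
```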
  • Correspondingly, based on the two weight components, the interpolation weight L(φ_ij, λ_ij) of each reference pixel point for the pixel to be interpolated can be calculated. It should be noted that, for steps S51–S55 in this embodiment, reference may be made to the related descriptions of the foregoing S11–S15; details are not described here again.
  • The foregoing mainly describes the solutions provided in the embodiments of the present invention from the perspective of the terminal device. It can be understood that, to implement the foregoing functions, the terminal device includes corresponding hardware structures and/or software modules for executing the respective functions.
  • In combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of the present invention can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the technical solutions of the embodiments of the present invention.
  • In the embodiments of the present invention, the terminal device may be divided into functional units according to the foregoing method examples. For example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present invention is schematic and is merely a logical function division; there may be another division manner in actual implementation.
  • FIG. 11A shows a possible structural diagram of the terminal device involved in the above embodiment.
  • the terminal device 700 includes a processing unit 702 and a communication unit 703.
  • the processing unit 702 is configured to perform control management on the actions of the terminal device 700.
  • Illustratively, the processing unit 702 is configured to support the terminal device 700 in performing steps S102–S108 in FIG. 2, steps S202–S212 in FIG. 5, steps S11–S15 in FIG. 7, and/or other steps of the techniques described herein.
  • Communication unit 703 is used to support communication of terminal device 700 with other devices, for example, communication unit 703 is used to support terminal device 700 in acquiring source images from network devices, and/or other steps for performing the techniques described herein.
  • the terminal device 700 may further include a storage unit 701 for storing program codes and data of the terminal device 700.
  • The processing unit 702 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the present invention.
  • The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors or a combination of a DSP and a microprocessor.
  • the communication unit 703 can be a communication interface, a transceiver, a transceiver circuit, etc., wherein the communication interface is a collective name and can include one or more interfaces, such as an interface between the network device and other devices.
  • the storage unit 701 can be a memory.
  • the terminal device 700 may further include a display unit (not shown).
  • the display unit can be used to preview or display an image, such as displaying a target image or a source image using a display unit, and the like.
  • the display unit may be a display or a player, etc., which is not limited in this application.
  • When the processing unit 702 is a processor, the communication unit 703 is a communication interface, and the storage unit 701 is a memory, the terminal device according to the embodiments of the present invention may be the terminal device shown in FIG. 11B.
  • the terminal device 710 includes a processor 712, a communication interface 713, and a memory 77.
  • the terminal device 710 may further include a bus 714.
  • the communication interface 713, the processor 712, and the memory 77 may be connected to each other through a bus 714;
  • The bus 714 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus 714 can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in FIG. 11B, but it does not mean that there is only one bus or one type of bus.
  • For the specific implementation of the terminal device shown in FIG. 11A or FIG. 11B, reference may also be made to the corresponding descriptions of the foregoing method embodiments; details are not described here again.
  • the steps of the method or algorithm described in connection with the disclosure of the embodiments of the present invention may be implemented in a hardware manner, or may be implemented by a processor executing software instructions.
  • The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium well known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in a network device.
  • the processor and the storage medium can also exist as discrete components in the terminal device.
  • A person of ordinary skill in the art may understand that all or some of the procedures of the methods in the foregoing embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the procedures of the foregoing method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.


Abstract

Embodiments of the present invention disclose an image processing method, a related device, and a computer storage medium. The method includes: determining, according to the coordinate position of a pixel to be interpolated in a target image, a first coordinate position corresponding to the pixel to be interpolated in a source image; determining m reference pixel points according to the first coordinate position; determining an interpolation weight of each of the m reference pixel points for the pixel to be interpolated according to the spherical distance between the coordinate position of each of the m reference pixel points and the first coordinate position; and determining the pixel value of the pixel to be interpolated according to the pixel value corresponding to each of the m reference pixel points and the interpolation weight of each of the m reference pixel points for the pixel to be interpolated, so as to obtain the target image. The embodiments of the present invention can solve the problems of low image interpolation performance and efficiency caused by performing image processing on non-planar images with planar image interpolation algorithms in the prior art.

Description

图像处理方法、相关设备及计算机存储介质 技术领域
本发明涉及图像处理技术领域,尤其涉及图像处理方法、相关设备及计算机存储介质。
背景技术
图像插值算法是现代数字图像处理中缩放图像的传统算法,其主要有最邻近插值算法、双线性插值算法等等。然而,这些图像插值算法均是针对平面图像所提出的,用以处理平面图像的性能较好。并不适用于非平面图像(曲面图像),例如360°图像(全景图像)等大视角图像中。
在实践中发现,如果使用现有图像插值算法来对非平面图像进行图像插值,会大大降低图像插值的效率和性能。
发明内容
本发明实施例公开了图像处理方法、相关设备及计算机存储介质,能够解决现有技术中采用平面图像算法对非平面图像进行图像处理时导致图像插值的性能和效率降低等问题。
第一方面,本发明实施例公开提供了一种图像插值方法,所述方法包括:
根据待插值像素点在目标图像中的坐标位置,确定所述待插值像素点对应在源图像中的第一坐标位置,所述源图像为待转换的曲面图像,或者为待转换的具有球面图像格式的平面图像,所述目标图像为所述源图像转换后的图像;
根据所述第一坐标位置,确定m个参考像素点,所述m个参考像素点位于所述源图像中,m为正整数;
根据所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的球面距离,确定所述m个参考像素点各自对所述待插值像素点的插值权重;
根据所述m个参考像素点各自对应的像素值以及所述m个参考像素点各自对所述待插值像素点的插值权重,确定所述待插值像素点的像素值,从而得到所述目标图像。
在一些实施例中,所述目标图像为曲面图像,或者为具有球面图像格式的平面图像。
在一些实施例中,所述源图像和所述目标图像各自对应的球面图像格式不相同,和/或,所述源图像和所述目标图像各自对应的图像分辨率不相同。具体的,当所述源图像和所述目标图像均为具有球面图像格式的平面图像时,所述源图像和所述目标图像各自对应的球面图像格式不相同。
在一些实施例中,所述m个参考像素点为在所述第一坐标位置的周围沿着经度方向和/或纬度方向上采样获得的;其中,所述m个参考像素点中部分参考像素点的纵坐标或者纬度坐标相同,和/或,所述m个参考像素点中部分参考像素点的横坐标或者经度坐标相同。其中,所述m个参考像素点中各自对应的坐标位置不完全相同,即所述m个参考像素点中不存在有坐标位置完全相同的像素点。
在一些实施例中,所述源图像为具有球面图像格式的平面图像,所述第一坐标位置为平面坐标系下由横坐标和纵坐标所组成的点的位置;所述经度方向为根据地理坐标系和所 述源图像的平面坐标系之间的位置映射关系确定的,在所述经度方向上的所述源图像的坐标位置对应的纬度值不变;所述纬度方向为根据所述地理坐标系和所述源图像的平面坐标系之间的位置映射关系确定的,在所述纬度方向上的所述源图像的坐标位置对应的经度值不变。
在一些实施例中,所述经度方向为地理坐标系下纬度坐标保持不变的方向,在所述源图像中是根据地理坐标系和所述源图像的平面坐标系之间的位置映射关系确定的。
在一些实施例中,所述纬度方向为地理坐标系下经度坐标保持不变的方向,在所述源图像中是根据地理坐标系和所述源图像的平面坐标系之间的位置映射关系确定的。
在一些实施例中,对于所述根据所述m个参考像素点中任一参考像素点的坐标位置与所述第一坐标位置之间的球面距离,所述球面距离包括第一球面距离和第二球面距离,所述第一球面距离为在经度方向上所述任一参考像素点的坐标位置与所述第一坐标位置之间的球面距离,所述第二球面距离为在纬度方向上所述任一参考像素点的坐标位置与所述第一坐标位置之间的球面距离;
所述根据所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的球面距离,确定所述m个参考像素点各自对所述待插值像素点的插值权重包括:
确定单位距离,所述单位距离包括第一单位距离以及第二单位距离,所述第一单位距离为第一参考像素点和第二参考像素点之间在经度方向上的距离;所述第二单位距离为第三参考像素点和第四参考像素点之间在纬度方向上的的距离;
根据所述单位距离以及所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的球面距离,确定所述m个参考像素点各自对所述待插值像素点的插值权重。
在一些实施例中,所述第一参考像素点和所述第二参考像素点为在经度方向上所述m个参考像素点中距离所述第一坐标位置(具体可为所述第一坐标位置对应的经度坐标)最近的两个参考像素点。可选的,所述第一参考像素点和所述第二参考像素点对应相同的纬度坐标。
在一些实施例中,所述第三参考像素点和所述第四参考像素点为在纬度方向上所述m个参考像素点中距离所述第一坐标位置(具体可为所述第一坐标位置对应的纬度坐标)最近的两个参考像素点。可选的,所述第三参考像素点和所述第四参考像素点对应相同的经度坐标。
在一些实施例中,所述第一单位距离Ud φ可采用如下公式计算获得:
Ud φ=|φ AB|·R cosλ A
其中,第一参考像素点A的坐标位置为(φ A,λ A),第二参考像素点的坐标位置B为(φ B,λ B),第一坐标位置为(φ,λ)。R为源图像对应的球面的半径。在单位球上,通常R=1。φ为经度坐标,λ为纬度坐标。
在一些实施例中,所述第二单位距离Ud λ可采用如下公式计算获得:
Ud λ=|λ CD|·R
其中,第三参考像素点C的坐标位置为(φ C,λ C),第四参考像素点D的坐标位置为(φ D,λ D)。R为源图像对应的球面的半径。在单位球上,通常R=1。φ为经度坐标,λ为纬 度坐标。
在一些实施例中,所述第一球面距离
Figure PCTCN2019085787-appb-000001
以及所述第二球面距离
Figure PCTCN2019085787-appb-000002
可对应采用如下公式计算获得:
Figure PCTCN2019085787-appb-000003
Figure PCTCN2019085787-appb-000004
其中,所述任一参考像素点的坐标位置为(φ ijij),第一坐标位置为(φ,λ)。
在一些实施例中,所述根据所述单位距离以及所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的球面距离,确定所述m个参考像素点各自对所述待插值像素点的插值权重包括:
根据所述第一单位距离以及所述m个参考像素点各自的坐标位置与所述第一坐标位置之间的第一球面距离,确定所述m个参考像素点各自对所述待插值像素点的第一权重分量;
根据所述第二单位距离以及所述m个参考像素点各自的坐标位置与所述第一坐标位置之间的第二球面距离,确定所述m个参考像素点各自对所述待插值像素点的第二权重分量;
根据所述m个参考像素点各自对所述待插值像素点的第一权重分量以及所述m个参考像素点各自对所述待插值像素点的第二权重分量,确定所述m个参考像素点各自对所述待插值像素点的插值权重。
在一些实施例中,所述根据所述第一单位距离以及所述m个参考像素点各自的坐标位置与所述第一坐标位置之间的第一球面距离,确定所述m个参考像素点各自对所述待插值像素点的第一权重分量包括:
根据图像插值算法、所述第一单位距离以及所述m个参考像素点各自的坐标位置与所述第一坐标位置之间的第一球面距离,确定所述m个参考像素点各自对所述待插值像素点的第一权重分量。
在一些实施例中,所述m个参考像素点中任一参考像素点对所述待插值像素点的第一权重分量
Figure PCTCN2019085787-appb-000005
可采用如下公式计算获得:
Figure PCTCN2019085787-appb-000006
其中,Ud φ为第一单位距离、
Figure PCTCN2019085787-appb-000007
为第一球面距离,δ为图像插值算法。
在一些实施例中,所述根据所述第二单位距离以及所述m个参考像素点各自的坐标位置与所述第一坐标位置之间的第二球面距离,确定所述m个参考像素点各自对所述待插值像素点的第二权重分量包括:
根据图像插值算法、所述第二单位距离以及所述m个参考像素点各自的坐标位置与所述第一坐标位置之间的第二球面距离,确定所述m个参考像素点各自对所述待插值像素点的第二权重分量。
在一些实施例中,所述m个参考像素点中任一参考像素点对所述待插值像素点的第二权重分量
Figure PCTCN2019085787-appb-000008
可采用如下公式计算获得:
Figure PCTCN2019085787-appb-000009
其中,Ud λ为第二单位距离,
Figure PCTCN2019085787-appb-000010
为第二球面距离,δ为图像插值算法。
在一些实施例中,所述m个参考像素点中任一参考像素点对所述待插值像素点的插值权重L(φ ij,λ ij)可采用如下公式计算获得:
Figure PCTCN2019085787-appb-000011
在一些实施例中,所述待插值像素点的像素值P o可采用如下公式获得:
Figure PCTCN2019085787-appb-000012
其中,P o为待插值像素点的像素值。P ij为所述m个参考像素点中任一参考像素点的像素值。L(φ ij,λ ij)为所述任一参考像素点对所述待插值像素点的插值权重。a为在经度方向上采样获得的参考像素点的数量。b为在纬度方向上采样获得的参考像素点的数量。a*b=m,且a,b和m均为正整数。
在一些实施例中,所述经度方向为经度坐标数值变化最快的方向,或者所述经度方向为纬度坐标值保持不变的方向。
在一些实施例中,所述纬度方向为纬度坐标数值变化最快的方向,或者所述纬度方向为经度坐标值保持不变的方向。
在一些实施例中,所述坐标位置为所述平面图像的平面坐标系下由横坐标和纵坐标所组成的点的位置,或者,为所述曲面图像的地理坐标系下由经度坐标和纬度坐标所组成的点的位置。
第二方面,本发明实施例提供一种终端设备,包括处理单元;其中:
所述处理单元,用于根据待插值像素点在目标图像中的坐标位置,确定所述待插值像素点对应在源图像中的第一坐标位置,所述源图像为待转换的曲面图像,或者为待转换的具有球面图像格式的平面图像,所述目标图像为所述源图像转换后的图像;
所述处理单元,还用于根据所述第一坐标位置,确定m个参考像素点,所述m个参考 像素点位于所述源图像中,m为正整数;
所述处理单元,还用于根据所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的球面距离,确定所述m个参考像素点各自对所述待插值像素点的插值权重;
所述处理单元,还用于根据所述m个参考像素点各自对应的像素值以及所述m个参考像素点各自对所述待插值像素点的插值权重,确定所述待插值像素点的像素值,从而得到所述目标图像。
在一些实施例中,对于所述根据所述m个参考像素点中任一参考像素点的坐标位置与所述第一坐标位置之间的球面距离,所述球面距离包括第一球面距离和第二球面距离,所述第一球面距离为在经度方向上所述任一参考像素点的坐标位置与所述第一坐标位置之间的球面距离,所述第二球面距离为在纬度方向上所述任一参考像素点的坐标位置与所述第一坐标位置之间的球面距离;
所述处理单元,具体用于确定单位距离,所述单位距离包括第一单位距离以及第二单位距离,所述第一单位距离为第一参考像素点和第二参考像素点之间在经度方向上的距离;所述第二单位距离为第三参考像素点和第四参考像素点之间在纬度方向上的距离;
所述处理单元,具体用于根据所述单位距离以及所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的球面距离,确定所述m个参考像素点各自对所述待插值像素点的插值权重。
在一些实施例中,所述终端设备还包括通信单元,所述通信单元用于传输图像,例如获取源图像或者发送目标图像等等。
关于本发明实施例中未示出或未描述的内容可参见前述第一方面所述方法实施例中的相关阐述,这里不再赘述。
第三方面,本发明实施例提供了又一种终端设备,包括存储器及与所述存储器和耦合的处理器;所述存储器用于存储指令,所述处理器用于执行所述指令;其中,所述处理器执行所述指令时执行上述第一方面所描述的方法。
在一些实施例中,所述终端设备还包括与所述处理器耦合的显示器,所述显示器用于在所述处理器的控制下显示图像(具体可为目标图像或者源图像)。
在一些实施例中,所述终端设备还包括通信接口,所述通信接口与所述处理器通信,所述通信接口用于在所述处理器的控制下与其他设备(如网络设备等)进行通信。
第四方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储了用于业务切换处理的程序代码。所述程序代码包括用于执行上述第一方面所描述的方法的指令。
通过实施本发明实施例,能够解决现有技术中采用平面图像插值算法对非平面图像(曲面图像)进行图像处理时导致图像插值的性能和效率降低等问题,从而有效提高了非平面图像插值的性能和效率。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍。
图1A是本发明实施例提供的一种球面图像的示意图。
图1B是本发明实施例提供的一种具有球面图像格式的平面图像的示意图。
图2是本发明实施例提供的一种图像处理方法的流程示意图。
图3A和3B是本发明实施例提供的两种参考像素点的示意图。
图4A和4B是本发明实施例提供的两种参考像素区域的示意图。
图5是本发明实施例提供的另一种图像处理方法的流程示意图。
图6A和6B是本发明实施例提供的另两种参考像素点的示意图。
图7是本发明实施例提供的另一种图像处理方法的流程示意图。
图8A和8B是本发明实施例提供的另两种参考像素点的示意图。
图9是本发明实施例提供的另一种参考像素点的示意图。
图10A和图10B是本发明实施例提供的另两种参考像素点的示意图。
图11A和11B是本发明实施例提供的两种终端设备的结构示意图。
具体实施方式
下面将结合本发明的附图,对本发明实施例中的技术方案进行详细描述。
首先,介绍本申请涉及的一些技术术语。
全景视频:也名360度全景视频或360视频,一种用多摄像机进行全方位360度进行拍摄的视频,用户在观看视频的时候,可以随意调节用户的视角进行观看。构成全景视频中的帧图像可称为全景图像或360°图像。
大视角视频:指覆盖的视角范围过大的视频,例如视频覆盖的视角范围为360°、720°等等。相应地,构成所述大视角视频的帧图像可被称为大视角图像。
插值:根据已知的离散数据点来构建新的数据点,被称为插值。
整像素点:指在图像中,位于参考坐标系下坐标位置均为整数的像素点。
分像素点:指在图像中,位于参考坐标系下坐标位置为非整数的像素点。
像素点:指将图像分割为细小的方格或点,每个方格或点称为像素点。为方便本申请专利的描述,使用像素点作为整像素点和分像素点的统称。即本申请中,所述像素点可为整像素点或者分像素点,针对特定要求的像素点下文会进行详细阐述。
参考像素点:也名插值参考像素点,指在图像像素插值过程中,用以生成待插值像素的像素点(也称为待插值像素点)。其中,所述参考像素点通常是在最邻近所述待插值像素点的指定区域中选取的,下文将进行详细阐述。
平面直角坐标系:也名平面坐标系或直角坐标系,指在同一平面上相互垂直且有公共原点的两条数轴所构成的坐标系。两条数轴分别置于水平位置和垂直位置。其中,垂直的数轴通常称为y轴或纵轴,水平的数轴通常称为x轴或横轴。相应地,在平面坐标系下点的位置(坐标位置)可用该点在x方向上的横坐标和在y方向上的纵坐标来表示。
地理坐标系:指用经纬度表示地面点的位置的球面坐标系。在地理坐标系中水平线(或东西线)为纬线。在地理坐标系中垂直线(或南北线)为经线。在地理坐标系下点的位置(坐标位置)可用该点在经度方向上的经度坐标(经度值或经度坐标值)和在纬度方向上的纬度坐标(纬度值或纬度坐标值)来表示。其中,经度方向是指即经度坐标值变化最快的方向,也可指纬度坐标值保持不变的方向。纬度方向是指纬度坐标值变化最快的方向, 也可指经度坐标值保持不变的方向。
great circle:大圆,也名大圆圈。定义为“intersection of the sphere and a plane that passes through the center point of the sphere.NOTE 1:A great circle is also known as an orthodrome or Riemannian circle.NOTE 2:The center of the sphere and the center of a great circle are co-located.”翻译为:“指过球心的平面与球面相交的圆环。注1:大圆通常也叫大圆弧或者黎曼圆。注2:大圆的圆心与球心重合”。
平面图像:指在平面坐标系下的图像,即图像中的各部分位于同一平面。
曲面图像:也名非平面图像,指图像中的各部分不同时位于一个平面上。通常地,由于大视角图像覆盖的视角范围较大,其实质即为曲面图像。例如,360°图像(全景图像)即为曲面图像中的一种,其也被称为球面图像。具体如图1A示出一种球面图像的示意图。
球面图像格式:指图像的一种存储或传输格式,具体将在本申请下文进行详述。示例性地,如图1B示出一种具有球面图像格式的平面图像的示意图。其中,图1B中黑色区域可理解为部分曲面图像映射至平面图像所呈现的图像区域,这里不做详述。
本申请发明人在提出本申请的过程中发现:由于大视角视频(图像)覆盖的视角范围过大,其实质就是曲面图像(即非平面图像),例如全景图像的本质为球面全景图像。在针对大视角图像的图像处理过程中,会发生形变。例如,将大视角图像(非平面图像)转换/映射为平面图像,或者将平面图像映射为大视角图像时,均存在不同程度的形变,这使得映射后的图像中相邻像素点之间的相关性(或间距)发生了变化。此时,如果采用现有图像插值算法来进行图像插值,将大大降低图像插值的性能和效率。
为解决上述问题,本申请提出一种图像插值方法以及所述方法适用的终端设备。请参见图2,是本发明实施例提供的一种图像插值处理方法的流程示意图。如图2所示的方法包括如下实施步骤:
步骤S102、终端设备根据待插值像素点在目标图像中的坐标位置,确定所述待插值像素点对应在源图像中的第一坐标位置,所述源图像为待转换的曲面图像,或者具有球面图像格式的平面图像,所述目标图像为所述源图像转换后的图像。
本申请中,如果图像为曲面图像,则图像中的像素点的坐标位置可为地理坐标系下的坐标位置,即该坐标位置具体由经度坐标和纬度坐标构成。如果图像为具有球面图像格式的平面图像,则该图像中像素点的坐标位置可为平面坐标系下的坐标位置,即该坐标位置具体由横坐标和纵坐标构成。关于所述球面图像格式将在下文进行详述。
步骤S104、终端设备根据所述第一坐标位置,确定m个参考像素点,所述m个参考像素点位于所述源图像中,m为正整数。
终端设备可根据待插值像素点对应在源图像中的第一坐标位置,在所述源图像中为所述待插值像素点选取m个参考像素点。所述参考像素点用于后续计算所述待插值点的像素值,关于所述参考像素点的选取将在下文进行详细阐述,这里不再赘述。
步骤S106、终端设备根据所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的球面距离,确定所述m个参考像素点各自对所述待插值像素点的插值权重。
由于参考像素点的坐标位置和第一坐标位置之间的距离为球面距离,因此这里计算该 球面距离对应使用的坐标位置需为地理坐标系下的坐标位置。具体可采用地理坐标系下的坐标位置,分别计算两个坐标位置各自在经度方向上和纬度方向上的球面距离,具体将在下文进行详述。
步骤S108、终端设备根据所述m个参考像素点各自对应的像素值和所述m个参考像素点各自对所述待插值像素点的插值权重,确定所述待插值像素点的像素值,从而得到所述目标图像。终端设备在计算获得所述待插值像素点的像素值后,可重复执行步骤S106-S108的步骤,计算获得所述目标图像中每个待插值像素点的像素值,从而可获得所述目标图像。
下面阐述本申请涉及的一些具体实施例和可选实施例。步骤S102中,针对待生成的目标图像中的任意一个像素点(即待插值像素点),可根据所述待插值像素点在所述目标图像中的坐标位置,确定所述待插值像素点对应在源图像中的第一坐标位置。其中,所述源图像为待转换的曲面图像,或者所述源图像为待转换的具有球面图像格式的平面图像。所述目标图像为将所述源图像转换后生成的图像。下面阐述步骤S102涉及的几种具体实施方式。
在一些实施方式中,当所述待插值像素点在目标图像中的坐标位置为平面坐标系下由横坐标和纵坐标所组成的点的坐标位置时,终端设备可使用预设的图像映射关系,将所述待插值像素点在目标图像中的坐标位置映射到源图像中,以获得待插值像素点对应在源图像中的第一坐标位置。
所述图像映射关系指目标图像和源图像之间的位置映射关系,即同一点的位置分别在源图像和目标图像中的关联关系。
具体的,终端设备可直接根据待插值像素点在目标图像中的坐标位置以及所述图像映射关系,计算获得所述待插值像素点对应在源图像中的坐标位置。此时这里的图像映射关系具体可指目标图像的平面坐标系和源图像的平面坐标系之间的位置映射关系,或者指目标图像的平面坐标系和源图像的地理坐标系之间的位置映射关系,本申请不做详述。如果图像映射关系指目标图像的平面坐标系和源图像的平面坐标系之间的位置映射关系,则计算获得的第一坐标位置为平面坐标系下的坐标位置。相应地,源图像和目标图像可为具有球面图像格式的平面图像。如果图像映射关系指目标图像的平面坐标系和源图像的地理坐标系之间的位置映射关系,则计算获得的第一坐标位置为地理坐标系下的坐标位置。相应地,源图像为曲面图像,目标图像为具有球面图像格式的平面图像。
或者,终端设备可先根据预设的第一坐标映射关系,将待插值像素点在目标图像中平面坐标系下的坐标位置转换为地理坐标系下的坐标位置。进一步地,根据预设的图像映射关系,将待插值像素点在目标图像中地理坐标系下的坐标位置映射到源图像中,以获得所述待插值像素点对应在源图像中的第一坐标位置。
同样地,这里的图像映射关系可指目标图像的地理坐标系和源图像的平面坐标系之间的位置关系,或者也可指目标图像的地理坐标系和源图像的地理坐标系之间的位置关系。如果该图像映射关系指目标图像的地理坐标系和源图像的平面坐标系之间的位置关系,则对应计算获得的第一坐标位置为平面坐标系下的坐标位置。相应地,源图像和目标图像可 为具有球面图像格式的平面图像。如果图像映射关系指目标图像的地理坐标系和源图像的地理坐标系之间的位置映射关系,则对应计算获得的第一坐标位置为地理坐标系下的坐标位置。相应地,源图像为曲面图像,目标图像为具有球面图像格式的平面图像。
所述第一坐标映射关系指平面坐标系和地理坐标系之间的位置映射关系,即同一点分别在平面坐标系和地理坐标系中的关联关系。该关联关系可为用户侧或系统侧自定义设置的,本申请不做详述。所述第一坐标映射关系以及所述图像映射关系均可用对应的映射函数表示。例如,所述第一坐标映射关系对应的映射函数可为f1,所述图像映射关系对应的映射函数可为f2等等,本申请不做详述。
在又一些实施方式中,当所述待插值像素点在目标图像中的坐标位置为地理坐标系下由经度坐标和纬度坐标所组成的点的坐标位置时,终端设备可使用预设的图像映射关系,将所述待插值像素点在目标图像中的坐标位置映射到源图像中,以获得待插值像素点对应在源图像中的第一坐标位置。
具体的,终端设备可直接根据待插值像素点在目标图像中的坐标位置以及所述图像映射关系,计算获得所述待插值像素点在源图像中的第一坐标位置。这里的图像映射关系具体可指目标图像的地理坐标系和源图像的平面坐标系之间的映射关系,则此时对应计算获得的第一坐标位置为平面坐标系下的坐标位置。相应地源图像可为具有球面图像格式的平面图像,目标图像为曲面图像。这里的图像映射关系还具体可指目标图像的地理坐标系和源图像的地理坐标系之间的映射关系,则此时对应计算获得的第一坐标位置为地理坐标系下的坐标位置。源图像和目标图像均可为曲面图像。
或者,终端设备可先根据预设的第二坐标映射关系将待插值像素点在目标图像中地理坐标系下的坐标位置转换为平面坐标系下的坐标位置。进一步再使用预设的图像映射关系将待插值像素点在目标图像中平面坐标系下的坐标位置对应映射到源图像中,以获得待插值像素点对应在源图像中的第一坐标位置。
同样地,这里的图像映射关系具体可指目标图像的平面坐标系和源图像的平面坐标系之间的位置映射关系,则对应计算获得的第一坐标位置为平面坐标系下的坐标位置。源图像可为具有球面图像格式的平面图像,目标图像为曲面图像。这里的图像映射关系还具体可指目标图像的平面坐标系和源图像的地理坐标系之间的映射关系,则此时对应计算获得的第一坐标位置为地理坐标系下的坐标位置。源图像和目标图像均可为曲面图像。
所述第二坐标映射关系指地理坐标系和平面坐标系之间的位置映射关系,即同一点分别在地理坐标系和平面坐标系中的坐标位置之间的关联关系,本申请不做详述。其中,所述第一映射关系指平面坐标系下的坐标位置到地理坐标系下的坐标位置之间存在的映射关系,所述第二映射关系指地理坐标系下的坐标位置到平面坐标系下的坐标位置之间存在的映射关系,这里不做详述。
在可选实施例中,由于步骤S106中需计算坐标位置之间的球面距离,则对应的坐标位置需为球面坐标,即地理坐标系下的坐标位置。如果计算获得的第一坐标位置为平面坐标系下的坐标位置,终端设备还可使用预设的第二坐标映射关系将平面坐标系下的第一坐标位置对应转换为地理坐标系下的坐标位置,便于S106计算。
在可选实施例中,所述源图像和所述目标图像各自对应的图像分辨率可以不相同。可 选的,所述目标图像的分辨率高于所述源图像的分辨率。其中,图像插值可指在从低分辨率图像生成高分辨率图像的过程中,用以恢复图像中丢失的信息。图像插值过程中涉及使用到的算法,本申请称为图像插值算法,具体将在下文进行详述。
在可选实施例中,所述源图像和所述目标图像各自对应的球面图像格式可以不相同。具体的,在所述源图像和所述目标图像均为具有球面映射格式的图像时,所述源图像和所述目标图像各自对应的球面图像格式可以不相同。本申请的图像插值可适用于不同球面图像格式的图像转换。即使用图像插值算法对第一球面图像格式下的源图像(具体为源图像中的像素点)进行图像插值,可生成/获得第二球面图像格式下的目标图像。所述第一球面图像格式和所述第二球面图像格式不相同。
所述球面图像格式可指设备存储或传输图像的格式,其可包括但不限于ERP(Equi-Rectangular Projection,中文称为等距柱状投影或经纬图映射)、CMP(Cube Map Projection,中文称为立方体映射)、CPP(Craster Parabolic Projection,中文称为克拉斯特抛物线映射)、ACP(Adjusted Cube map Projection,中文称为改进的立方体映射)、COHP(Compact Octahedron Projection,中文称为紧凑格式正八面体映射)、CISP(Compact Icosahedral projection,中文称为紧凑格式正二十面体映射)以及其他球面图像格式等,本申请不做限定。
步骤S104中,终端设备可在所述第一坐标位置的周围选取m个参考像素点,便于后续依据所述m个像素点的相关信息(如坐标位置以及像素值)计算待插值像素点的信息(如像素值)。其中,所述m个参考像素点均位于所述源图像中。具体存在以下几种实施方式:
在一些实施方式中,终端设备可在所述第一坐标位置的周围,沿着经度方向和/或纬度方向上采样获得m个参考像素点。其中,所述m个参考像素点中部分参考像素点的纵坐标或纬度坐标相同,和/或,所述m个参考像素点中部分参考像素点的横坐标或经度坐标相同。但所述m个参考像素点中不存在坐标位置完全相同的像素点,如果存在坐标位置相同的像素点,认为该像素点被重复采样,可视为一个参考像素点。
具体的,在所述第一坐标位置为地理坐标系下的坐标位置(或源图像为曲面图像)的情况下,终端设备可在所述第一坐标位置的周围,直接沿着经度方向和/或纬度方向上均匀采样,以获得m个参考像素点。或者,在所述第一坐标位置为平面坐标系下的坐标位置(或源图像为具有球面图像格式的平面图像)的情况下,终端设备需先根据目标图像的坐标系(具体可为地理坐标系或平面坐标系)和源图像的平面坐标系之间的位置映射关系,确定出经度方向和/或纬度方向。进一步地,终端设备再在所述第一坐标位置的周围沿着所述经度方向和/或纬度方向上均匀采样以获得m个参考像素点。
其中,所述m个参考像素点均为源图像中的像素点。在经度方向上所述源图像的坐标位置对应的纬度坐标(即纬度值)不变。相应地,在纬度方向上所述源图像的坐标位置对应的经度坐标(即经度值)不变。
示例性,终端设备可根据所述第一坐标位置分别沿着经度方向上和纬度方向上均匀采样以获得a*b个(即m个)参考像素点。例如,可先沿经度方向上均匀采样获得a个参考像素点,再为所述a个参考像素点中每个参考像素点沿纬度方向上均匀采样获得b个参考 像素点。即,所述a个参考采样点在经度方向上呈均匀分布,且这a个参考像素点各自对应的纬度坐标(纬度值)相同。所述b个参考像素点在纬度方向上也呈均匀分布,且这b个参考像素点各自对应的经度坐标(经度值)相同。
在可选实施例中,在图像转换过程中,像素点的颜色由亮度和色度共同表示。因此在为所述待插值像素点选取参考像素点时,可从亮度和色度两个维度上考虑。其中,对于图像的亮度分量,在源图像中选取参考像素点具体可为:在所述第一坐标位置的周围沿着经度方向和纬度方向选取最邻近的a1*b1个像素点,作为参考像素点。具体如图3A示出为所述待插值像素点选取的最邻近的6*6个参考像素点,即a1=b1=6。如图3A中,在经度方向上每行参考像素点对应的纬度坐标相同,在纬度方向上每列参考像素点对应的经度坐标相同。
对于图像的色度分量,在源图像中选取参考像素点具体可为:在所述第一坐标位置的周围沿着经度方向和纬度方向选取最邻近的a2*b2个像素点,作为参考像素点。其中,a1、a2、b1、b2可为用户侧或系统侧自定义设置的常数,它们可以相同,也可不相同,本申请不做限定。具体如图3B示出为所述待插值像素点选取最邻近的4*4个参考像素点,即a2=b2=4。
在又一些实施方式中,终端设备可在所述第一坐标位置的周围选取对应的参考像素区域,所述第一坐标位置位于所述参考像素区域中。进一步地,从所述参考像素区域中选取m个参考像素点,即所述参考像素区域中包括有m个参考像素点。所述m个参考像素点用于后续生成所述目标图像中的待插值像素点。
具体的,所述参考像素区域的选取具体存在以下几种可能的实现方式。
在一种可能的实现方式中,终端设备可在所述第一坐标位置的周围选取由a*b个像素点所构成的区域,作为所述参考像素区域。其中,a*b=m,且a和b均为正整数。示例性地,终端设备以所述待第一坐标位置为中心,选取最邻近的指定区域,作为参考像素区域。所述指定区域可为用户侧或系统侧自定义设置的,所述指定区域的大小、形状等特征均不受限。例如,所述指定区域可为以所述待插值像素点为中心,指定长度为半径所对应构成的圆等等。
在又一种可能的实现方式中,终端设备可在所述第一坐标位置的周围选取由两条经线和两条纬线相交所构成的区域,作为所述参考像素区域。如图4A示出一种参考像素区域选取的示意图。其中,图4A示出以所述第一坐标位置为中心,选取由两条经线和两条纬线相交所组成的区域为参考像素区域。
在又一种可能的实现方式中,终端设备可在所述第一坐标位置的周围选取由两组大圆相交所构成的区域,作为所述参考像素区域。其中,每组大圆中包括有两个大圆,且这两组大圆中的每个大圆均过同一球心。如图4B示出又一种参考像素区域选取的示意图。其中,图4B中以所述第一坐标位置为中心,选取过同一球心的四个大圆相交所组成的区域,作为所述参考像素区域。或者,在所述第一坐标位置的周围随意选取过同一球心的四个大圆相交所形成的区域,作为所述参考像素区域等。
相应地,在终端设备确定所述参考像素区域后,所述终端设备可从所述参考像素区域 中选取m个(即a*b个)参考像素点,便于后续依据这些参考像素点的相关信息(如坐标位置、像素值等)计算所述待插值像素点的信息(如像素值)。
具体的,终端设备可根据所述待插值像素点对应在源图像中的第一坐标位置,从所述参考像素区域中均匀采样,以获得m个参考像素点。关于如何采样获得所述m个参考像素点可参见前述实施例中的相关阐述,这里不再赘述。
需要说明的是,终端设备为所述待插值像素点选取/确定m个参考像素点,还存在以下实施方式:终端设备可根据所述待插值像素点在目标图像中的坐标位置,在所述待插值像素点的周围选取n个像素点。所述n个像素点位于所述目标图像中,n为大于等于m的正整数。进一步地,再根据所述n个像素点各自在目标图像中的坐标位置,确定所述n个像素点对应在源图像中的坐标位置。然后,在依据所述n个参考像素点对应在源图像中的像素点的坐标位置,确定m个参考像素点。
具体的,终端设备在所述待插值像素点的周围选取n个像素点的选取方式,本申请不做限定。示例性地,终端设备可在所述待插值像素点的周围随机选取n个像素点。或者,在所述待插值像素点的周围按照固定步长均匀采样获得n个像素点。例如,待插值像素点的坐标位置为(x,y),则选取的像素点的坐标位置可为(x+k 0Δx,y+k 0Δy)。其中,k 0为自定义的数值,例如+1,-1,+2,-2等等。Δx为x方向上的增量(或采样时的固定步长)。Δy为y方向上的增量(或采样时的固定步长)。
相应地,终端设备可根据预设的图像映射关系,将所述n个像素点对应在目标图像中的坐标位置映射到源图像中,以获得所述n个像素点对应在源图像中的坐标位置。进一步地,终端设备可按照设定规则,从所述n个像素点对应在源图像中的像素点中,选取出m个参考像素点。所述设定规则为用户侧或系统侧自定义设置的。例如,终端设备可对所述n个像素点对应在源图像中的坐标位置进行设定函数操作,如向下取整floor函数操作,向上取整ceil函数操作等等,以对应获得m个参考像素点以及所述m个参考像素点在源图像中的坐标位置。
示例性地,假设n个像素点中某个像素点对应在源图像中的坐标位置为(x 1,y 1),则利用floor函数对(x 1,y 1)进行向上取整操作后,可对应获得一个参考像素点。该参考像素点的坐标位置为(floor(x 1),floor(y 1))。
在可选实施例中,本申请涉及的参考像素点可为整像素点。所述待插值像素点可为分像素点,也可为整像素点。关于整像素点和分像素点可参见前述实施例中的相关介绍,这里不再赘述。
步骤S106中,终端设备可根据所述m个参考像素点各自的坐标位置和所述第一坐标位置之间的球面距离,确定所述m个参考像素点各自对所述待插值像素点的插值权重。
具体的,终端设备可先根据所述m个参考像素点各自的坐标位置,确定出单位距离;再根据所述单位距离以及所述m个参考像素点的坐标位置各自和所述第一坐标位置之间的球面距离,来计算所述m个参考像素点各自对所述待插值像素点的插值权重。关于所述单位距离以及所述参考像素点对所述待插值像素点的插值权重,将在本申请下文进行具体详细阐述。
其中,所述单位距离包括第一单位距离以及第二单位距离。所述第一单位距离为在经度方向上第一参考像素点和第二参考像素点之间的距离。所述第一参考像素点和所述第二参考像素点可为所述m个参考像素点中在经度方向上距离所述第一坐标位置(具体可为所述第一坐标位置对应的经度坐标)最近的两个参考像素点。可选的,所述第一参考像素点和所述第二参考像素点对应的纬度坐标可以相同,也可不相同。所述第二单位距离为在纬度方向上第三参考像素点和第四参考像素点之间的距离。所述第三参考像素点和所述第四参考像素点可为所述m个参考像素点中在纬度方向上距离所述第一坐标位置(具体可为所述第一坐标位置对应的纬度坐标)最近的两个参考像素点。可选的,所述第三参考像素点和所述第四参考像素点对应的经度坐标可以相同,也可不相同。
所述第一参考像素点、第二参考像素点、第三参考像素点以及第四参考像素点中它们可以相同,也可不同,本申请不做限定。
对于所述m个参考像素点中的任一参考像素点的坐标位置与所述第一坐标位置之间的球面距离而言,所述球面距离包括第一球面距离以及第二球面距离。所述第一球面距离为在经度方向上所述任一参考像素点的坐标位置与所述第一坐标位置之间的球面距离。所述第二球面距离为在纬度方向上所述任一参考像素点的坐标位置与所述第一坐标位置之间的球面距离。
步骤S108中,终端设备可依据所述m个参考像素点各自对应的像素值以及所述m个参考像素点各自对所述待插值像素点的插值权重,对它们进行加权求和,从而计算获得所述待插值像素点的像素值。具体的,所述终端设备可采用如下公式(1)计算获得所述待插值像素点的像素值:
Figure PCTCN2019085787-appb-000013
其中,P o为待插值像素点的像素值。P ij为所述m个参考像素点中任一个参考像素点(如目标参考像素点)的像素值。L(φ ij,λ ij)为所述任一个参考像素点(目标参考像素点)对待插值像素点的插值权重。a为在经度方向上采样获得的参考像素点的数量。b为在纬度方向上采样获得的参考像素点的数量。a*b=m,且a,b和m均为正整数。
在一些实施例中,下面阐述S106中确定所述m个参考像素点各自对所述待插值像素点的插值权重所涉及的相关实施例。具体参见图5所示,包括如下实施步骤:
步骤S202、终端设备根据经度方向上所述m个参考像素点中距离所述第一坐标位置最近的两个参考像素点各自在源图像中的坐标位置,计算获得在经度方向上的第一单位距离。
具体的,在经度方向上,可先从所述m个参考像素点中选取距离所述待插值像素点最近的两个参考像素点A和B,即前文所述的第一像素点和第二像素点。再根据参考像素点A和B各自的坐标位置,计算获得在第一单位距离。
如图6A示出一种参考像素点的示意图。图6A中,O为待插值像素点对应在源图像中的 第一坐标位置,A和B为在经度方向上距离第一坐标位置O最近的两个参考像素点。其中,参考像素点A的坐标位置为(φ A,λ A),参考像素点B的坐标位置为(φ B,λ B),第一坐标位置为(φ,λ)。终端设备可采用如下公式(2)计算获得经度方向上的第一单位距离Ud φ
Ud φ=|φ AB|·R cosλ A      公式(2)
其中,R为源图像对应的球面的半径。在单位球上,通常R=1。φ为经度坐标,λ为纬度坐标。
步骤S204、终端设备根据纬度方向上所述m个参考像素点中距离所述第一坐标位置最近的两个参考像素点各自在源图像中的坐标位置,计算获得在纬度方向上的第二单位距离。
在纬度方向上,可先从所述m个参考像素点中选取距离所述待插值像素点最近的两个参考像素点C和D,即前文所述的第三像素点和第四像素点。再根据参考像素点C和D各自的坐标位置,计算获得在第二单位距离。
如图6B示出又一种参考像素点的示意图。图6B中,O为待插值像素点对应在源图像中的第一坐标位置,C和D为在纬度方向上距离第一坐标位置O最近的两个参考像素点。其中,参考像素点C的坐标位置为(φ C,λ C),参考像素点D的坐标位置为(φ D,λ D),第一坐标位置为(φ,λ)。终端设备可采用如下公式(3)计算获得纬度方向上的第二单位距离Ud λ
Ud λ=|λ CD|·R      公式(3)
步骤S206、终端设备根据所述m个参考像素点各自在源图像的坐标位置和所述第一坐标位置,计算在经度方向上所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的第一球面距离以及在纬度方向上所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的第二球面距离。
终端设备分别从经度方向上和纬度方向上,计算所述m个参考像素点的坐标位置各自和所述第一坐标位置之间的球面距离。即,所述球面距离可包括在经度方向上的第一球面距离以及在纬度方向上的第二球面距离,具体将在下文进行详述。
步骤S208、终端设备根据所述第一单位距离以及所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的第一球面距离,确定所述m个参考像素点各自对所述待插值像素点的第一权重分量。
所述图像插值算法可为用户侧或系统侧自定义设置的,其可包括但不限于Lanczos插值算法、双线性插值算法、三次卷积插值算法、最邻近插值算法、分段线性插值算法、或者其他插值算法等等,本申请不做限定。
步骤S210、终端设备根据所述第二单位距离以及所述m个参考像素点的坐标位置各自与所述第一坐标位置之间的第二球面距离,确定所述m个参考像素点各自对所述待插值像素点的第二权重分量。
步骤S212、终端设备根据所述m个参考像素点各自对所述待插值像素点的第一权重分量以及第二权重分量,确定所述m个参考像素点各自对所述待插值像素点的插值权重。
本申请下面将以目标参考像素点为例,阐述上述步骤S206-步骤S212的具体实施方式。 所述目标参考像素点为所述m个参考像素点中的任意一个像素点。所述目标参考像素点在源图像中的坐标位置(即所述目标参考像素点的坐标位置)为(φ ijij),第一坐标位置为(φ,λ)。
步骤S206中,终端设备可采用如下公式(4)计算获得在经度方向上所述目标参考像素点的坐标位置和所述第一坐标位置之间的第一球面距离
Figure PCTCN2019085787-appb-000014
在纬度方向上所述目标参考像素点的坐标位置和所述第一坐标位置之间的第二球面距离
Figure PCTCN2019085787-appb-000015
Figure PCTCN2019085787-appb-000016
Figure PCTCN2019085787-appb-000017
步骤S208中,终端设备可利用第一单位距离Ud φ以及第一球面距离
Figure PCTCN2019085787-appb-000018
并结合图像插值算法δ计算所述目标参考像素点在经度方向上对所述待插值像素点的第一权重分量
Figure PCTCN2019085787-appb-000019
示例性地,可采用如下公式(5)计算获得
Figure PCTCN2019085787-appb-000020
Figure PCTCN2019085787-appb-000021
步骤S210中,终端设备可利用第二单位距离Ud λ以及第二球面距离
Figure PCTCN2019085787-appb-000022
并结合图像插值算法δ计算所述目标参考像素点在纬度方向上对所述待插值像素点的第二权重分量
Figure PCTCN2019085787-appb-000023
示例性地,可采用如下公式(6)计算获得
Figure PCTCN2019085787-appb-000024
Figure PCTCN2019085787-appb-000025
步骤S212中,终端设备可根据所述目标参考像素点在经度方向上对所述待插值像素点的第一权重分量
Figure PCTCN2019085787-appb-000026
以及所述目标参考像素点在纬度方向上对所述待插值像素点的第二权重分量
Figure PCTCN2019085787-appb-000027
计算获得所述目标参考像素点对所述待插值像素点的插值权重L(φ ij,λ ij)。
具体的,终端设备可根据设定运算规则对第一权重分量
Figure PCTCN2019085787-appb-000028
和第二权重分量
Figure PCTCN2019085787-appb-000029
进行处理,以获得对应的插值权重。所述设定运算规则为用户侧或系统侧自定义设置的运算法则,例如加法、乘法等等。示例性地,以乘法运算为例,终端设备可采用如下公式(7)计算获得L(φ ij,λ ij):
Figure PCTCN2019085787-appb-000030
通过实施本发明实施例,能够解决现有技术中采用平面图像插值算法对非平面图像(曲面图像)进行图像处理时导致图像插值的性能和效率降低等问题,从而有效提高了非平面图像插值的性能和效率。
为方便理解,本申请下文将举例详述图像插值方法对应的相关具体实施例。
第一个实施例:CPP格式的源图像转换为ERP格式的目标图像
本实施例中,可参见图7,图像插值方法包括如下实施步骤:
S11、对于目标图像中的任意待插值像素点,在源图像中为所述待插值像素点选取m个参考像素点。
本实施例中,所述目标图像中待插值像素点的坐标位置为(m 0,n 0),其具体可为地理坐标系下的坐标位置,也可为平面坐标系下的坐标位置。假设这里(m 0,n 0)为平面坐标系下的坐标位置,m 0为横坐标,n 0为纵坐标。
终端设备可根据预设的坐标映射关系,将(m 0,n 0)转换为地理坐标系下的坐标位置(φ 1,λ 1)。即用地理坐标表示,待插值像素点在目标图像中的坐标位置为(φ 1,λ 1)。示例性地,如下公式(8)示出待插值像素点在目标图像中的地理坐标位置:
Figure PCTCN2019085787-appb-000031
其中,W 1为目标图像的宽度。H 1为目标图像的高度。φ 1为经度坐标,取值范围为[-π,π]。λ 1为纬度坐标,取值范围为[-π/2,π/2]。ε为自定义的常数,表示坐标的偏移量,取值范围为[0,1)。通常的,ε为0或0.5。
接着,终端设备根据预设的图像映射关系,将目标图像中的待插值像素点(φ 1,λ 1)映射到源图像中的坐标位置,即待插值像素点对应在源图像中的第一坐标位置(x,y)。这里的图像映射关系是指ERP格式的目标图像与CPP格式的源图像之间的位置映射关系,即同一点分别在ERP目标图像和CPP源图像中存在的关联关系。其中,所述第一坐标位置可为地理坐标系下的坐标位置,也可为平面坐标系下的坐标位置。假设这里为平面坐标下的坐标位置,x为横坐标,y为纵坐标。示例性地,如下公式(9)给出待插值像素点(φ 1,λ 1)对应在源图 像中的第一坐标位置(x,y):
Figure PCTCN2019085787-appb-000032
其中,H为源图像的高度。W为源图像的宽度。
进一步地,终端设备可在第一坐标位置的周围沿着经度方向和/或纬度方向均匀采样,从而获得为所述待插值像素点所选取的m个参考像素点。
具体的,终端设备可对公式(9)进行变形,获得如下公式(10):
Figure PCTCN2019085787-appb-000033
其中,本实施例中设定W=2H。
由于公式(9)可知第一坐标位置的纵坐标y与纬度坐标λ 1相关,因此本例中终端设备可根据地理坐标系和源图像的平面坐标系之间的位置映射关系计算获得纬度方向。具体的,终端设备根据公式(10)示出的位置映射关系,计算y对x的导数,以获得纬度方向。即,计算过待插值像素点(φ 1,λ 1)的斜率方向即为纬度方向。具体如下公式(11)示出纬度方向的计算公式:
Figure PCTCN2019085787-appb-000034
y=C,0≤C≤H      公式(11)
其中,C为常数。
进一步地,终端设备沿着该斜率方向(纬度方向)在所述第一坐标位置的周围均匀采样,以获得对应的m个参考像素点。具体如下公式(12)示出均匀采样获得的参考像素点的坐标位置(x ij,y ij)。
y ij=floor(y)+Δy i
Figure PCTCN2019085787-appb-000035
其中,dx i表示第i行的参考像素点的中心横坐标相对于待插值像素点的横坐标的偏移量;dy i表示第i行的参考像素点的纵坐标相对于待插值像素点的纵坐标的偏移量。Δy i表示 纵坐标所在方向上的增量。Δx j表示横坐标所在方向上的增量。i∈(1,2,3,...a),j∈(1,2,3,...b),a*b=m。
通常地,对于图像的亮度分量而言,a=b=6,Δy i∈(-2,-1,0,1,2,3),Δx j∈(-2,-1,0,1,2,3)。对于图像的色度分量而言,a=b=4,Δy i∈(-1,0,1,2),Δx j∈(-1,0,1,2)。floor函数作用是向下取整。可选地,本申请实施例中floor函数还可替换为ceil函数,作用是向上取整。round函数作用是四舍五入,即取最近的整数值。
S12、计算第一坐标位置和每个参考像素点的坐标位置分别沿经度方向上和纬度方向上的球面距离,所述第一坐标位置为待插值像素点对应在源图像中的坐标位置。
具体的,由于本申请中坐标位置之间在经度方向和纬度方向上的距离为球面距离,因此计算该球面距离时所使用的坐标位置需为地理坐标系下的坐标位置。当所述第一坐标位置或者参考像素点的坐标位置为平面坐标系下的坐标位置时,需先将其对应转换为地理坐标系下的坐标位置,然后再计算它们各自在经度方向上和纬度方向上的球面距离。
本实施例中,终端设备可根据预设的坐标映射关系,将第一坐标位置(x,y)转换为地理坐标系下的坐标位置(φ,λ)。具体如下公式(13)示出上述公式(9)中第一坐标位置(x,y)对应在地理坐标系下的坐标位置:
Figure PCTCN2019085787-appb-000036
相应地,终端设备可根据预设的坐标映射关系,可将源图像中参考像素点(x ij,y ij)转换为地理坐标系下的坐标位置(φ ijij),具体如下公式(14)所示:
Figure PCTCN2019085787-appb-000037
进一步地,终端设备可根据地面坐标系下参考像素点的坐标位置(φ ijij)和第一坐标位置(φ,λ),分别计算它们各自在经度方向上的第一球面距离
Figure PCTCN2019085787-appb-000038
以及在纬度方向上的第二球面距离
Figure PCTCN2019085787-appb-000039
具体可如公式(15)所示:
Figure PCTCN2019085787-appb-000040
Figure PCTCN2019085787-appb-000041
S13、计算经度方向上和纬度方向上的单位距离。
终端设备可沿经度方向上,从所述m个参考像素点中选取出距离所述第一坐标位置最近的两个参考像素点A和B。进而,计算这两个参考像素点的经度坐标之间的差值,以作为经度方向上的第一单位距离Ud φ
相应地,沿纬度方向上,从所述m个参考像素点中选取距离所述第一坐标位置最近的两个参考像素点C和D。进而,计算这两个参考像素点的纬度坐标之间的差值,以作为纬度方向上的第二单位距离Ud λ。具体可如下公式(16)所示:
Ud φ=|φ AB|·R cosλ A
Ud λ=|λ CD|·R        公式(16)
其中,参考像素点A的坐标位置为(φ AA),参考像素点B的坐标位置为(φ BB),参考像素点C的坐标位置为(φ CC),参考像素点D的坐标位置为(φ DD)。φ为经度坐标。λ为纬度坐标。
举例来说,如图8A和8B示出针对图像的亮度分量和色度分量对应选取6*6以及4*4个参考像素点。如图8A所示,沿经度方向上距离第一坐标位置O最近的两个参考像素点为第3行第3列的像素点以及第3行第4列的像素点。沿纬度方向上距离第一坐标位置O最近的两个参考像素点为第3行第3列的像素点以及第4行第3列的像素点。则图8A中经度方向上的第一单位距离Ud φ以及纬度方向上的第二单位距离Ud λ具体如下公式(17)所示:
Ud φ=|φ 3334|·R cosλ 3
Ud λ=|λ 34|·R        公式(17)
相应地如图8B所示,沿经度方向上距离第一坐标位置O最近的两个参考像素点为第2行第2列的像素点以及第2行第3列的像素点。沿纬度方向上距离第一坐标位置O最近的两个参 考像素点为第2行第2列的像素点以及第3行第2列的像素点。则图8B中经度方向上的第一单位距离Ud φ以及纬度方向上的第二单位距离Ud λ具体如下公式(18)所示:
Ud φ=|φ 2223|·R cosλ 2
Ud λ=|λ 23|·R        公式(18)
S14、计算每个参考像素点对待插值像素点的插值权重。
终端设备可根据图像插值算法、计算的在经度方向上的第一单位距离以及第一球面距离,计算在经度方向上每个参考像素点对待插值像素点的第一权重分量
Figure PCTCN2019085787-appb-000042
相应地,终端设备可根据图像插值算法、计算的在纬度方向上的第二单位距离以及第二球面距离,计算在纬度方向上每个参考像素点对待插值像素点的第二权重分量
Figure PCTCN2019085787-appb-000043
示例性地,以所述图像插值算法为Lanczos算法为例,如下公式(19)和(20)分别给出参考像素点(φ ijij)沿经度方向上和纬度方向上对待插值像素点(φ,λ)的权重分量。
Figure PCTCN2019085787-appb-000044
Figure PCTCN2019085787-appb-000045
其中,c 1和c 2为参考像素点取样时所采用的半窗口大小,
Figure PCTCN2019085787-appb-000046
其中,基于图像的亮度分量来采样时,c 1=c 2=3。基于图像的色度分量来采样时,c 1=c 2=2。
进一步地,在获得参考像素点分别沿经度方向上和纬度方向上对待插值像素点的权重分量(
Figure PCTCN2019085787-appb-000047
以及
Figure PCTCN2019085787-appb-000048
)后,可计算该参考像素点对待插值像素点的插值权重(即二维插值权重)L(φ ij,λ ij)。示例性地,如下公式(21)给出L(φ ij,λ ij)的计算公式:
Figure PCTCN2019085787-appb-000049
S15、计算待插值像素点的像素值。
终端设备可根据每个参考像素点各自的像素值以及每个参考像素点各自对待插值像素点的插值权重,可计算获得待插值像素点的像素值。具体可如下公式(22)所示:
Figure PCTCN2019085787-appb-000050
第二个实施例:CMP格式的源图像转换为ERP格式的目标图像
本实施例中,图像插值方法可包括如下实施步骤:
S21、对于目标图像中的任意待插值像素点,在源图像中为所述待插值像素点选取m个参考像素点。
本实施例中,假设目标图像中待插值像素点在地理坐标系下的坐标位置为(φ 1,λ 1)。终端设备可根据预设的图像映射关系,将待插值像素点在目标图像中的坐标位置映射到源图像中的坐标位置,即待插值像素点对应在源图像中的第一坐标位置。示例性地如下公式(23)给出待插值像素点(φ 1,λ 1)对应在源图像中的第一坐标位置(x,y):
Figure PCTCN2019085787-appb-000051
其中,H为源图像的高度。W为源图像的宽度。
进一步地,终端设备可在第一坐标位置的周围沿着经度方向和/或纬度方向均匀采样,从而获得为所述待插值像素点所选取的m个参考像素点。
具体的,终端设备可对公式(23)进行变形,获得如下公式(24):
Figure PCTCN2019085787-appb-000052
其中,本实施例中设定W=H。
由于公式(23)可知第一坐标位置的横坐标x与纬度坐标φ 1相关,因此本例中终端设备可根据地理坐标系和源图像的平面坐标系之间的位置映射关系计算获得经度方向。具体的,终端设备根据公式(24)示出的位置映射关系,计算y对x的导数,以获得经度方向。即,计算过待插值像素点(φ 1,λ 1)的斜率方向即为经度方向。具体如下公式(25)示出斜率的计算公式:
x=C,0≤C≤W
Figure PCTCN2019085787-appb-000053
其中,C为常数。
进一步地,终端设备沿着该斜率方向(经度方向)在所述第一坐标位置的周围均匀采样,以获得对应的m个参考像素点。具体如下公式(26)示出均匀采样获得的参考像素点的坐标位置(x ij,y ij)。
x ij=floor(x)+Δx j
Figure PCTCN2019085787-appb-000054
关于公式(26)涉及的相关参数可参见前述公式(12)的相关阐述,这里不再赘述。
S22、计算第一坐标位置和每个参考像素点的坐标位置分别沿经度方向上和纬度方向上的球面距离,所述第一坐标位置为待插值像素点对应在源图像中的坐标位置。
本实施例中,终端设备可根据预设的坐标映射关系,将第一坐标位置(x,y)转换为地理坐标系下的坐标位置(φ,λ)。即第一坐标位置对应在地理坐标系下的坐标位置为(φ,λ),具体如下公式(27)示出上述公式(23)中第一坐标位置(x,y)对应在地理坐标系下的坐标位置:
Figure PCTCN2019085787-appb-000055
相应地,终端设备根据预设的坐标映射关系,可将源图像中参考像素点(x ij,y ij)转换为地理坐标系下的坐标位置(φ ijij),具体如下公式(28)所示:
Figure PCTCN2019085787-appb-000056
进一步地,终端设备可根据地面坐标系下的参考像素点(φ ijij)和第一坐标位置(φ,λ),分别计算它们各自在经度方向上的第一球面距离
Figure PCTCN2019085787-appb-000057
以及在纬度方向上的第二球面距离
Figure PCTCN2019085787-appb-000058
S23、计算经度方向上和纬度方向上的单位距离。
S24、计算每个参考像素点对待插值像素点的插值权重。
S25、计算待插值像素点的像素值。
需要说明的是,本发明实施例中未示出或未描述的部分可对应参见前述第一个实施例中的相关阐述,例如步骤S23-S25可具体参见前述S13-S15中的相关阐述,这里不再赘述。
第三个实施例:低分辨率ERP格式的源图像转换为高分辨率ERP格式的目标图像
本实施例中,图像插值方法可包括如下实施步骤:
S31、对于目标图像中的任意待插值像素点,在源图像中为所述待插值像素点选取m个参考像素点。
本实施例中,所述目标图像中待插值像素点的坐标位置为(m 0,n 0),其具体可为地理坐标系下的坐标位置,也可为平面坐标系下的坐标位置。假设这里(m 0,n 0)为平面坐标系下的坐标位置,m 0为横坐标,n 0为纵坐标。
终端设备可根据预设的坐标映射关系,将(m 0,n 0)转换为地理坐标系下的坐标位置(φ 1,λ 1)。即用地理坐标表示,待插值像素点在目标图像中的坐标位置为(φ 1,λ 1)。示例性地,如下公式(29)示出待插值像素点在目标图像中的地理坐标位置:
Figure PCTCN2019085787-appb-000059
其中,W 1为目标图像的宽度。H 1为目标图像的高度。φ 1为经度坐标,取值范围为[-π,π]。λ 1为纬度坐标,取值范围为[-π/2,π/2]。ε为自定义的常数,表示坐标的偏移量,取值范围为[0,1)。通常的,ε为0或0.5。
接着,终端设备根据预设的图像映射关系,将目标图像中的待插值像素点(φ 1,λ 1)映射到源图像中的坐标位置,即待插值像素点对应在源图像中的第一坐标位置(x,y)。其中,所述第一坐标位置可为地理坐标系下的坐标位置,也可为平面坐标系下的坐标位置。假设这里为平面坐标下的坐标位置,x为横坐标,y为纵坐标。示例性地,如下公式(30)给出待插值像素点(φ 1,λ 1)对应在源图像中的坐标位置(x,y):
Figure PCTCN2019085787-appb-000060
其中,H为源图像的高度。W为源图像的宽度。
进一步地,终端设备可在所述第一坐标位置的周围沿着经度方向和/或纬度方向均匀采样,从而获得为所述待插值像素点所选取的m个参考像素点。
本实施例中,由于源图像和目标图像均为同一球面图像格式(ERP)的图像,在针对参考像素点采样时,可直接沿坐标系水平方向(即经度方向)以及竖直方向(即纬度方向)上均匀采样获得m个参考像素点。示例性地如图9示出对于图像的亮度分量而言,均匀采样 获得6*6的参考像素点。
具体的,可如下公式(31)示出均匀采样获得的参考像素点的坐标位置(x ij,y ij)。
x ij=floor(x)+Δx j
y ij=floor(y)+Δy i       公式(31)
关于公式(31)涉及的参数可参见前述公式(12)中的相关介绍,这里不再赘述。
S32、计算第一坐标位置和每个参考像素点的坐标位置分别沿经度方向上和纬度方向上的球面距离,所述第一坐标位置为待插值像素点对应在源图像中的坐标位置。
本实施例中,终端设备可根据预设的坐标映射关系,将平面坐标系下的第一坐标位置(x,y)转换为地理坐标系下的坐标位置(φ,λ)。即第一坐标位置对应在地理坐标系下的坐标位置为(φ,λ),具体如下公式(32)示出上述公式(30)中第一坐标位置(x,y)对应在地理坐标系下的坐标位置:
Figure PCTCN2019085787-appb-000061
相应地,终端设备根据预设的坐标映射关系,可将源图像中参考像素点(x ij,y ij)转换为地理坐标系下的坐标位置(φ ijij),具体如下公式(33)所示:
Figure PCTCN2019085787-appb-000062
进一步地,终端设备可根据地面坐标系下参考像素点的坐标位置(φ ijij)和第一坐标位置(φ,λ),分别计算它们各自在经度方向上的第一球面距离
Figure PCTCN2019085787-appb-000063
以及在纬度方向上的第二球面距离
Figure PCTCN2019085787-appb-000064
S33、计算经度方向上和纬度方向上的单位距离。
S34、计算每个参考像素点对待插值像素点的插值权重。
S35、计算待插值像素点的像素值。
需要说明的是,本实施例中未示出或未描述的部分可对应参见前述第一个实施例中的 相关阐述,例如步骤S33-S35可具体参见前述S13-S15中的相关阐述,这里不再赘述。
第四个实施例:基于双线性插值算法下CCP格式的源图像转换为ERP格式的目标图像
本实施例中,图像插值方法可包括如下实施步骤:
S41、对于目标图像中的任意待插值像素点,在源图像中为所述待插值像素点选取m个参考像素点。
对于某些特定的图像插值算法而言,通常该图像插值算法对应选取的参考像素点也是默认的。例如,针对双线性插值算法,通常选取第一坐标位置周边的2*2个像素点作为参考像素点。针对三次卷积插值算法而言,通常选取第一坐标位置周边的4*4个像素点作为参考像素点。其中,所述第一坐标位置为待插值像素点对应在源图像中的坐标位置。
本实施例中,以双线性插值算法为例,如图10A示出参考像素点选取的示意图。如图10A中在第一坐标位置的周围选取2*2个像素点作为参考像素点(φ ijij),具体为图示中的像素点A、B、D和E。其中,i=1,2。j=1,2。
S42、计算第一坐标位置和每个参考像素点的坐标位置分别沿经度方向上和纬度方向上的球面距离,所述第一坐标位置为待插值像素点对应在源图像中的坐标位置。
S43、计算经度方向上和纬度方向上的单位距离。
S44、计算每个参考像素点对待插值像素点的插值权重。
引用图10A的例子,终端设备可根据参考像素点A和B加权求得像素点C。其中,像素点C的经度坐标和第一坐标位置的经度坐标相同,像素点C的纬度坐标和像素点A/B的纬度坐标相同。同样地,根据参考像素点D和E可加权求得像素点F。其中,像素点F的经度坐标和第一坐标位置的经度坐标相同,像素点F的纬度坐标和像素点D/E的纬度坐标相同。
相应地,终端设备可计算参考像素点A、B、D以及E在经度方向上对待插值像素点的插值权重。具体如下公式(34)所示:
Figure PCTCN2019085787-appb-000065
Figure PCTCN2019085787-appb-000066
Figure PCTCN2019085787-appb-000067
Figure PCTCN2019085787-appb-000068
其中,L k表示像素点K对待插值像素点的权重分量。φ k表示像素点K的经度坐标。这里K可为A、B、C、D、E以及F。
进一步地,终端设备可计算参考像素点A、B、D以及E在纬度方向上对待插值像素点的权重分量。即是,计算像素点C和F在纬度方向上对待插值像素点的插值权重。具体如下公式(35)所示:
Figure PCTCN2019085787-appb-000069
Figure PCTCN2019085787-appb-000070
其中,λ k表示像素点K的纬度坐标。这里K为C、F或者第一坐标位置O。
S45、计算待插值像素点的像素值。
具体的,本实施例中采用双线性插值算法来计算待插值像素点的像素值。具体可如下公式(36)所示:
P o=L C(L AP A+L BP B)+L F(L DP D+L EP E)       公式(36)
需要说明的是,本实施例中未示出或未描述的内容可参见前述第一个实施例中的相关阐述,这里不再赘述。
第五个实施例:基于三次卷积插值算法下CCP格式的源图像转换为ERP格式的目标图像
本实施例中,图像插值方法可包括如下实施步骤:
S51、对于目标图像中的任意待插值像素点,在源图像中为所述待插值像素点选取m个参考像素点。
对于某些特定的图像插值算法而言,通常该图像插值算法对应决定选取的参考像素点也是默认的。例如,针对双线性插值算法,通常选取第一坐标位置周边的2*2个像素点作为参考像素点。针对三次卷积插值算法而言,通常选取第一坐标位置周边的4*4个像素点作为参考像素点。其中,所述第一坐标位置为待插值像素点对应在源图像中的坐标位置。
本实施例中,以三次卷积插值算法为例,如图10B示出参考像素点选取的示意图。具体的,在第一坐标位置的周围选取4*4个像素点,作为参考像素点(φ ijij)或者(x ij,y ij)。其中,i=1,2,3,4。j=1,2,3,4。
S52、计算第一坐标位置和每个参考像素点的坐标位置分别沿经度方向上和纬度方向上的球面距离,所述第一坐标位置为待插值像素点对应在源图像中的坐标位置。
S53、计算经度方向上和纬度方向上的单位距离。
S54、计算每个参考像素点对待插值像素点的插值权重。
S55、计算待插值像素点的像素值。
其中,S54中由于本实施例中采用的图像插值算法为三次卷积插值算法,下面示例性给出基于三次卷积插值算法计算参考像素点(φ ijij)分别沿经度方向上和纬度方向上对待插值像素点的权重分量(
Figure PCTCN2019085787-appb-000071
以及
Figure PCTCN2019085787-appb-000072
)。具体如公式(37)和(38)所示:
Figure PCTCN2019085787-appb-000073
Figure PCTCN2019085787-appb-000074
其中,α是三次卷积插值算法中的参数,该参数为用户侧或系统侧自定义设置的常数。
相应地,基于
Figure PCTCN2019085787-appb-000075
以及
Figure PCTCN2019085787-appb-000076
可计算获得参考像素点对待插值像素点的插值权重L(φ ij,λ ij)。需要说明的是,本申请中关于S41-S45具体可对应参见前述S11-S15所述实施例中的相关介绍,这里不再赘述。
通过实施本发明实施例,能够解决现有技术中采用平面图像插值算法来对曲面图像进行处理导致图像插值的性能及效率下降等问题,从而可有效提高非平面图像插值的性能和效率。
上述主要从终端设备的角度出发对本发明实施例提供的方案进行了介绍。可以理解的是,终端设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。结合本发明中所公开的实施例描述的各示例的单元及算法步骤,本发明实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以对每个特定的应用来使用不同的方法来实现所描述的功能,但是这种实现不应认为超出本发明实施例的技术方案的范围。
本发明实施例可以根据上述方法示例对终端设备进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个处理单元中。 上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本发明实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用集成的单元的情况下,图11A示出了上述实施例中所涉及的终端设备的一种可能的结构示意图。终端设备700包括:处理单元702和通信单元703。处理单元702用于对终端设备700的动作进行控制管理。示例性地,处理单元702用于支持网络设备700执行图2中步骤S102-S108,图5中步骤S202-S212,图7中步骤S11-S15,和/或用于执行本文所描述的技术的其它步骤。通信单元703用于支持终端设备700与其它设备的通信,例如,通信单元703用于支持终端设备700从网络设备中获取源图像,和/或用于执行本文所描述的技术的其它步骤。可选的,终端设备700还可以包括存储单元701,用于存储终端设备700的程序代码和数据。
The processing unit 702 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various example logical blocks, modules, and circuits described with reference to the disclosure of the present invention. Alternatively, the processor may be a combination that implements a computing function, for example, a combination including one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 703 may be a communication interface, a transceiver, a transceiver circuit, or the like, where the communication interface is a collective term and may include one or more interfaces, for example, an interface between a network device and another device. The storage unit 701 may be a memory.

Optionally, the terminal device 700 may further include a display unit (not shown in the figure). The display unit may be configured to preview or display an image, for example, to display the target image or the source image. In actual applications, the display unit may be a display, a player, or the like, which is not limited in this application.

When the processing unit 702 is a processor, the communication unit 703 is a communication interface, and the storage unit 701 is a memory, the terminal device in the embodiments of the present invention may be the terminal device shown in FIG. 11B.
As shown in FIG. 11B, the terminal device 710 includes a processor 712, a communication interface 713, and a memory 711. Optionally, the terminal device 710 may further include a bus 714. The communication interface 713, the processor 712, and the memory 711 may be connected to each other by the bus 714. The bus 714 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 714 may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one bold line is used in FIG. 11B, but this does not mean that there is only one bus or only one type of bus.
For specific implementations of the terminal device shown in FIG. 11A or FIG. 11B, reference may also be made to the corresponding descriptions of the foregoing method embodiments; details are not repeated here.

The steps of the methods or algorithms described with reference to the disclosure of the embodiments of the present invention may be implemented by hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor, so that the processor can read information from and write information to the storage medium. Certainly, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC. In addition, the ASIC may be located in a network device. Certainly, the processor and the storage medium may also exist in the terminal device as discrete components.

A person of ordinary skill in the art may understand that all or some of the processes of the methods in the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the foregoing method embodiments may be included. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Claims (10)

  1. An image interpolation method, wherein the method comprises:
    determining, according to a coordinate position of a to-be-interpolated pixel point in a target image, a first coordinate position, in a source image, that corresponds to the to-be-interpolated pixel point, wherein the source image is a to-be-converted curved-surface image, or a to-be-converted planar image having a spherical image format;
    determining m reference pixel points according to the first coordinate position, wherein the m reference pixel points are located in the source image, and m is a positive integer;
    determining, according to spherical distances between the respective coordinate positions of the m reference pixel points and the first coordinate position, respective interpolation weights of the m reference pixel points for the to-be-interpolated pixel point; and
    determining a pixel value of the to-be-interpolated pixel point according to respective pixel values of the m reference pixel points and the respective interpolation weights of the m reference pixel points for the to-be-interpolated pixel point, so as to obtain the target image.
  2. The method according to claim 1, wherein the source image is a planar image having a spherical image format, and the spherical image formats respectively corresponding to the source image and the target image are different, and/or the image resolutions respectively corresponding to the source image and the target image are different.
  3. The method according to claim 1 or 2, wherein the m reference pixel points are obtained by sampling around the first coordinate position along the longitude direction and/or the latitude direction, wherein some of the m reference pixel points have a same vertical coordinate or a same latitude coordinate, and/or some of the m reference pixel points have a same horizontal coordinate or a same longitude coordinate.
  4. The method according to claim 3, wherein the source image is a planar image having a spherical image format, and the first coordinate position is a position of a point formed by a horizontal coordinate and a vertical coordinate in a planar coordinate system;
    the longitude direction is determined according to a position mapping relationship between a geographic coordinate system and the planar coordinate system of the source image, and along the longitude direction the latitude value corresponding to a coordinate position of the source image remains unchanged; the latitude direction is determined according to the position mapping relationship between the geographic coordinate system and the planar coordinate system of the source image, and along the latitude direction the longitude value corresponding to a coordinate position of the source image remains unchanged.
  5. The method according to any one of claims 1 to 4, wherein, for the spherical distance between the coordinate position of any one of the m reference pixel points and the first coordinate position, the spherical distance comprises a first spherical distance and a second spherical distance, the first spherical distance being the spherical distance between the coordinate position of the any reference pixel point and the first coordinate position in the longitude direction, and the second spherical distance being the spherical distance between the coordinate position of the any reference pixel point and the first coordinate position in the latitude direction;
    the determining, according to the spherical distances between the respective coordinate positions of the m reference pixel points and the first coordinate position, respective interpolation weights of the m reference pixel points for the to-be-interpolated pixel point comprises:
    determining a unit distance, wherein the unit distance comprises a first unit distance and a second unit distance; the first unit distance is the distance in the longitude direction between a first reference pixel point and a second reference pixel point, the first reference pixel point and the second reference pixel point being the two reference pixel points, among the m reference pixel points, that are closest in the longitude direction to the longitude coordinate corresponding to the first coordinate position; the second unit distance is the distance in the latitude direction between a third reference pixel point and a fourth reference pixel point, the third reference pixel point and the fourth reference pixel point being the two reference pixel points, among the m reference pixel points, that are closest in the latitude direction to the latitude coordinate corresponding to the first coordinate position; and
    determining, according to the unit distance and the spherical distances between the respective coordinate positions of the m reference pixel points and the first coordinate position, the respective interpolation weights of the m reference pixel points for the to-be-interpolated pixel point.
  6. The method according to claim 5, wherein the determining, according to the unit distance and the spherical distances between the respective coordinate positions of the m reference pixel points and the first coordinate position, the respective interpolation weights of the m reference pixel points for the to-be-interpolated pixel point comprises:
    determining, according to the first unit distance and the first spherical distances between the respective coordinate positions of the m reference pixel points and the first coordinate position, respective first weight components of the m reference pixel points for the to-be-interpolated pixel point;
    determining, according to the second unit distance and the second spherical distances between the respective coordinate positions of the m reference pixel points and the first coordinate position, respective second weight components of the m reference pixel points for the to-be-interpolated pixel point; and
    determining, according to the respective first weight components of the m reference pixel points for the to-be-interpolated pixel point and the respective second weight components of the m reference pixel points for the to-be-interpolated pixel point, the respective interpolation weights of the m reference pixel points for the to-be-interpolated pixel point.
  7. The method according to any one of claims 3 to 6, wherein the longitude direction is the direction in which the longitude coordinate value changes fastest, and the latitude direction is the direction in which the latitude coordinate value changes fastest.
  8. The method according to any one of claims 1 to 7, wherein the coordinate position is a position of a point formed by a horizontal coordinate and a vertical coordinate in the planar coordinate system of the planar image, or a position of a point formed by a longitude coordinate and a latitude coordinate in the geographic coordinate system of the curved-surface image.
  9. A terminal device, comprising a memory and a processor coupled to the memory, wherein the memory is configured to store instructions and the processor is configured to execute the instructions, and wherein, when executing the instructions, the processor performs the method according to any one of claims 1 to 8.
  10. A computer-readable storage medium storing a computer program, wherein, when the computer program is executed by a processor, the method according to any one of claims 1 to 8 is implemented.
PCT/CN2019/085787 2018-05-07 2019-05-07 Image processing method, related device, and computer storage medium WO2019214594A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/090,394 US11416965B2 (en) 2018-05-07 2020-11-05 Image processing method, related device, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810431381.7A CN110458755B (zh) 2018-05-07 2018-05-07 Image processing method, related device, and computer storage medium
CN201810431381.7 2018-05-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/090,394 Continuation US11416965B2 (en) 2018-05-07 2020-11-05 Image processing method, related device, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2019214594A1 true WO2019214594A1 (zh) 2019-11-14

Family

ID=68468459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/085787 WO2019214594A1 (zh) 2018-05-07 2019-05-07 Image processing method, related device, and computer storage medium

Country Status (3)

Country Link
US (1) US11416965B2 (zh)
CN (1) CN110458755B (zh)
WO (1) WO2019214594A1 (zh)


Also Published As

Publication number Publication date
US11416965B2 (en) 2022-08-16
US20210049735A1 (en) 2021-02-18
CN110458755B (zh) 2023-01-13
CN110458755A (zh) 2019-11-15

