WO2019227958A1 - Image processing method, device, and virtual reality display apparatus - Google Patents

Image processing method, device, and virtual reality display apparatus

Info

Publication number
WO2019227958A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
distortion
pixels
lens unit
size
Prior art date
Application number
PCT/CN2019/073452
Other languages
English (en)
French (fr)
Inventor
楚明磊
张浩
陈丽莉
王晨如
刘亚丽
孙玉坤
闫桂新
马占山
郭子强
董泽华
刘炳鑫
Original Assignee
京东方科技集团股份有限公司
北京京东方光电科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司, 北京京东方光电科技有限公司
Priority to EP19812362.2A (published as EP3806029A4)
Priority to US16/494,588 (published as US11308588B2)
Publication of WO2019227958A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/02Viewing or reading apparatus
    • G02B27/022Viewing apparatus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to an image processing method, device, and virtual reality display device for performing anti-distortion processing on an image.
  • VR (virtual reality)
  • an image processing method for an electronic device including a lens unit comprising: determining an image size to be buffered according to a size of an input image and an anti-distortion parameter of the lens unit; and generating an anti-distortion cache image having the image size based on the input image and the anti-distortion parameters.
  • generating an anti-distortion cache image having the image size includes: selecting a row of pixels from the input image and performing an anti-distortion operation on the row of pixels, where the anti-distortion operation includes determining, according to the anti-distortion parameters, pixel data of an anti-distortion image corresponding to the row of pixels, and writing the pixel data of the anti-distortion image into the anti-distortion buffer image.
  • selecting a row of pixels from the input image and performing an anti-distortion operation on the row of pixels includes performing an anti-distortion operation on each row of pixels in the input image line by line.
  • the anti-distortion parameter includes an object-image relationship between the heights of pixels of the anti-distortion image and the heights of corresponding pixels in a virtual image formed by the anti-distortion image through the lens unit.
  • the object image relationship is determined based on the optical parameters of the lens unit.
  • the optical parameters of the lens unit include a focal length of the lens unit, a distance between a display position of the input image and the lens unit, and a distance between a user's viewing position and the lens unit.
  • the heights of the plurality of pixel points in the anti-distortion image are the respective distances from the plurality of pixel points in the anti-distortion image to the mapping point of the lens center on the input image, and the heights of the corresponding pixels in the virtual image are the respective distances from the corresponding pixels in the virtual image to the mapping point of the lens center in the virtual image.
  • determining the image size to be buffered according to the size of the input image and the anti-distortion parameters of the lens unit includes: determining, based on the anti-distortion parameters, an anti-distortion grid for the input image and the coordinate values of the four vertices of the anti-distortion image on the anti-distortion grid with the center of the input image as the origin; determining the absolute values of the coordinate values of the four vertices in the column direction; determining the minimum absolute value Y among those absolute values; and determining the size of the image to be buffered according to the minimum absolute value Y.
  • when the size of the input image is W * H, the size of the image to be buffered in the row direction is W and the size in the column direction is k * H * (1 - Y) + 1, where k is a real number greater than or equal to 1.
  • determining pixel data of an anti-distortion image corresponding to the row of pixels according to the anti-distortion parameters, and writing the pixel data of the anti-distortion image to the anti-distortion cache image includes: for each pixel point in each row of pixel data in the input image, determining a vector from the mapping point of the lens center on the input image to the pixel point; determining the size of the virtual image based on the object-image relationship and the size of the input image; determining an image height of the corresponding pixel point of the pixel point in the virtual image according to the vector and the size of the virtual image; determining, based on the object-image relationship, the object height of the corresponding pixel point of the pixel point in the anti-distortion image according to the image height of the corresponding pixel point in the virtual image; and writing the pixel data of the pixel point into the cache image according to the object height of the corresponding pixel point in the anti-distortion image.
  • writing the pixel data of the pixel point to the anti-distortion cache image according to the object height of the corresponding pixel point in the anti-distortion image includes: determining, according to the object height of the corresponding pixel point in the anti-distortion image, the corresponding pixel point in the cache image, and storing the grayscale value of the pixel point into the corresponding pixel point in the anti-distortion cache image.
  • determining a vector from the mapping point of the lens center on the input image to each pixel point includes: determining the distance and direction from the mapping point of the lens center on the input image to the pixel point.
  • the image processing method further includes: outputting the first row of pixel data of the anti-distortion cache image for display, clearing the displayed pixel data, and moving the non-displayed rows of pixel data up by one line in the anti-distortion cache image.
  • the image processing method further includes: after performing an anti-distortion operation on all pixels of the input image, outputting the remaining image data in the anti-distortion buffer image line by line for display.
  • still another image processing apparatus includes: a processor; and a first memory, wherein the first memory stores instructions that, when executed by the processor, cause the processor to execute the image processing method described above; the image processing device further includes a second memory for storing a part of the anti-distortion image.
  • a virtual reality display device including: a sensor configured to collect sensor data for determining a current state of the virtual reality display device; a display unit configured to receive an input image determined based on the sensor data, to perform anti-distortion processing on the input image to generate pixel data of an anti-distortion image, and to perform display driving based on the pixel data of the anti-distortion image, wherein the display unit includes the image processing apparatus described above; and a lens unit configured to image an image driven and displayed by the display unit.
  • the sensor includes one or more of a speed sensor, an acceleration sensor, a geomagnetic sensor, a touch sensor, and a distance sensor.
  • the display unit includes one or more of a central processing unit, a graphics processing unit, an FPGA, an ASIC, and a CPLD.
  • the lens unit includes one or more of a convex lens, a Fresnel lens, and a concave lens.
  • FIG. 1 shows a schematic block diagram of an image processing system 100 according to the prior art
  • FIG. 2A is a schematic diagram illustrating a process in which a user views an image displayed on a display unit through a lens unit;
  • FIG. 2B shows a difference between a virtual image and an original display image when viewed from the front;
  • FIG. 2C shows an effect diagram of performing an anti-distortion process on an image;
  • FIG. 3 shows a schematic block diagram of an image processing system according to an embodiment of the present disclosure
  • FIG. 4 shows a schematic block diagram of a display unit according to an embodiment of the present disclosure
  • FIG. 5 illustrates a flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of a function of an object image relationship of an exemplary lens unit according to an embodiment of the present disclosure.
  • FIG. 7 illustrates a schematic diagram of an anti-distortion grid according to an embodiment of the present disclosure
  • FIG. 8 shows a schematic diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 1 shows a schematic block diagram of an image processing system 100 according to the related art.
  • the image processing system 100 may include a display module 110 and an image processing module 120.
  • the display module 110 is configured to detect the current posture or motion of the device and the user, and send the posture data and / or motion data to the image processing module 120 for image rendering.
  • the image processing module 120 is configured to receive posture data and/or motion data from the display module 110, render an image to be displayed to a user for viewing according to the received posture data and/or motion data, and send the processed image to the display module 110 for display.
  • the display module 110 may include a sensor unit 111, a display unit 112, and a lens unit 113.
  • the display unit 112 is configured to display an image to be displayed to a user for viewing.
  • the display unit 112 may further include a display driving unit and a display screen.
  • the display driving unit may be an integrated circuit module for driving a display screen according to the received image data, and displaying an image corresponding to the image data on the display screen.
  • the display screen here can be any type of display screen, such as LED display, OLED display, and so on.
  • the lens unit 113 may be used to image an image displayed on a display screen to adjust a display position of the image, thereby facilitating observation by a user.
  • the display screen is generally set close to the user's eyes, and the user cannot see the image displayed on the display screen by directly viewing it.
  • the lens unit 113 is used for imaging the display screen (and the image displayed on the display screen), and the position of the image formed by the image displayed on the display screen after passing through the lens unit 113 will fall in a comfortable area focused by the user's eyes. For example, at a distance suitable for viewing.
  • the display module 110 may further include a wearing part 114.
  • the wearing part 114 is used to assist a user to fix the display module 110 to be suitable for viewing.
  • the image processing module 120 is configured to perform data processing on an image to be displayed for viewing by a user. As shown in FIG. 1, the image processing module 120 may include a data processing unit 121, an image rendering unit 122, and an anti-distortion processing unit 123.
  • the data processing unit 121 may be configured to process the sensor data collected and transmitted by the sensor unit 111, and determine the current state of the display module 110, such as the current posture or movement of the device and the user, according to the received sensor data.
  • the image rendering unit 122 may be configured to perform rendering processing on an image to be displayed to a user for viewing according to the current state of the display module 110 determined by the sensor data.
  • the anti-distortion processing unit 123 may be configured to perform anti-distortion processing on the image after rendering processing.
  • the image formed through the lens unit 113 of the display screen (and of the image displayed on the display screen) may be distorted, for example, due to the optical parameters of the lens.
  • Optical parameters of the lens unit 113 include a focal length of the lens unit 113, a distance between the display screen and the lens unit 113, and a distance between a user's viewing position and the lens unit 113, and the like.
  • FIG. 2A is a schematic diagram illustrating a process in which a user views an image displayed by the display unit 112 through the lens unit 113.
  • the lens unit 113 is shown in FIG. 2A. It should be noted that the lens unit 113 is shown only schematically in FIG. 2A, and the shape of the figure in FIG. 2A does not constitute a limitation on the shape and properties of the lens in the lens unit 113.
  • the lens unit 113 may be a single lens or a lens group composed of a plurality of lenses and optical elements.
  • the lens unit 113 may include a convex lens, a Fresnel lens, a concave lens, and any combination thereof.
  • the lens unit 113 may further include other commonly used optical imaging elements such as a filter, an aperture, and a grating. After the image displayed by the display unit 112 through the display screen is imaged by the lens unit 113, a virtual image 112' of the image will be formed at a position far from the user's eyes.
  • Fig. 2B shows the difference between the virtual image 112' and the original display image 112 as viewed by the user from the front.
  • pincushion distortion occurs in an image viewed by a user and imaged by the lens unit 113. The farther from the center of the image, the greater the degree of distortion of the image.
  • image distortion generated by lens imaging is related to the inherent properties of the lens. When the same lens unit is used for imaging, the distortion degree of different images is the same.
  • the pincushion distortion shown in FIG. 2B is only schematic, and the distortion may also include barrel distortion and / or linear distortion and other types of distortion.
  • one method is to perform an anti-distortion process on the image to be used for display before display.
  • the principle of the anti-distortion process is that, in consideration of the distortion effect of the lens unit, the image to be displayed is deformed in advance, and the deformation can cancel the distortion effect caused by the inherent characteristics of the lens unit.
  • FIG. 2C shows an effect diagram of performing an anti-distortion process on an image.
  • the image to be displayed may be processed into a barrel-shaped anti-distortion image in advance, and the barrel-shaped anti-distortion image may be displayed on a display screen.
  • when the user views the image formed by the anti-distortion image passing through the lens unit 113, he or she will see an image without distortion (or with distortion below a certain threshold), because the deformation applied to the anti-distortion image for display cancels the distortion effect of the lens unit 113 shown in FIG. 2B.
  • as the resolution of the display screen becomes higher and higher, the resolution of the displayed image also continues to increase, resulting in a continuous increase in the amount of image data and a heavy data burden on the image processing equipment.
  • the present disclosure provides an image processing method, device, and system.
  • FIG. 5 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in FIG. 5, the image processing method 500 may include the following steps.
  • in step S501, a size of an image to be buffered is determined according to a size of an input image and an anti-distortion parameter of a lens unit.
  • the anti-distortion parameters of the lens unit may be determined by the properties of the lens unit itself. According to the principle of optical imaging, the lens unit will generate distortion during the imaging process, that is, image distortion, and the distortion is inevitable.
  • the anti-distortion parameter may be a parameter related to the image distortion, such as an object between the heights of multiple pixels of the anti-distortion image and the heights of corresponding pixels in the virtual image formed by the anti-distortion image via the lens unit. Like relationship.
  • the input image may be a rendered image for viewing by a user.
  • the resolution of the input image may be the same as that of the display screen, and the resolution may be expressed as the number of pixels the image includes in the row direction and the column direction. For example, when the resolution of the display screen is W * H, that is, the display screen contains W * H pixels, the resolution of the input image may also be W * H.
  • in step S502, an anti-distortion cache image having the image size is generated based on the input image and the anti-distortion parameters according to the image size to be buffered, and the anti-distortion cache image serves as the anti-distortion-processed image used for display.
  • generating an anti-distortion cache image having the image size includes selecting a row of pixels from the input image and performing an anti-distortion operation on that row of pixels.
  • the anti-distortion operation may include: determining pixel data of an anti-distortion image corresponding to the pixels of the row according to an anti-distortion parameter, and writing the pixel data of the anti-distortion image into a cache image.
  • selecting a row of pixels from the input image and performing an anti-distortion operation on the row of pixels may include performing an anti-distortion operation on each row of pixels in the input image row by row.
  • an anti-distortion operation may also be performed on a part of pixels in the input image. For example, according to the principle of optical imaging, an object point closer to the optical axis of the lens unit will have less distortion during imaging, and an object point further away from the optical axis of the lens unit will have greater distortion during imaging.
  • One or more rows of pixels in the input image far from the optical axis of the lens unit may be selected to perform an anti-distortion operation, and the rows of pixels in the input image near the optical axis may not be subjected to an anti-distortion operation.
  • the anti-distortion parameter may include an object image relationship between the heights of the multiple pixel points of the anti-distortion image and the corresponding multiple pixel points in the virtual image formed by the anti-distortion image via the lens unit.
  • the object image relationship may be determined based on the optical parameters of the lens unit.
  • the optical parameters of the lens unit include the focal length of the lens unit, the distance between the input image and the lens unit, and the distance between the user's viewing position and the lens unit.
  • the object-image relationship may be determined by a measurement method.
  • the "object" of the lens unit may be the image displayed on the display screen, that is, the anti-distortion image, and the distance between each pixel point on the anti-distortion image and the mapping point of the lens center on the display screen can be regarded as the "object height";
  • the distance from any point on the virtual image formed by the lens unit to the mapping point of the lens center on the virtual image can be regarded as the "image height".
  • in other words, the height of a pixel point in an image refers to the distance between that pixel point and the mapping point of the lens center on that image.
  • the input image may pass through a lens unit to form a first virtual image, where the first virtual image is an image including distortion.
  • the distortion is related to the parameters of the lens unit and the size (eg, the number of pixels and the pixel size) of the input image.
  • each pixel point in the input image corresponds to a pixel point in the first virtual image, and a function based on the object heights of the pixel points in the input image and the image heights of the corresponding pixel points in the first virtual image can characterize the object-image relationship of the lens unit.
  • the function of the object image relationship is related to the optical parameters of the lens unit and the distance of the input image from the lens unit.
  • a test method may be used to determine the function of the object-image relationship of the lens unit. For example, for a given object distance, that is, the perpendicular distance from an object point to the lens unit along the optical axis, a series of object points at that object distance can be selected, each object point having an object height (for example, y_m), with different object points having different height values. Then, the image height (for example, an image height value of x_m1) of the image point formed through the lens unit by an object point (for example, with an object height value of y_m1) is measured.
  • a data point (y_m1, x_m1) composed of the object height value and the corresponding image height value can thus be obtained.
  • other methods may also be used to determine the object image relationship of the lens unit.
  • the present disclosure does not limit the specific method for determining the object-image relationship of the lens unit.
  • when an image displayed on the display screen, for example an input image or an anti-distortion image, is imaged using the lens unit, and the display screen is placed at the object distance at which the object-image relationship was measured, then, since the parameters of the lens unit have not changed, the image points formed through the lens unit by the pixel points displayed on the display screen will also conform to the function of the object-image relationship obtained by the above measurement. In other words, the object heights of the pixel points displayed on the display screen and the image heights of the corresponding image points satisfy the object-image relationship.
  • a function fitting tool can be used to perform a function fitting on the set of measured data points composed of the object heights y_m and the image heights x_m, so as to obtain the object-image relationship curve between the image height x_m and the object height y_m and the expression of the curve's fitting function.
  • the present disclosure does not limit the fitting tool used to perform the function fitting.
  • FIG. 6 is a schematic diagram showing a function of an object image relationship of an exemplary lens unit according to an embodiment of the present disclosure.
  • a set of data points on object height and image height (y_m, x_m) can be obtained through measurement, for example the set of data points shown in FIG. 6, where the image height values of the data points shown in FIG. 6 are based on the measurement.
  • Function fitting is performed on the discrete data points in FIG. 6 using a fitting tool to obtain a function closest to the curve formed by the discrete data points.
  • the function can characterize the numerical relationship between object height and image height. For example, the function between the object height (y_m) and the image height (x_m) corresponding to the lens unit can be obtained by fitting.
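  • As an illustrative sketch only, an object-image curve of this kind could be fitted from measured (y_m, x_m) data points with a generic polynomial model; the numerical values, the cubic degree, and the use of numpy below are hypothetical assumptions rather than values from the original disclosure.

```python
import numpy as np

# Hypothetical measured data: object heights y_m on the display screen and the
# corresponding image heights x_m of the virtual image (units are arbitrary).
y_m = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
x_m = np.array([0.0, 26.0, 53.5, 83.0, 115.5, 152.0])

# Fit image height as a function of object height, x = f(y).
f = np.poly1d(np.polyfit(y_m, x_m, deg=3))

# Fit the inverse relationship y = g(x); the anti-distortion step uses this to
# map an ideal (undistorted) image height back to an object height on the screen.
g = np.poly1d(np.polyfit(x_m, y_m, deg=3))

print(f(12.0))   # predicted image height for an object height of 12.0
print(g(100.0))  # predicted object height for an image height of 100.0
```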
  • the value of the anti-distortion grid on the display screen can be determined.
  • the process of determining the values of the anti-distortion grid on the display screen may include the following steps: first, the first virtual image formed by the input image through the lens unit may be calculated according to the imaging parameters of the lens unit, where the first virtual image is obtained through calculation, that is, it does not contain distortion; in other words, the first virtual image is the ideal image that is expected to be displayed without distortion.
  • the imaging parameter may include a magnification of the lens unit, a focal length f, an object distance of an input image, and the like.
  • Table 1 shows partial data of an exemplary anti-distortion grid according to an embodiment of the present disclosure in a coordinate system with the center of the display screen as the origin, where x* and y* respectively represent the normalized abscissa and ordinate values of the corresponding points on the anti-distortion grid, and the abscissa and ordinate values of the corresponding points in Table 1 are calculated according to the coordinate system of the anti-distortion grid.
  • the coordinate system of the anti-distortion grid described above is shown in FIG. 7, where the origin of the coordinate system is the center point of the input image.
  • the coordinate length from the origin to the intersection of the coordinate axes x, y with the boundary of the input image is defined as 1.
  • step S502 may further include determining a size of the cache image.
  • for example, the resolution of the input image may be W * H, the resolution of the cached image in the row direction is W, and the resolution in the column direction may be k * H * (1 - Y) + 1, where k is a real number greater than or equal to 1.
  • for example, when k is 1, the size of the cached image in the column direction is H * (1 - Y) + 1, and when k is 2, it is 2 * H * (1 - Y) + 1.
  • k can be any real number greater than or equal to 1.
  • k can also take values of 1.5, 2.8, and so on. The above example does not constitute a restriction on the value of k.
  • FIG. 7 also schematically illustrates the size of a cached image.
  • the row direction of the image is the x-axis
  • the column direction is the y-axis.
  • the directions of the x and y axes can be set as shown in FIG. 7, or can be set in any other possible form, for example with the x-axis direction from left to right and the y-axis direction from top to bottom.
  • the input image is a rectangle composed of four vertices A, B, C, and D; it has W pixels in the x direction and H pixels in the y direction, that is, the resolution of the input image is W * H.
  • the size of the cache image can be determined by comparing the coordinate positions of the four vertices of the anti-distortion grid.
  • the distances of the four vertices of the anti-distortion mesh to the corresponding vertices of the input image can be compared to determine the degree of deformation of the four vertices of the anti-distortion mesh.
  • the size of the cached image may be determined by the coordinates of the vertex that is farthest from the corresponding vertex of the input image. For example, the coordinates A'(x1, y1), B'(x2, y2), C'(x3, y3), and D'(x4, y4) of the four vertices of the anti-distortion grid are determined, and the maximum value X of the abscissa and the minimum value Y of the ordinate of the four vertices can be calculated.
  • the width of the cached image may be set to be the same as the width of the display screen, and the height may be set to be at least greater than the height of the line of image data with the most distortion. Therefore, the minimum height of the cached image can be expressed as k * H * (1 - Y) + 1, where k is a real number greater than or equal to 1, and the minimum height represents the minimum number of pixel rows that can be stored in the cached image.
  • the size of the cache image indicates the number of rows and columns of the pixel data of the cache image stored in the anti-distortion processing unit 2122.
  • the number of pixels in the cache image in the horizontal direction is the same as the number of pixels in the horizontal direction of the display image, and the number of pixels in the vertical direction of the cached image is determined by the vertical coordinate minimum value Y of the four vertices.
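  • As a minimal sketch of the cache-size calculation described above, assuming the normalized coordinates of the four anti-distortion-grid vertices are already known in the coordinate system of FIG. 7 (the vertex values and resolutions below are hypothetical):

```python
import math

def cache_size(W, H, vertices, k=1.0):
    """Return (width, height) of the anti-distortion cache image.

    W, H     -- resolution of the input image in pixels
    vertices -- normalized (x, y) coordinates of the four grid vertices
                A', B', C', D' (origin at the center of the input image)
    k        -- real number >= 1 controlling the extra margin
    """
    # Minimum absolute value Y of the column-direction (ordinate) coordinates.
    Y = min(abs(y) for _, y in vertices)
    # Row direction keeps the full width; the column direction only needs enough
    # rows for the most deformed line: k * H * (1 - Y) + 1.
    return W, math.ceil(k * H * (1 - Y)) + 1

# Hypothetical vertices of a barrel-shaped anti-distortion image.
verts = [(-0.82, 0.82), (0.82, 0.82), (0.82, -0.82), (-0.82, -0.82)]
print(cache_size(W=1440, H=1600, vertices=verts, k=2.0))  # (1440, 577)
```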
  • step S503 may further include: for each pixel point in each row of pixel data in the input image, determining a vector from the mapping point of the lens center on the input image to the pixel point; determining the size of the first virtual image based on the object-image relationship and the size of the input image; determining the image height of the corresponding pixel point of the pixel point in the virtual image according to the vector and the size of the virtual image; determining, based on the object-image relationship, the object height of the corresponding pixel point in the anti-distortion image according to the image height of the corresponding pixel point in the virtual image; and writing the pixel data of the pixel point into the cache image according to the object height of the corresponding pixel point in the anti-distortion image.
  • Determining the vector from the mapping point of the lens center on the input image to the pixel point includes determining the distance and direction of the mapping point of the lens center on the input image to the pixel point on the input image.
  • the size of the virtual image formed by the anti-distortion image via the lens unit may be determined by the size of the input image and the anti-distortion parameters of the lens unit.
  • the length and width of the input image can be used as the object height, and the length and width of the virtual image can be determined based on the function of the object image relationship for the lens unit.
  • the length and width of the input image can be represented by the number of pixels of the input image in the length and width directions.
  • the actual size of the display screen's length and width can also be used as the length and width of the input image. In this case, there is a corresponding mapping relationship between the actual size of the display screen and the number of pixels of the input image.
  • for example, by determining the actual size of the pixels on the display screen, the conversion between the actual size of the display screen and the number of pixels of the input image can be determined. The calculation in the following description is based on the case where the number of pixels of the input image in the length and width directions represents the length and width of the input image.
  • the width of the virtual image can be W_0 and the height can be H_0; the width can be divided into W parts and the height into H parts, where W refers to the resolution of the input image in the row direction, that is, the number of pixels in the row direction, and H refers to the resolution of the input image in the column direction, that is, the number of pixels in the column direction.
  • the coordinates of the lens center on the input image, the anti-distortion image, and the virtual image of the anti-distortion image may be determined.
  • the center of the lens should coincide with the center of the display screen of the display device. That is, the center of the lens coincides with the center of the input image, the center of the anti-distortion image, and the center of the virtual image of the anti-distortion image.
  • the coordinates of the lens center can be expressed as (0,0).
  • in a coordinate system that uses the upper left corner of the input image, such as point A in FIG. 7, as the origin, the coordinates of the lens center PCI can be expressed as (W / 2, H / 2).
  • PCI is used to indicate the position of the lens center in the input image.
  • the lens center may be offset from the center of the display.
  • the coordinates of the lens center in the coordinate system of the anti-distortion grid shown in FIG. 7 may be expressed as (x_0, y_0), where x_0 and y_0 may be non-zero.
  • in the coordinate system with the upper left corner of the input image as the origin of coordinates and the length and width directions of the input image as the x and y axes, respectively, the coordinates of the lens center can then be expressed as PCI((1 + x_0) * W / 2, (1 + y_0) * H / 2).
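  • As a small hypothetical worked check of this mapping: with W = 1440, H = 1600, and a normalized offset (x_0, y_0) = (0.05, 0.0), the lens-center mapping point is PCI((1 + 0.05) * 1440 / 2, (1 + 0.0) * 1600 / 2) = (756, 800).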
  • the mapping points of the lens center on the input image, the anti-distortion image, and the virtual image of the anti-distortion image are coincident along the optical axis of the lens unit.
  • the anti-distortion processing unit receives one line of image data of the input image for processing at a time. For example, when processing the i-th row of the image, the anti-distortion processing unit receives a total of W pixel data. The processing can be done by calculating the distance between any pixel in this row of the input image and the mapping point of the lens center on the input image. For example, the coordinates of the pixel center of any pixel in this row can be calculated in the coordinate system with the upper left corner of the input image as the origin of coordinates and the straight lines along the long and wide sides of the input image as the x and y axes.
  • since the values of H and W are large relative to 1/2, the pixel center coordinates of the pixel point p{j, i} can be reduced to (j, i), and vec_p can be reduced to (j - W/2, i - H/2).
  • the image height of the corresponding pixel point p″{j, i} on the virtual image can be calculated.
  • the x and y components of the object height corresponding to the point p′{j, i} on the anti-distortion image refer to the components, in the x and y directions, of the distance from that pixel to the mapping point of the lens center on the anti-distortion image.
  • the components y_px and y_py in the x and y directions of the calculated object height y_p can also represent the position of the corresponding pixel p′{j, i} in the anti-distortion image: y_px represents the corresponding column number of p′{j, i} in the anti-distortion image, and y_py represents the corresponding row number of p′{j, i} in the anti-distortion image.
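  • The per-pixel mapping described above could be sketched as follows; this is an illustration only, in which img_to_obj stands for the fitted inverse object-image function (object height as a function of image height), W0 x H0 is the calculated virtual-image size, and x0, y0 are the normalized lens-center offsets. All of these names and values are assumed inputs, not values from the original disclosure.

```python
import numpy as np

def antidistort_pixel(j, i, W, H, W0, H0, img_to_obj, x0=0.0, y0=0.0):
    """Map input-image pixel (column j, row i) to (y_px, y_py), the column and
    row of the corresponding pixel p' in the anti-distortion image."""
    # Mapping point of the lens center on the input image.
    cx, cy = (1 + x0) * W / 2, (1 + y0) * H / 2
    # Vector from the lens-center mapping point to the pixel.
    vec = np.array([j - cx, i - cy])
    dist = np.linalg.norm(vec)
    if dist == 0.0:
        return cx, cy  # the center pixel is not displaced
    # Image height of the corresponding point in the ideal (calculated) virtual
    # image: the input image is scaled up to the virtual-image size W0 x H0.
    img_h = np.linalg.norm(vec * np.array([W0 / W, H0 / H]))
    # Object height on the screen, from the fitted object-image relationship.
    obj_h = img_to_obj(img_h)
    # Decompose the object height along the original direction to obtain the
    # column (y_px) and row (y_py) in the anti-distortion image.
    y_px = cx + obj_h * vec[0] / dist
    y_py = cy + obj_h * vec[1] / dist
    return y_px, y_py
```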
  • an input image ABCD and a barrel-shaped anti-distortion image A'B'C'D' corresponding to the input image are shown in FIG. 7.
  • as mentioned before, in the related art, it is necessary to store the complete anti-distortion image A'B'C'D'.
  • since the size of the cache image is smaller than the size of the input image, it is impossible to store all the pixel data of the anti-distortion image corresponding to the input image into the cache image.
  • since the progressive driving method is adopted when the image display is driven, the pixel data of the anti-distortion image corresponding to the input image can be written into the cache image line by line starting from the first line, and the cached image is displayed by progressive driving.
  • when the number i of rows of the input image that have been read is greater than the number h of rows of the cache image buffer, that is, when i > h, the pixel value (or grayscale value) of p{j, i} is written into the pixel point buffer[y_py - (i - h)][y_px] in row y_py - (i - h) and column y_px of the cache image, that is, buffer[y_py - (i - h)][y_px] = p{j, i}.
  • the number of lines h in the cache area may be half of the maximum number of lines of the cache image that can be stored in the cache area, that is, h may be 1/2 * k * H * (1 - Y).
  • the cache image buffer can store at least the row of pixel data with the most distortion in the anti-distortion image. Therefore, although for an anti-distortion image the farther a pixel is from the center of the image the greater its degree of distortion, the cache image buffer set as described above can at least completely store the pixel data of the anti-distortion image corresponding to the first line of the input image.
  • for example, the cache image buffer may store the part of the anti-distortion image within the ABEF rectangle as shown in FIG. 7.
  • the ABEF rectangle may also span a number of rows different from that determined by the point A'.
  • for example, the number of lines of the anti-distortion image determined by EF may be slightly larger than the number of lines determined by A', that is, the points E, F may be located below the point A'.
  • the position in the anti-distortion image of the corresponding point A' of the point A can be determined through the aforementioned anti-distortion processing. Since the cache image buffer is not yet full, it can be determined that the cache image buffer corresponds to the first h rows of the display screen, and the position coordinates at which the point A' is written in the cache image can be determined as described above (for example, buffer[y_py][y_px]).
  • if the cache image buffer is full, the first line of pixel data in the cache image is output for display, and the displayed first line of pixel data is cleared.
  • the buffered data is then moved up one line in the cache image buffer, that is, each row of pixel data in the cache image is moved up by one row.
  • in this way, the data stored in the cache image ABEF changes from lines 1 through h of the anti-distortion image to lines 2 through (h + 1).
  • referring to FIG. 7, it can be considered that the storage area of the cache image buffer is shifted down line by line with respect to the storage area of the anti-distortion image.
  • according to the row number y_py and the column number y_px of p′ in the anti-distortion image, the pixel in the input image corresponding to p′ is written, that is, the pixel data of point p is written into row y_py and column y_px of the cache image, that is, into the pixel buffer[y_py][y_px].
  • when the number of lines i of the input image that have been read is greater than the number h of lines stored in the cache image buffer, that is, when i > h, the cache image has, as described above, already output i - h lines of anti-distortion data, so the current point p′ should be written to row y_py - (i - h) and column y_px of the cache image buffer, that is, to the pixel buffer[y_py - (i - h)][y_px].
  • the image processing method 500 may further include step S504: outputting the first row of pixel data of the cached image for display, clearing the displayed pixel data, and moving the undisplayed rows of pixel data up by one line in the cached image. After performing an anti-distortion operation on all pixels of the input image, all remaining image data in the cached image is output line by line for display.
  • the display screen is driven to display according to the first line of data stored in the cache image; at the same time, the displayed image data is deleted, and the data in the cached image is moved up by one row.
  • when the image processing is completed, there are still h lines of data in the cached image that have not been displayed, and these h lines of data need to be output and displayed in succession.
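  • Putting steps S502 to S504 together, the row-streaming behaviour of the cache image could be sketched as below. This is a simplified illustration: antidistort_pixel is assumed to be the mapping sketched earlier with the lens parameters already bound, display_row stands for a hypothetical display-driver callback, and boundary handling is reduced to a bounds check.

```python
import numpy as np

def process_and_display(input_image, h, antidistort_pixel, display_row):
    """Stream an input image through a rolling cache of h rows (h < H)."""
    H, W = input_image.shape
    buffer = np.zeros((h, W), dtype=input_image.dtype)
    rows_output = 0                          # anti-distortion rows already displayed

    for i in range(H):                       # read the input image line by line
        for j in range(W):
            y_px, y_py = antidistort_pixel(j, i)
            row = int(round(y_py)) - rows_output
            col = int(round(y_px))
            if 0 <= row < h and 0 <= col < W:
                buffer[row, col] = input_image[i, j]
        if i >= h:                           # buffer full: output one line and scroll
            display_row(buffer[0].copy())
            buffer = np.roll(buffer, -1, axis=0)  # move every cached row up by one
            buffer[-1] = 0                        # clear the freed last row
            rows_output += 1

    for row in buffer:                       # output the remaining h rows
        display_row(row)
```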
  • the anti-distortion processing of the image can be integrated into the display module, thereby reducing the data processing load of the image processing module, reducing the data delay, and improving the versatility of the display module.
  • FIG. 3 shows a schematic block diagram of an image processing system 200 according to an embodiment of the present disclosure.
  • the image processing system 200 may include a display module 210 and an image processing module 220.
  • the image processing system 200 can be used in a virtual reality display device, an augmented reality display device, and the like.
  • the display module 210 may include a sensor unit 211, a display unit 212, and a lens unit 213.
  • the display unit 212 may perform anti-distortion processing on the received image data to be displayed, and display an anti-distortion image generated after the anti-distortion processing.
  • the sensor unit 211 may be used to detect the current posture or motion of the device and the user as sensor data, and transmit the sensor data to the image processing module 220.
  • the sensor unit 211 may be an inertial sensor, a motion capture sensor, or other types of sensors.
  • the inertial sensor may include a speed sensor, an acceleration sensor, a gyroscope, a geomagnetic sensor, and the like, which are used to capture the user's movement and determine the user's posture (such as the user's head posture, body posture, etc.).
  • the motion capture sensor may include an infrared induction sensor, a body sensor, a touch sensor, a position sensor, and the like, which are used to realize the motion capture of the user, especially the user's forward, backward, left, and right movement state.
  • Other types of sensors can include brain wave sensors, positioning equipment (Global Positioning System (GPS) equipment, Global Navigation Satellite System (GLONASS) equipment, BeiDou navigation system equipment, Galileo positioning system equipment, Quasi-Zenith Satellite System (QZSS) equipment, base station positioning equipment, Wi-Fi positioning equipment, etc.), pressure sensors, and other sensors that detect user status and location information, and can also include light sensors, temperature sensors, humidity sensors, and other sensors that detect the surrounding environment status.
  • the sensor unit 211 may further include an image acquisition device such as a camera for implementing functions such as gesture recognition and face recognition.
  • the above-mentioned multiple types of sensors can be used alone or in combination to achieve specific functions of the display module.
  • the display unit 212 is configured to receive image data to be displayed, drive a display screen according to the received image data, and display an image corresponding to the image data on the display screen.
  • the display unit 212 may be further configured to perform anti-distortion processing on the received image data to obtain an anti-distortion image, and perform display driving according to the image data of the anti-distortion image subjected to the anti-distortion processing.
  • the display unit 212 may be implemented by one or more combinations of a central processing unit CPU, a graphics processing unit GPU, FPGA, ASIC, and CPLD.
  • the lens unit 213 can be used to image an image displayed on a display screen to adjust a display position of the image, thereby facilitating observation by a user.
  • the display screen is generally set close to the user's eyes, and the user cannot see the image displayed on the display screen by directly viewing it.
  • the lens unit 213 is used for imaging the display screen (and the image displayed on the display screen). The position of the image formed by the image displayed on the display screen after passing through the lens unit 213 will fall in a comfortable area focused by the user's eyes, for example, at a distance suitable for viewing.
  • the lens unit 213 may be a lens or a lens group.
  • the lens unit 213 may be a convex lens, a Fresnel lens, or the like.
  • the display module 210 may further include a wearing piece 214.
  • the wearing part 214 is used to assist a user to fix the display module 210 to be suitable for viewing.
  • the wearing part 214 may be any accessory that can be used to fix the display module 210 in front of the user's eyes.
  • the wearing part may further include accessories such as gloves, clothes, joysticks, and the like.
  • the image processing module 220 is configured to perform data processing on an image to be displayed for viewing by a user. As shown in FIG. 3, the image processing module 220 may include a data processing unit 221 and an image rendering unit 222. In some embodiments, the image processing module 220 may be implemented by a computer.
  • the data processing unit 221 is configured to process the sensor data collected and transmitted by the sensor unit 211, and determine the current state of the display module, such as the current posture or action of the device and the user, according to the received sensor data.
  • the image rendering unit 222 is configured to perform rendering processing on an image to be displayed to a user for viewing according to the current state of the display module 210 determined by the sensor data.
  • the anti-distortion processing performed on the image can be integrated into the display module 210, thereby reducing the data processing burden of the image processing module 220 and reducing the delay caused by data transmission and the like. And improves the versatility of the display module 210.
  • FIG. 4 shows a schematic block diagram of a display unit 212 according to an embodiment of the present disclosure.
  • the display unit 212 may include an input unit 2121, an anti-distortion processing unit 2122, a driving unit 2123, and an output unit 2124.
  • the input unit 2121 may be configured to receive image data to be displayed. In some embodiments, the input unit 2121 may receive the rendered image data from the image processing module 220 shown in FIG. 3. For example, the input unit 2121 may receive image data subjected to rendering processing from the image rendering unit 222.
  • the anti-distortion processing unit 2122 may perform anti-distortion processing on the image data received through the input unit 2121.
  • the anti-distortion processing unit 2122 may be implemented by an integrated circuit.
  • the anti-distortion processing unit 2122 may further include a memory for storing data required for performing anti-distortion processing on the image.
  • the memory may be RAM.
  • the anti-distortion processing unit may implement reading and writing of data by connecting an external memory. The connection here can be wired or wireless.
  • the driving unit 2123 may perform a driving function according to the image data subjected to the anti-distortion processing.
  • the anti-distortion processing unit 2122 and the driving unit 2123 may be implemented as the same integrated circuit. In other embodiments, the anti-distortion processing unit 2122 and the driving unit 2123 may be implemented as different integrated circuits.
  • the output unit 2124 is configured to display the image data subjected to the anti-distortion processing under the control of the driving unit 2123.
  • the output unit 2124 may be an image display device.
  • the output unit 2124 may be an independent display screen or another device including a display screen, including projection devices, mobile phones, computers, tablets, TVs, smart wearable devices (including smart glasses such as Google Glass, smart watches, smart rings, smart helmets, etc.), virtual reality display devices or display enhancement devices (such as Oculus Rift, Gear VR, HoloLens), and other devices.
  • the anti-distortion processing of the image can be integrated into the display module 210, thereby reducing the data processing burden of the image processing module 220, reducing the data delay, and improving the display module. Universality.
  • FIG. 8 shows a schematic diagram of an image processing apparatus 800 according to an embodiment of the present disclosure.
  • the image processing apparatus 800 may include a processor 801 and a first memory 802.
  • the first memory 802 stores instructions that, when executed by the processor 801, cause the processor 801 to execute the image processing method described above.
  • the image processing apparatus 800 may further include a second memory 803.
  • the second memory 803 may be configured to store a part of the anti-distortion image.
  • a part of the anti-distortion image may be an anti-distortion cache image as described above.
  • the anti-distortion buffer image may be generated based on an input image and the anti-distortion parameters.
  • the size of the anti-distortion buffer image, determined according to the size of the input image and the anti-distortion parameters of the lens unit, may correspond to only a part of the input image, rather than the entire input image.
  • the size of the anti-distortion cache image obtained using the image processing method according to the embodiment of the present disclosure is only a part of the input image.
  • the number of lines of the buffer area for storing the anti-distortion buffer image may be h as described above.
  • storing only part of the data of the anti-distortion image can reduce the storage space for storing the anti-distortion image, and reduce the size of the data in transmission, improving transfer speed.
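  • As a rough worked illustration with the hypothetical numbers used in the sketches above (W = 1440, H = 1600, Y = 0.82, k = 2), the anti-distortion cache image needs at most k * H * (1 - Y) + 1 = 577 rows, and a buffer area holding half of that is roughly 288 rows, compared with the 1600 rows of a full anti-distortion frame.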
  • anti-distortion processing for an input image can be integrated in a display module, thereby reducing the data processing load of the image processing module, thereby reducing data delay and improving the versatility of the display module.
  • a computer hardware platform may be used as a hardware platform for one or more of the elements described above.
  • the hardware elements, operating systems, and programming languages of such computers are common, and it can be assumed that those skilled in the art are sufficiently familiar with these technologies to be able to provide the information required for image processing using the techniques described herein.
  • a computer containing user interface (UI) elements can be used as a personal computer (PC) or other type of workstation or terminal device, and can also be used as a server after being appropriately programmed. It can be considered that those skilled in the art are familiar with such structures, programs, and the general operation of such computer equipment, and therefore no additional explanation is required for all the drawings.
  • Such computers may include personal computers, laptops, tablets, mobile phones, personal digital assistants (PDAs), smart glasses, smart watches, smart rings, smart helmets, and any smart portable or wearable device.
  • PDAs personal digital assistants
  • the specific system in the embodiment of the present disclosure uses a functional block diagram to explain a hardware platform including a user interface.
  • Such a computer device may be a general-purpose computer device, or a special-purpose computer device. Both computer devices can be used to implement a specific system in this embodiment.
  • the computer system may include a communication port, which is connected to a network for data communication.
  • the computer system may also include a processor for executing program instructions.
  • the processor may be composed of one or more processors.
  • the computer may include an internal communication bus.
  • the computer may include various forms of program storage units and data storage units, such as hard disks, read-only memory (ROM), random access memory (RAM), which can be used to store various data files used by the computer for processing and / or communication, and Possible program instructions executed by the processor.
  • the computer system may also include an input / output component that supports the input / output data flow between the computer system and other components (such as a user interface). Computer systems can also send and receive information and data from the network through communication ports.
  • the program part in the technology may be considered as a “product” or “article of manufacture” existing in the form of executable code and / or related data, which is participated or realized through a computer-readable medium.
  • the tangible, permanent storage medium may include memory or storage used by any computer, processor, or similar device or related module. For example, various semiconductor memories, magnetic tape drives, magnetic disk drives or similar devices capable of providing storage functions for software.
  • All software or parts of it may sometimes communicate over a network, such as the Internet or other communication networks.
  • This type of communication can load software from one computer device or processor to another.
  • for example, software may be loaded from a server or host computer of an image processing system into the computer environment of a hardware platform that implements the system, or into another computer environment providing similar functions related to the information required for image processing. Accordingly, another medium capable of transmitting software elements, such as light waves, radio waves, or electromagnetic waves, can also be used as a physical connection between local devices, propagating through cables, optical cables, or air.
  • the physical medium used for carrier waves, such as electrical cables, wireless connections, or fiber optic cables, can also be considered as the medium that carries the software.
  • unless restricted to tangible "storage" media, other terms referring to computer- or machine-"readable media" refer to media that participate in the execution of any instruction by a processor.
  • a computer-readable medium may take many forms, including tangible storage media, carrier wave media, or physical transmission media.
  • Stable storage media may include: optical disks or disks, and storage systems used in other computers or similar devices that can implement the system components described in the figures.
  • the unstable storage medium may include dynamic memory, such as the main memory of a computer platform.
  • Tangible transmission media may include coaxial cables, copper cables, and optical fibers, such as the lines that form a bus inside a computer system.
  • the carrier wave transmission medium can transmit electrical signals, electromagnetic signals, acoustic signals or light signals. These signals can be generated by radio frequency or infrared data communication methods.
  • Common computer-readable media include hard disks, floppy disks, magnetic tapes, any other magnetic media; CD-ROM, DVD, DVD-ROM, any other optical media; punch cards, any other physical storage media containing a small hole pattern; RAM, PROM , EPROM, FLASH-EPROM, any other memory chip or tape; carrier wave for transmitting data or instructions, cable or connection device for transmitting carrier wave, any other program code and / or data that can be read by computer.
  • a processor executes instructions and passes one or more results.
  • a "module" in this disclosure may refer to logic or a set of software instructions stored in hardware or firmware.
  • the “module” referred to herein can be executed by software and / or hardware modules, or stored in any kind of computer-readable non-transitory medium or other storage device.
  • a software module can be compiled and linked into an executable program.
  • the software module here can respond to the information passed by itself or other modules, and / or can respond when certain events or interruptions are detected.
  • a software module may be provided on a computer-readable medium, and the software module may be configured to perform operations on a computing device, such as a processor.
  • the computer-readable medium herein may be an optical disk, a digital optical disk, a flash disk, a magnetic disk, or any other kind of tangible medium.
  • Software modules can also be obtained through the digital download mode (the digital download here also includes the data stored in the compressed package or installation package, which needs to be decompressed or decoded before execution).
  • the code of the software module herein may be partially or wholly stored in a storage device of a computing device that performs an operation, and applied to the operation of the computing device.
  • Software instructions can be embedded in firmware, such as erasable programmable read-only memory (EPROM).
  • a hardware module may contain logic units connected together, such as gates, flip-flops, and / or programmable units, such as a programmable gate array or processor.
  • modules or computing devices described herein are preferably implemented as software modules, but may also be represented in hardware or firmware.
  • the modules mentioned here are logical modules and are not limited by their specific physical form or memory.
  • a module can be combined with other modules or separated into a series of sub-modules.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Transforming Electric Information Into Light Information (AREA)

Abstract

公开了一种用于包括透镜单元的电子设备的图像处理方法、设备以及虚拟现实显示装置。所述图像处理方法包括:根据输入图像的尺寸以及透镜单元的反畸变参数确定待缓存的图像尺寸(S501);以及按照所述待缓存的图像尺寸,基于输入图像和所述反畸变参数,产生具有所述图像尺寸的反畸变缓存图像(S502)。其中,产生具有所述图像尺寸的反畸变缓存图像包括:从所述输入图像中选择一行像素并对该行像素执行反畸变操作,所述反畸变操作包括根据所述反畸变参数确定对应于该行像素的反畸变图像的像素数据,并将该反畸变图像的像素数据写入所述反畸变缓存图像。

Description

图像处理方法、设备以及虚拟现实显示装置
本申请要求于2018年05月29日提交的中国专利申请第201810534923.3号的优先权,该中国专利申请的全文通过引用的方式结合于此以作为本申请的一部分。
技术领域
本公开涉及图像处理领域,具体涉及一种用于对图像进行反畸变处理的图像处理方法、设备以及虚拟现实显示装置。
背景技术
随着近两年虚拟现实(VR)技术飞速发展,人们对VR显示的需求越来越高,如,要求更高分辨率的屏幕,更短的数据延迟,更丰富的VR内容等等。随着这些需求的出现,一些问题也随之产生,如为了适应更高分辨率的显示屏幕,就需要渲染具有更高分辨率的显示图像,这会增加渲染延时,同时,由于还需要对经过渲染处理后的高分辨率的显示图像进行反畸变处理,这使得延时进一步增加。
发明内容
根据本公开的一方面,提供一种用于包括透镜单元的电子设备的图像处理方法,包括:根据输入图像的尺寸以及透镜单元的反畸变参数确定待缓存的图像尺寸;以及按照所述待缓存的图像尺寸,基于输入图像和所述反畸变参数,产生具有所述图像尺寸的反畸变缓存图像。
在一些实施例中,其中,产生具有所述图像尺寸的反畸变缓存图像包括:从所述输入图像中选择一行像素并对该行像素执行反畸变操作,所述反畸变操作包括根据所述反畸变参数确定对应于该行像素的反畸变图像的像素数据,并将该反畸变图像的像素数据写入所述反畸变缓存图像。
在一些实施例中,其中从所述输入图像中选择一行像素并对该行像素执行反畸变操作包括:对所述输入图像中的各行像素逐行执行反畸变操作。
在一些实施例中,其中所述反畸变参数包括所述反畸变图像的多个像素点的高度与所述反畸变图像经所述透镜单元形成的虚像中对应的多个像素点的高度之间的物像关系,基于透镜单元的光学参数确定所述物像关系。
在一些实施例中,其中所述透镜单元的光学参数包括所述透镜单元的焦距、所述输入图像的显示位置与所述透镜单元之间的距离以及用户观看位置与所述透镜单元之间的距离。
在一些实施例中,其中,所述反畸变图像中多个像素点的高度是所述反畸变图像中的多个像素点到透镜中心在所述输入图像上的映射点的各自的距离,所述虚像中对应的多个像素点的高度是所述虚像中的对应的多个像素点到透镜中心在所述虚像中的映射点的各自的距离。
在一些实施例中,其中根据输入图像的尺寸以及透镜单元的反畸变参数确定待缓存的图像尺寸包括:基于所述反畸变参数确定用于所述输入图像的反畸变网格,以及所述反畸变图像的四个顶点在所述反畸变网格上以所述输入图像的中心作为原点确定的坐标值,并分别确定所述四个顶点在列方向上的坐标值的绝对值;确定所述四个顶点在列方向上的坐标值的绝对值中的最小绝对值Y;根据所述最小绝对值Y确定所述待缓存的图像尺寸。
在一些实施例中,其中,所述输入图像的尺寸为W*H,所述待缓存的图像在行方向上的尺寸为W,在列方向上的尺寸为k*H*(1-Y)+1,其中k是大于等于1的实数。
在一些实施例中,其中根据所述反畸变参数确定对应于该行像素的反畸变图像的像素数据,并将该反畸变图像的像素数据写入所述反畸变缓存图像包括:对于所述输入图像中的每行像素数据中的每一个像素点,确定从所述透镜中心在所述输入图像上的映射点到该像素点的向量;基于所述物像关系以及所述输入图像的尺寸确定所述虚像的尺寸;以及根据所述向量以及所述虚像的尺寸确定该像素点在所述虚像中的对应像素点的像高;基于所述物像关系,根据所述虚像中对应像素点的像高确定该像素点在所述反畸变图像中的对应像素点的物高;根据所述反畸变图像中的对应像素点的物高将该像素点的像素数据写入缓存图像。
在一些实施例中,其中根据所述反畸变图像中的对应像素点的物高将该像素点的像素数据写入反畸变缓存图像包括:根据所述反畸变图像中的对应 像素点的物高确定所述对应像素点在所述缓存图像中的对应的像素点,并将该像素点的灰阶值存入反畸变缓存图像中的对应像素点。
在一些实施例中,其中对于所述输入图像中的每行像素数据中的每一个像素点,确定从所述透镜中心在所述输入图像上的映射点到所述每个像素点的向量包括:确定所述透镜中心在所述输入图像上的映射点到该像素点的距离和方向。
在一些实施例中,所述图像处理方法还包括:输出反畸变缓存图像的第一行像素数据用于显示并清除已显示的像素数据,以及将未显示的各行像素数据在反畸变缓存图像中上移一行。
在一些实施例中,所述图像处理方法还包括:对输入图像的所有像素执行反畸变操作后,逐行输出所述反畸变缓存图像中的剩余的图像数据用于显示。
根据本公开的另一方面,还一种图像处理设备,包括:处理器;和第一存储器,其中所述第一存储器中存储有指令,当利用所述处理器执行所述指令时,使得所述处理器执行如上所述的图像处理方法;所述图像处理设备还包括第二存储器,用于存储反畸变图像的一部分。
根据本公开的另一方面,还提供了一种虚拟现实显示装置,包括:传感器,配置成采集用于确定所述虚拟现实显示装置当前状态的传感器数据;显示单元,配置成接收基于所述传感器数据确定的输入图像并对所述输入图像进行反畸变处理以生成反畸变图像的像素数据,以及根据所述反畸变图像的像素数据进行显示驱动,其中,所述显示单元包括如前所述的图像处理设备;以及透镜单元,配置成对由所述显示单元驱动显示的图像进行成像。
在一些实施例中,所述传感器包括速度传感器、加速度传感器、地磁传感器、触摸传感器、距离传感器中的一种或多种。
在一些实施例中,所述显示单元包括中央处理单元、图形处理单元、FPGA、ASIC、CPLD中的一种或多种。
在一些实施例中,所述透镜单元包括凸透镜、菲涅尔透镜、凹透镜中的一种或多种。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员而言,在没有做出创造性劳动的前提下,还可以根据这些附图获得其他的附图。以下附图并未刻意按实际尺寸等比例缩放绘制,重点在于示出本公开的主旨。
图1示出了根据现有技术的一种图像处理系统100的示意性的框图;
图2A示出了用户通过透镜单元观看显示单元显示的图像的过程的示意图;
图2B示出了从正面观看时观察到的虚像和原始的显示图像之间的区别;
图2C示出了对图像进行反畸变处理的效果图;
图3示出了根据本公开实施例的图像处理系统的示意性的框图;
图4示出了根据本公开实施例的显示单元的示意性的框图;
图5示出了根据本公开实施例的图像处理方法的流程图;
图6示出了根据本公开实施例的示例性的透镜单元的物像关系的函数的示意图;以及
图7示出了根据本公开实施例的反畸变网格的示意图;
图8示出了根据本公开实施例的图像处理设备的示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例的附图,对本公开实施例的技术方案进行清楚、完整地描述。显然,所描述的实施例是本公开的一部分实施例,而不是全部的实施例。基于所描述的本公开的实施例,本领域普通技术人员在无需创造性劳动的前提下所获得的所有其他实施例,都属于本公开保护的范围。
除非另外定义,本公开使用的技术术语或者科学术语应当为本公开所属领域内具有一般技能的人士所理解的通常意义。本公开中使用的“第一”、“第二”以及类似的词语并不表示任何顺序、数量或者重要性,而只是用来区分不同的组成部分。同样,“包括”或者“包含”等类似的词语意指出现该词前面的元件或者物件涵盖出现在该词后面列举的元件或者物件及其等同,而不排除其他元件或者物件。“连接”或者“相连”等类似的词语并非限定于物理的或者 机械的连接,而是可以包括电性连接或信号连接,不管是直接的还是间接的。
图1示出了根据现有技术的一种图像处理系统100的示意性的框图。如图1所示,图像处理系统100可以包括显示模块110和图像处理模块120。显示模块110用于检测设备以及用户的当前姿态或动作,并将姿态数据和/或动作数据发送给图像处理模块120进行图像渲染。图像处理模块120用于接收来自显示模块110的姿态数据和/或动作数据并根据接收的姿态数据和/或动作数据对将要显示给用户用于观看的图像进行渲染,并将处理后的图像发送给显示模块110用于显示。
如图1所示,显示模块110可以包括传感器单元111、显示单元112、透镜单元113。
所述显示单元112用于显示将要显示给用户用于观看的图像。其中,显示单元112可以进一步包括显示驱动单元和显示屏。显示驱动单元可以是集成电路模块,其用于根据接收的图像数据来驱动显示屏,并在显示屏上显示与所述图像数据对应的图像。这里的显示屏可以是任何类型的显示屏,如LED显示屏、OLED显示屏等。
所述透镜单元113可以用于对显示屏上显示的图像进行成像以调整图像的显示位置,从而便于用户观察。例如,在目前的虚拟现实显示设备中,显示屏一般会设置在距离用户眼睛很近的位置,用户无法通过直接观看的方式看到显示屏上显示的图像。透镜单元113用于对显示屏(及显示屏上显示的图像)进行成像,其中显示屏上显示的图像经过所述透镜单元113后所形成的像的位置将落在用户眼睛聚焦的舒适区,例如,位于适于观看的距离。
在一些实施例中,显示模块110还可以包括佩戴件114。所述佩戴件114用于辅助用户将显示模块110固定,以适于观看。
所述图像处理模块120配置成对将要显示给用户观看的图像进行数据处理。如图1所示,图像处理模块120可以包括数据处理单元121、图像渲染单元122、反畸变处理单元123。
所述数据处理单元121可以用于对传感器单元111采集并传送的传感器数据进行处理,并根据接收的传感器数据确定显示模块110的当前状态,例如设备以及用户的当前姿态或动作。
所述图像渲染单元122可以用于根据所述传感器数据确定的显示模块 110的当前状态对将要显示给用户观看的图像进行渲染处理。
所述反畸变处理单元123可以用于对经过渲染出处理后的图像进行反畸变处理。如前所述,由于用户需要通过透镜单元113来观看显示单元112显示的适于观看的显示图像。经过透镜单元113形成的显示屏以及显示屏上显示的图像的像可能会产生畸变,例如,由于透镜的光学参数产生。所述透镜单元113的光学参数包括所述透镜单元113的焦距、所述显示屏与所述透镜单元113之间的距离以及用户观看位置与所述透镜单元113之间的距离等。
图2A示出了用户通过透镜单元113观看显示单元112显示的图像的过程的示意图。在图2A中示出了透镜单元113。应当注意的是,图2A中仅以示意性地方式示出透镜单元113,图2A中的图形形状并不构成对透镜单元113中的透镜的形状和性质的限制。事实上,透镜单元113可以是单独的一个透镜,也可以由多个透镜和光学元件组成的透镜组。例如,透镜单元113可以是包括凸透镜、菲涅尔透镜、凹透镜及其任意的组合。透镜单元113还可以包括滤光片、光阑、光栅等其他常用的辅助光学成像的元件。显示单元112通过显示屏显示的图像经过透镜单元113成像后,将在距离用户眼睛较远的位置处形成该图像的虚像112’。
由于所述透镜单元113并非是理想透镜,因此,经过透镜单元113成像后,图像会产生畸变。图2B示出了用户从正面观看时观察到的虚像112’和原始的显示图像112之间的区别。例如,在相关技术中,用户观看的经透镜单元113成像后的图像出现枕形畸变。其中距离图像中心越远,图像的畸变程度越大。可以理解的是,根据光学成像原理,经过透镜成像而产生的图像畸变与透镜的固有属性相关。当利用同一个透镜单元进行成像时,不同图像的畸变程度是相同的。需要注意的是,图2B中示出的枕形畸变仅为示意性的,所述畸变还可以包括桶形畸变和/或线性畸变等类型的畸变。
为了修正上述的图像畸变从而改善用户的观看体验,一种方法是对将要用于显示的图像在显示之前进行反畸变处理。
反畸变处理的原理在于,考虑到透镜单元的畸变效果,提前对要显示的图像进行变形处理,该变形能够抵消透镜单元的固有特性产生的畸变效果。
图2C示出了对图像进行反畸变处理的效果图。如图2C所示,为了使用户通过透镜单元113观察到没有变形的图像,可以预先将要显示的图像处理 形成桶形的反畸变图像,并将桶形的反畸变图像显示在显示屏上。这样,当用户观看由反畸变图像经过透镜单元113所形成的像时,将观看到没有发生畸变(或畸变小于一定阈值)的图像,这是由于用于显示的反畸变图像经过如图2B中所示出的变形处理后将抵消该透镜单元113可能产生的畸变效果。
然而,随着显示屏幕的分辨率越来越高,显示的图像的分辨率也不断上升,导致图像数据的数据量不断增长,为图像处理设备带来了很重的数据负担。
从图1中可以看出,利用现有的图像处理系统100进行反畸变处理的过程中,需要对整幅图像进行反畸变处理,并将经过反畸变处理后的整幅图像发送给显示单元。在这个过程中,图像的处理以及存储都需要占用巨大的系统资源。由于数据量巨大,反畸变处理过程也会加重图像显示的延时。
为了解决上述问题,本公开提供了一种图像处理方法、设备以及系统。
图5示出了根据本公开实施例的一种图像处理方法的流程图。如图5所示,所述图像处理方法500可以包括以下步骤。
首先,在步骤S501,根据输入图像的尺寸以及透镜单元的反畸变参数确定待缓存的图像尺寸。根据本公开实施例,所述透镜单元的反畸变参数可以是由透镜单元本身的性质决定的。根据光学成像原理,透镜单元在成像的过程中将产生畸变,即图像变形,所述畸变是不可避免的。所述反畸变参数可以是与所述图像变形相关的参数,例如反畸变图像的多个像素点的高度与反畸变图像经透镜单元形成的虚像中对应的多个像素点的高度之间的物像关系。
其中,所述输入图像可以是经过渲染处理的、用于用户观看的图像,输入图像的分辨率可以与显示屏幕的分辨率相同,所述分辨率可以表示为图像在行方向和列方向上包含的像素数目。例如,在显示屏幕的分辨率是W*H时,即显示屏幕上包含W*H个像素点,则输入图像的分辨率也可以是W*H。
接着,在步骤S502,按照所述待缓存的图像尺寸,基于输入图像和所述反畸变参数,产生具有所述图像尺寸的反畸变缓存图像,所述反畸变缓存图像为经过反畸变处理的用于显示的图像。
根据本公开实施例,所述产生具有所述图像尺寸的反畸变缓存图像包括,从所述输入图像中选择一行像素并对该行像素执行反畸变操作。根据本公开实施例,所述反畸变操作可以包括:根据反畸变参数确定对应于该行像素的 反畸变图像的像素数据,并将该反畸变图像的像素数据写入缓存图像。
在一些实施例中,从所述输入图像中选择一行像素并对该行像素执行反畸变操作可以包括:对所述输入图像中的各行像素逐行执行反畸变操作。根据本公开的其他实施例,也可以对输入图像中的一部分像素执行反畸变操作。例如,根据光学成像原理,越靠近透镜单元的光轴的物点在成像时所产生的畸变越小,越远离透镜单元的光轴的物点在成像时所产生的畸变越大,由此,可以选择输入图像中远离透镜单元的光轴的一行或多行像素执行反畸变操作,而对于输入图像中靠近光轴的行像素不执行反畸变操作。
在一些实施例中,反畸变参数可以包括反畸变图像的多个像素点的高度与反畸变图像经透镜单元形成的虚像中对应的多个像素点的高度之间的物像关系。其中,该物像关系可以是基于透镜单元的光学参数确定的。其中透镜单元的光学参数包括所述透镜单元的焦距、输入图像与透镜单元之间的距离以及用户观看位置与透镜单元之间的距离。在一些实施例中,可以通过测量手段确定该物像关系。
对于由前述图像处理系统的透镜单元所成的像来讲,透镜单元的“物”可以是显示屏上显示的图像,即反畸变图像,反畸变图像上的每一个像素点到透镜中心在显示屏上的映射点的距离可以看作是“物高”,所述反畸变图像经透镜单元形成的虚像上上任意一点到透镜中心在虚像上的映射点的距离可以看作是“像高”。
例如,在一些实施例中,对于输入图像、反畸变图像以及反畸变图像经透镜单元形成的虚像来说,图像中的某个像素点的高度可以指的是该像素点到透镜中心在该图像上的映射点之间的距离。
例如,根据本公开实施例,所述输入图像经过透镜单元可以形成第一虚像,所述第一虚像为包含有畸变的图像。所述畸变与透镜单元的参数以及输入图像的尺寸(例如,像素数目以及像素尺寸)相关。基于理想光学成像原理,输入图像中的各个像素点与第一虚像中的各个像素点一一对应,并且基于输入图像中的像素点的物高与第一图像中的对应的像素点的像高的函数,可以表征所述透镜单元的物像关系。物像关系的函数与所述透镜单元的光学参数以及输入图像距离透镜单元的距离相关。
根据本公开的一个实施例,可以采用测试的方式来确定所述透镜单元的 物像关系的函数。例如,可以根据给定的物距,即物点到透镜单元在纵向上垂直距离值,来选定该物距的位置处一系列的物点,所述物点中的每一个具有物高(例如,y m),不同物点的物高值各不相同。然后测量一个物点(例如,物高值为y m1)经过透镜单元成像所形成的像点的像高(例如,像高值为x m1)。由此可以得到由物高值和与之对应的像高值组成的数据点(y m1,x m1)。根据本公开的其他实施例,还可以采用其他的方式来确定所述透镜单元的物像关系,例如,采用光学成像仿真软件,本公开并不限制用于确定透镜单元的物像关系的具体方式。
根据本公开实施例,在利用所述透镜单元对显示屏幕上显示的图像进行成像时,例如,输入图像或者反畸变图像,当将显示屏幕置于该测量物像关系所处的物距时,由于透镜单元的参数并未改变,由此所述显示屏幕上显示的像素点经过透镜单元所形成的像点也将符合上述测量获得的物像关系的函数。换句话说,显示屏幕上显示的像素点的物高和与之对应的像点的像高符合该物像关系。
根据本公开实施例,可以利用函数拟合工具对测量得到的一组由物高y m与像高x m组成的数据点进行函数拟合,以得到像高x m与物高y m之间的物像关系的曲线,并得到该曲线的拟合函数的表达式。本公开并不限制所使用的进行函数拟合的拟合工具。
图6示出了根据本公开实施例的示例性的透镜单元的物像关系的函数的示意图。示例性的,经过测量可以得到一组关于物高和像高(y m,x m)的数据点,例如,如图6中示出的一组数据点,其中,图6中示出的数据点的像高的数值是基于测量得到的。利用拟合工具对图6中的离散的数据点进行函数拟合,以得到最接近由该离散的数据点构成的曲线的函数。所述函数可以表征物高和像高之间的数值关系。例如,经过拟合可以得到对应于透镜单元的物高(y m)与像高(x m)之间的函数,如下所示:
y_m = F(x_m) = -3×10^(-16)·x_m^6 + 3×10^(-13)·x_m^5 + 9×10^(-11)·x_m^4 - 2×10^(-7)·x_m^3 + 2×10^(-6)·x_m^2 + 0.0998·x_m + 0.0006
图6中示出的通过拟合得到的函数y m=F(x m)可以用于表征像高x m与物高y m之间的函数关系。需要注意的是,图6中示出的拟合函数仅仅是一种可能的示例。当透镜单元的光学参数改变时,像高x m和物高y m之间的函数关系F(x m)也会随着参数改变而发生变化。
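As a rough illustration of the curve-fitting step described above, the following Python sketch fits the object-image relation y_m = F(x_m) from measured (image height, object height) sample points. The sample values, the polynomial order, and the variable names are hypothetical; any fitting tool can be substituted.

```python
import numpy as np

# Hypothetical measured samples for one lens unit:
#   x_m: image heights measured in the virtual image,
#   y_m: the corresponding object heights on the display side.
x_m = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
y_m = np.array([0.0006, 0.0505, 0.1004, 0.1502, 0.1999, 0.2494, 0.2988, 0.3479])

# Fit y_m = F(x_m); the description's example is a sixth-order fit, but any
# fitting tool and polynomial order can be substituted.
coeffs = np.polyfit(x_m, y_m, deg=3)
F = np.poly1d(coeffs)

print(F(1.2))  # object height predicted for an image height of 1.2
```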
利用如上所述确定的透镜单元的物高y m和像高x m之间的函数关系y m=F(x m)可以确定显示屏幕上的反畸变网格的值。
根据本公开的一个实施例,确定显示屏幕上的反畸变网格的值的过程可以包括以下步骤:首先,可以根据透镜单元的成像参数来计算得到输入图像在经过该透镜单元成像后所形成的第一虚像,其中,所述第一虚像是通过计算得到的,即所述第一虚像中不含有畸变,换句话说,所述第一虚像为预期显示的不含有畸变的理想图像。例如,所述成像参数可以包括该透镜单元的放大倍率、焦距f、输入图像的物距等。
然后,可以基于所述函数y m=F(x m)来计算与第一虚像内的像点的对应的物点。具体的,可以确定所述第一虚像内的像点的像高(x m),然后将该像高的值带入函数F(x m),从而计算得到与该像点对应的物点的物高(y m)的值。对第一虚像中的每个像素点重复上述过程,即可得到反畸变网格中各个像素点的物高,由此可以获得所述反畸变网络。
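A minimal sketch of this grid-generation step, assuming the fitted relation F from the previous sketch and a per-pixel pitch (w0, h0) of the ideal virtual image; the per-axis treatment mirrors Table 1, and all names are illustrative rather than a reference implementation.

```python
import numpy as np

def antidistortion_grid(W, H, F, w0, h0):
    """Signed per-axis object heights for every pixel of the ideal virtual image.

    W, H   -- input-image resolution (pixels per row / per column)
    F      -- fitted object-image relation, image height -> object height
    w0, h0 -- virtual-image extent covered by one pixel along x and y
    """
    # Pixel offsets from the lens-center mapping point (screen center here).
    dx = np.arange(W) - W / 2.0
    dy = np.arange(H) - H / 2.0
    vx, vy = np.meshgrid(dx, dy)

    # Image heights of the corresponding points in the ideal virtual image.
    ix, iy = vx * w0, vy * h0

    # Object heights on the display panel, sign restored after applying F.
    ox = np.sign(ix) * F(np.abs(ix))
    oy = np.sign(iy) * F(np.abs(iy))
    return ox, oy
```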
表1中示出了根据本公开实施例的示例性的反畸变网格在以显示屏幕的中心为原点的坐标系下的部分数据,其中,以x*、y*分别代表经过归一化处理的反畸变网格上的相应点的横坐标与纵坐标的数值,表1中的相应点的横坐标与纵坐标均是根据反畸变网格的坐标系计算而得的。
表1
x* y*
-0.90076 -0.85805
-0.87876 -0.86221
-0.85631 -0.86621
-0.83378 -0.87036
-0.811 -0.87452
-0.78795 -0.87868
-0.76497 -0.88316
-0.74171 -0.88764
-0.71832 -0.89227
-0.69478 -0.89707
-0.67101 -0.90195
-0.64701 -0.90691
-0.62263 -0.91179
-0.59814 -0.9169
-0.57317 -0.92182
-0.54784 -0.92666
-0.5224 -0.93178
-0.49648 -0.93666
-0.47023 -0.94149
-0.44363 -0.94625
-0.41666 -0.95089
-0.38936 -0.95545
-0.36169 -0.95985
图7中示出了上述的反畸变网格的坐标系,其中坐标系的原点是输入图像的中心点。如图7所示,在反畸变网格的坐标系中,定义坐标轴x、y与输入图像的边界的交点的坐标长度为1。表1中的相应点的横坐标与纵坐标均是根据反畸变网格的坐标系计算而得的。
回到图5,在一些实施例中,步骤S502还可以包括确定缓存图像的尺寸。根据本公开实施例,可以确定反畸变图像的四个顶点在所述反畸变网格上以输入图像的中心(即显示屏中心)作为原点确定的坐标,并分别确定四个顶点在输入图像的列方向上的坐标值的绝对值;确定四个顶点在所述输入图像的列方向上的坐标值的绝对值中的最小绝对值Y。然后,根据最小绝对值Y来确定缓存图像的尺寸,例如,所述输入图像的分辨率可以为W*H,则缓存图像的行方向上的分辨率为W,列方向上的分辨率可以为k*H*(1-Y)+1,其中k是大于等于1的实数。
例如,当k=1时,可以通过公式H*(1-Y)+1来计算缓存图像在列方向上的尺寸。当k=2时,可以通过公式2*H*(1-Y)+1来计算缓存图像在列方向上的尺寸。可以理解的是,k可以是大于等于1的任何实数。例如,k还可以取值为1.5、2.8等。上述示例并不构成对k的取值的限制。
图7还示意性地示出了缓存图像的大小。在如图7所示的坐标系下,图像的行方向是x轴,列方向是y轴,x、y轴的方向可以设置为如图7所示的形式,也可以设置为任何其他可能的形式,如x轴方向是从左到右,y方向是从上到下。如图7所示,如果输入图像的尺寸为由四个顶点A、B、C、D构成的矩形,其在x方向上具有W个像素,在y方向具有H个像素,也就是说,输入图像的分辨率为W*H。这里的W、H分别表示输入图像在x方向和y方向上的像素数目。例如,W=2160,H=2376。
如前所述,根据透镜单元的成像原理,距离图像中心越远,图像的畸变程度越大。为了找到反畸变网格中畸变程度最大的一行图像数据,可以通过比较反畸变网格四个顶点的坐标位置来确定缓存图像的尺寸。在一些实施例 中,可以比较反畸变网格四个顶点到输入图像的对应顶点的距离来确定反畸变网格的四个顶点的变形程度。
在另一些实施例中,可以通过与输入图像的对应顶点距离最远的顶点的坐标来确定缓存图像的尺寸。例如,确定反畸变网格的四个顶点的坐标A’(x1,y1),B’(x2,y2),C’(x3,y3),D’(x4,y4),并计算四个顶点的横坐标的最大值以及纵坐标的最小值,即X=max(abs(x1),abs(x2),abs(x3),abs(x4))及Y=min(abs(y1),abs(y2),abs(y3),abs(y4)),其中abs(m)表示对数值m取绝对值。由此,可以计算得到所述四个顶点的横坐标的最大值X以及纵坐标的最小值Y。
根据本公开实施例,为了至少能够写入畸变程度最大的一行图像数据(即,在坐标系中的坐标数值最小),可以将缓存图像的宽度设置为与显示屏幕宽度相同,高度设置为至少大于畸变程度最大的一行图像数据的高度。因此,缓存图像的最小高度可以表示为k*H*(1-Y)+1,其中k是大于等于1的实数,所述最小高度表示所述缓存图像中可以存储的最少的像素行数。
缓存图像的大小指示了反畸变处理单元2122中存储的缓存图像的像素数据的行数和列数。从图7中可以看出,所述缓存图像在横向上的像素数目与显示图像在横向上的像素数目相同,所述缓存图像在纵向上的像素数目由四个顶点的纵坐标最小值Y确定,在图7中所示出的情形中,k=1,如上所述,根据本公开的其他实施例,还可以将k取不同的数值,诸如k=1.5。
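A minimal sketch of the cache-size computation described above: take the four grid corners in the normalized coordinate system of Figure 7, find the smallest absolute column coordinate Y, and size the cache as k*H*(1-Y)+1 rows by W columns. The corner coordinates used in the example are placeholders, not values taken from the patent's lens.

```python
import math

def cache_size(W, H, corners, k=1.0):
    """Cache-image size from the four grid corners A', B', C', D'.

    corners: normalized (x, y) coordinates on the anti-distortion grid,
             origin at the screen center (as in Figure 7 and Table 1).
    """
    Y = min(abs(y) for _, y in corners)            # smallest |column coordinate|
    rows = int(math.ceil(k * H * (1.0 - Y))) + 1   # cache height, in pixel rows
    return W, rows                                 # cache width equals the screen width

# Example with hypothetical corner coordinates:
corners = [(-0.90076, -0.85805), (0.90076, -0.85805),
           (-0.90076, 0.85805), (0.90076, 0.85805)]
print(cache_size(2160, 2376, corners))             # -> (2160, 339) with these numbers
```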
回到图5,在一些实施例中,步骤S503还可以包括:对于输入图像中的每行像素数据中的每一个像素点,确定从透镜中心在输入图像上的映射点到该像素点的向量;基于物像关系以及输入图像的尺寸来确定第一虚像的尺寸;以及根据所述向量以及虚像的尺寸确定该像素点在虚像中的对应像素点的像高;基于物像关系,根据虚像中对应像素点的像高确定该像素点在反畸变图像中的对应像素点的物高;根据反畸变图像中的对应像素点的物高将该像素点的像素数据写入缓存图像。
其中确定从透镜中心在输入图像上的映射点到该像素点的向量包括:确定所述透镜中心在所述输入图像上的映射点到输入图像上的该像素点的距离和方向。
在一些实施例中,通过输入图像的尺寸以及透镜单元的反畸变参数可以 确定反畸变图像经透镜单元所成的虚像的尺寸。例如,可以以输入图像的长和宽作为物高,基于用于透镜单元的物像关系的函数确定虚像的长和宽。其中,输入图像的长和宽可以用输入图像在长、宽方向上的像素个数表示。本领域技术人员可以理解,也可以使用例如显示屏的长、宽的实际尺寸作为输入图像的长和宽。在此情况下,显示屏的实际尺寸和输入图像的像素个数之间存在对应的映射关系,例如,通过确定显示屏上的像素的实际尺寸,可以在显示屏的实际尺寸和输入图像的像素个数之间进行换算。下文的描述中的计算是以输入图像在长、宽方向上的像素个数表示输入图像的长和宽的尺寸为例。
例如,虚像的宽可以是W 0,高可以是H 0。通过将虚像的宽划分为W份,每份的长度为w 0=W 0/W,将高划分为H份,每份的长度为h 0=H 0/H。这里的W指的是输入图像在行方向上的分辨率,即在行方向上的像素数量,H指的是输入图像在列方向上的分辨率,即在列方向上的像素数量。由此可以得到输入图像上每一个像素在虚像上的对应像素。
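For reference, the per-pixel pitch of the virtual image used in the following paragraphs is simply the virtual-image size divided by the input resolution; a trivial sketch, with placeholder values for W_0 and H_0:

```python
W, H = 2160, 2376        # input-image resolution (example from the description)
W0, H0 = 300.0, 330.0    # hypothetical virtual-image width and height

w0 = W0 / W              # virtual-image width covered by one pixel column
h0 = H0 / H              # virtual-image height covered by one pixel row
```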
在一些实施例中,可以确定透镜中心在输入图像、反畸变图像以及反畸变图像的虚像上的坐标。
根据一般光学透镜的成像原理,在理想情况下,透镜中心与显示设备的显示屏的中心应当重合。也就是说透镜中心与输入图像的中心、反畸变图像的中心以及反畸变图像的虚像的中心是重合的。在如图7中示出的以显示屏中心作为原点的反畸变网格的坐标系中,透镜中心的坐标可以表示为(0,0)。在以输入图像的左上角(如图7中的A点)作为坐标原点的坐标系中,以输入图像的长边和宽边所在直线分别作为x、y轴的情况下,透镜中心的坐标(PCI)可以表示为(W/2,H/2)。在以下描述中采用PCI表示透镜中心在输入图像中的位置。
然而,考虑到透镜与显示屏之间的可能存在的装配误差,透镜中心可能偏离显示屏的中心。在这种情况下,例如,透镜中心在图7中示出的反畸变网格的坐标系中的坐标可以表示为(x 0,y 0),其中x 0、y 0可以不为零。此时,在以输入图像的左上角作为坐标原点的坐标系中,以输入图像的长和宽分别作为x、y轴的情况下,透镜中心的坐标可以表示为PCI((1+x 0)*W/2,(1+y 0)*H/2)。由于在经过透镜成像时,透镜中心点不发生畸变,因此透 镜中心在输入图像、反畸变图像以及反畸变图像的虚像上的映射点是沿透镜单元的光轴重合的。
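A small sketch of the lens-center mapping point PCI in the input image's pixel coordinate system (origin at the top-left corner), covering both the ideal case and the case where the lens center is offset from the screen center by normalized amounts x_0, y_0 due to assembly tolerances; the function name is illustrative.

```python
def lens_center_in_image(W, H, x0=0.0, y0=0.0):
    """Return PCI, the lens center mapped into input-image pixel coordinates.

    x0, y0: normalized coordinates of the lens center on the anti-distortion
    grid; both are zero when the lens center coincides with the screen center.
    """
    return ((1.0 + x0) * W / 2.0, (1.0 + y0) * H / 2.0)

PCI = lens_center_in_image(2160, 2376)   # (1080.0, 1188.0) for an ideal assembly
```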
在一些实施例中,反畸变处理单元每次接收输入图像的一行图像数据进行处理。例如,当处理第i行图像时,反畸变处理单元共接收W个像素数据。可以通过计算输入图像中这一行图像中的任一像素与输入图像中的透镜中心的映射点的距离。例如,可以计算这行图像中任一像素在以输入图像的左上角作为坐标原点、输入图像的长边和宽边所在直线分别作为x、y轴的坐标系中的像素中心的坐标。例如,对于第i行第j列的像素点p{j,i},其像素中心的坐标可以表示为p(j-1/2,i-1/2)。因此,对于第i行第j列的像素点p{j,i},其中i=1,2…W,j=1,2…W,该像素点到透镜中心的映射点的向量可以表示为vec p=p-PCI=(j-1/2-W/2,i-1/2-H/2)。在一些实施例中,由于当输入图像是高分辨率的图像时,H、W的值相对1/2来说是相当大的,因此,在一些示例中,可以将像素点p{j,i}的像素中心坐标简化为(j,i),并将vec p简化为(j-W/2,i-H/2)。在一些示例中,可以对向量vec p进行归一化处理,例如,归一化后的向量norm p=vec p/|vec p|,其中|vec p|是向量vec p的长度。
利用向量vec p和先前确定的虚像的尺寸可以计算出像素点p{j,i}在虚像上的对应像素点p”{j,i}的像高。
其中,向量vec p在虚像上的对应向量可以表示为向量I,其中I X=vec p.x*w 0,I Y=vec p.y*h 0,其中I X是向量I在x方向上的分量,I Y是向量I在y方向上的分量,vec p.x是向量vec p在x方向上的分量,vec p.y是向量vec p在y方向上的分量。
根据向量I可以确定像素点p{j,i}在反畸变图像上的对应点p’{j,i}的物高y p,其中y px=F(I X),y py=F(I Y),其中F(x)是透镜单元的物像关系函数。也就是说,可以利用像素点p{j,i}在虚像上的对应点p”{j,i}的像高的x和y方向的分量,分别确定像素点p{j,i}在反畸变图像上的对应点p’{j,i}的物高的x和y方向上的分量。此外,先前已经说明了,这里的物高指的是像素到透镜中心在反畸变图像上的映射点的距离,因此,根据上述向量I的方向以及其反畸变后得到的物高,可以确定像素p{j,i}在反畸变图像上的对应点p’{j,i}与透镜中心在反畸变图像上的对应点PCI’的距离。由于在计算向量vec p和向量I时均采用了对应像素点的行数和列数作为向量在x、y方向上 的分量值,因此通过上述过程计算得到的物高y p在x、y方向上的分量y px、y py也可以表示反畸变图像上的对应像素p’{j,i}在反畸变图像中对应的行数和列数,其中y px表示p’{j,i}在反畸变图像中对应的列数,y py表示p’{j,i}在反畸变图像中对应的行数。
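Putting the preceding three paragraphs together, a hedged per-pixel sketch: form the vector from PCI to pixel p{j,i}, scale it into the virtual image with (w0, h0), apply F per axis to obtain the object height, and interpret the result as the target column and row in the anti-distortion image. F, w0, h0 and PCI come from the earlier sketches; the sign handling, re-centering on PCI, and the pixel-pitch conversion are only implicit in the description and are therefore one possible reading.

```python
def antidistort_pixel(j, i, PCI, w0, h0, F, pixel_pitch=1.0):
    """Map input pixel p{j, i} (column j, row i) to (y_px, y_py), its column
    and row in the anti-distortion image."""
    # Vector from the lens-center mapping point to the pixel (simplified form).
    vx = j - PCI[0]
    vy = i - PCI[1]

    # Corresponding vector in the virtual image: I_X = vec_p.x * w0, I_Y = vec_p.y * h0.
    Ix, Iy = vx * w0, vy * h0

    # Object heights per axis: y_px = F(|I_X|), y_py = F(|I_Y|), sign restored.
    ox = (1 if Ix >= 0 else -1) * F(abs(Ix))
    oy = (1 if Iy >= 0 else -1) * F(abs(Iy))

    # Back to pixel indices in the anti-distortion image.
    y_px = int(round(PCI[0] + ox / pixel_pitch))
    y_py = int(round(PCI[1] + oy / pixel_pitch))
    return y_px, y_py
```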
图7中示出了输入图像ABCD以及对应于输入图像的桶型反畸变图像A’B’C’D’。如前所述,在相关的技术中,需要存储完整的反畸变图像A’B’C’D’。而在本公开提供的方法中,由于缓存图像的尺寸小于输入图像的尺寸,因此,不可能将对应于输入图像的反畸变图像中的所有像素数据都存入缓存图像。在一些实施例中,由于在图像显示驱动时采用的是逐行驱动的方法,因此,可以将对应于输入图像的反畸变图像的像素数据从第一行开始逐行写入缓存图像,并对缓存图像实现逐行的显示驱动。
在一些实施例中,根据反畸变图像中的像素点的物高将该反畸变图像中的像素点写入缓存图像还可以包括:当读取的输入图像的行数i小于等于缓存图像的缓存区(buffer)的行数h时,即当i<=h时,对于当前正在处理的输入图像中的像素点p{j,i},可以将p{j,i}的像素值(或灰阶值)写入缓存图像buffer中第y py行、第y px列的像素点buffer[y py][y px],即,buffer[y py][y px]=p{j,i};当读取的输入图像的行数i大于缓存图像buffer存储的行数h时,即当i>h时,可以将p{j,i}的像素值(或灰阶值)写入缓存图像buffer中第y py-(i-h)行、第y px列的像素点buffer[y py-(i-h)][y px],即buffer[y py-(i-h)][y px]=p{j,i}。其中,所述缓存区的行数h可以为表示缓存区中可以存储的缓存图像的最大行数的一半,即,h可以为1/2*k*H*(1-|Y|)+1。
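The buffer-write rule in the preceding paragraph can be sketched as follows; `buffer` is a small 2-D array of h rows, and the i <= h and i > h branches mirror the two cases described above. The helper name and the 0-based index adjustment are assumptions.

```python
def write_to_buffer(buffer, h, i, y_py, y_px, value):
    """Write the gray value of input pixel p{j, i} into the cache image.

    i    -- row index of the input line currently being processed (1-based)
    y_py -- target row of the pixel in the full anti-distortion image (1-based)
    y_px -- target column in the anti-distortion image (1-based)
    """
    if i <= h:
        # The cache still corresponds to the first h rows of the anti-distortion image.
        buffer[y_py - 1][y_px - 1] = value
    else:
        # i - h rows have already been displayed and shifted out,
        # so the target row moves up by that amount.
        buffer[y_py - (i - h) - 1][y_px - 1] = value
```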
在反畸变处理的开始阶段,如前所述,缓存图像buffer至少可以存下反畸变图像中畸变程度最大的一行像素数据。因此,尽管对于反畸变图像来说,距离图像中心越远,图像的畸变程度越大,如前所述设置的缓存图像buffer也至少可以完整存储输入图像中第一行图像对应的反畸变图像的像素数据。
在一些实施例中,如果当前处理的输入图像的第i行数据可以写入缓存图像buffer,则对该行输入图像进行如前所述的反畸变处理,并将对应的像素的像素值(如灰阶值)写入对应于该行输入图像的反畸变图像的缓存图像中的对应像素位置处。例如,在反畸变处理的开始阶段,缓存图像buffer可 以存储如图7中所示的ABEF矩形内的反畸变图像。图7中示出的是由ABEF矩形确定的反畸变图像与A’点具有相同的行数的情形,其中点A’为输入图像中位于第1行的点A在反畸变图像中的位置,即,缓存图像buffer刚好可以存储下输入图像第1行的图像数据。在根据本公开的其他实施例中,所述ABEF矩形还可以与A’点具有不同的行数。例如,由EF确定的反畸变图像的行数可以略大于由A’确定的行数,即,点E、F可以位于点A’的下方。
对点A来说,通过前述反畸变处理可以确定点A在反畸变图像中的对应点A’在反畸变图像中的位置。由于缓存图像buffer尚未被写满,因此可以确定缓存图像buffer与显示屏的前h行对应,并可以如上所述的确定点A’在缓存图像中可以写入的对应位置坐标(例如,buffer[y py][y px])。
如果缓存图像buffer已经写满,则输出缓存图像中的第一行像素数据用于显示,并清除已经显示的第一行像素数据,将已经缓存的数据在缓存图像buffer中上移一行,即将缓存图像中的每行像素数据均上移一行。
对应于图7中的示例,可以理解,在进行了上一步骤中的处理后,可以相当于缓存图像ABEF中存储数据从反畸变图像的第1行至第h行改变为第2行至第(h+1)行。以图7为例,可以认为缓存图像buffer相对于反畸变图像的存储区域逐行下移。
因此,当读取的输入图像的行数i小于等于缓存图像buffer的高度h时,即当i<=h时,即缓存图像的存储区域尚未开始向下移动时,可以直接根据像素点p’在反畸变图像中对应的行数y py和列数y px将与p’对应的输入图像中的像素点,即p点的像素数据写入缓存图像中的第y py行第y px列,即像素点buffer[y py][y px]。当读取的输入图像的行数i大于缓存图像buffer存储的行数h时,即当i>h时,如前所述,此时缓存图像已输出i-h行反畸变数据,因此当前的p’点应写入缓存图像buffer的第y py-(i-h)行,第y px列,即像素点buffer[y py-(i-h)][y px]。
在一些实施例中,图像处理方法500还可以包括步骤S504:输出缓存图像的第一行像素数据用于显示并清除已显示的像素数据,以及将未显示的各行像素数据在缓存图像中上移一行。对输入图像的所有像素执行反畸变操作后,逐行输出所述缓存图像中的剩余的所有图像数据用于显示。
例如,当读取的输入图像的行数i小于等于缓存图像存储的行数h时, 等待,不进行显示。当读取的输入图像的行数i大于缓存图像存储的行数h时,每处理一行图像数据,根据缓存图像中存储的第一行数据驱动显示屏进行显示,同时,删除已显示的图像数据,并将缓存图像中其他行的数据上移一行。当图像处理完时,缓存图像中还有h行数据没有显示,需要将这h行数据连续输出并进行显示。
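The overall streaming behaviour just described (wait until the first h input rows are processed, then emit one displayed row per processed row while shifting the cache up, and finally flush the remaining h rows) might look like this sketch; `antidistort_row` and `display_row` are hypothetical helpers built from the previous sketches, and the ordering of display versus write within one iteration is one consistent reading of the description.

```python
def stream_antidistortion(input_rows, W, h, antidistort_row, display_row):
    """Row-streaming anti-distortion with a cache of only h rows.

    input_rows      -- iterable over the input image's pixel rows, top to bottom
    antidistort_row -- hypothetical helper writing one input row into `buffer`
    display_row     -- hypothetical helper driving the display with one cache row
    """
    buffer = [[0] * W for _ in range(h)]
    for i, row in enumerate(input_rows, start=1):
        if i > h:
            display_row(buffer[0])          # the cache is full: output its first row
            del buffer[0]                   # discard the displayed row ...
            buffer.append([0] * W)          # ... and shift the remaining rows up one line
        antidistort_row(buffer, h, i, row)  # write this row's anti-distorted pixels
    for row in buffer:                      # after the last input row, flush the cache
        display_row(row)
```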
根据本公开实施例提供的图像处理方法,可以将图像的反畸变处理集成在显示模块中,从而减轻了图像处理模块的数据处理负担,减小了数据延时,并提高了显示模块的通用性。
图3示出了根据本公开实施例的图像处理系统200的示意性的框图。如图3所示,图像处理系统200可以包括显示模块210以及图像处理模块220。其中所述图像处理系统200可以用于虚拟现实显示装置、增强现实显示装置等。
所述显示模块210可以包括传感器单元211、显示单元212、透镜单元213。其中,显示单元212可以对接收的将要显示的图像数据进行反畸变处理,并显示经过反畸变处理后生成的反畸变图像。
所述传感器单元211可以用于检测设备以及用户的当前姿态或动作,作为传感器数据,并传送给图像处理模块220。例如,所述传感器单元211可以是惯性传感器、动作捕捉传感器或其他类型传感器。惯性传感器可以包括速度传感器、加速度传感器、陀螺仪、地磁传感器等,其用于捕捉用户的运动并确定用户的姿态(如用户的头部姿态、身体姿态等)。动作捕捉传感器可以包括红外感应传感器、体感传感器、触摸传感器、位置传感器等,其用于实现用户的动作捕捉,特别是用户的前后左右的移动状态。其他类型传感器可以包括脑电波传感器、定位设备(全球定位系统(GPS)设备、全球导航卫星系统(GLONASS)设备、北斗导航系统设备、伽利略定位系统(Galileo)设备、准天顶卫星系统(QAZZ)设备、基站定位设备、Wi-Fi定位设备等)、压力传感器等检测用户状态、位置信息的传感器,也可以包括光线传感器、温度传感器、湿度传感器等检测周围环境状态的传感器。此外,传感器单元211还可以包括图像采集设备如摄像头等用于实现手势识别、人脸识别等功能。上述多种类型的传感器可以单独或组合使用以实现显示模块的具体功能。
所述显示单元212用于接收将要显示的图像数据,并根据接收的图像数 据来驱动显示屏,并在显示屏上显示与所述图像数据对应的图像。在一些实施例中,显示单元212可以进一步用于对接收的图像数据进行反畸变处理以得到反畸变图像,并根据经过反畸变处理的反畸变图像的图像数据进行显示驱动。其中显示单元212可以通过中央处理单元CPU、图形处理单元GPU、FPGA、ASIC、CPLD中的一种或多种组合实现。
所述透镜单元213可以用于对显示屏上显示的图像进行成像以调整图像的显示位置,从而便于用户观察。例如,在目前的虚拟现实显示设备中,显示屏一般会设置在距离用户眼睛很近的位置,用户无法通过直接观看的方式看到显示屏上显示的图像。透镜单元213用于对显示屏(及显示屏上显示的图像)进行成像,其中显示屏上显示的图像经过所述透镜单元113后所形成的像的位置将落在用户眼睛聚焦的舒适区,例如,位于适于观看的距离。在一些实施例中,透镜单元213可以是透镜或透镜组。例如,透镜单元213可以是凸透镜或菲涅尔透镜等。
在一些实施例中,显示模块210还可以包括佩戴件214。佩戴件214用于辅助用户将显示模块210固定,以适于观看,例如,佩戴件214可以是眼镜、头盔、面罩等任何可以用于将显示屏210固定在用户眼前的配件。在一些实施例中,佩戴件还可以包括手套、衣服、操纵杆等配件。
图像处理模块220配置成对将要显示给用户观看的图像进行数据处理。如图3所示,所述图像处理模块220可以包括数据处理单元221和图像渲染单元222。在一些实施例中,图像处理模块220可以由计算机实现。
数据处理单元221用于对所述传感器单元211采集并传送的传感器数据进行处理,并根据接收的传感器数据确定显示模块的当前状态,例如设备以及用户的当前姿态或动作。
图像渲染单元222用于根据所述传感器数据确定的显示模块210的当前状态对将要显示给用户观看的图像进行渲染处理。
利用本公开实施例提供的图像处理系统200,可以将对图像进行的反畸变处理集成进显示模块210中,从而减轻了图像处理模块220的数据处理负担,减少了由于数据传输等引起的延时,并提高了显示模块210的通用性。
图4示出了根据本公开实施例的显示单元212的示意性的框图。如图4所示,显示单元212可以包括输入单元2121、反畸变处理单元2122、驱动单 元2123以及输出单元2124。
所述输入单元2121可以用于接收要显示的图像数据。在一些实施例中,输入单元2121可以从图3示出的图像处理模块220接收经过渲染的图像数据。例如,输入单元2121可以从图像渲染单元222接收经过渲染处理的图像数据。
所述反畸变处理单元2122可以对通过输入单元2121接收的图像数据进行反畸变处理。
在一些实施例中,反畸变处理单元2122可以由集成电路实现。反畸变处理单元2122还可以包括存储器,用于存储对图像进行反畸变处理时需要的数据。例如,存储器可以是RAM。又例如,反畸变处理单元可以通过连接外部存储器来实现数据的读取和写入。这里的连接可以是有线的或无线的。
所述驱动单元2123可以根据经过反畸变处理后的图像数据执行驱动功能。在一些实施例中,反畸变处理单元2122与驱动单元2123可以实现为同一个集成电路。在另一些实施例中,反畸变处理单元2122与驱动单元2123可以实现为不同的集成电路。
输出单元2124配置成在驱动单元2123的控制下显示经过反畸变处理后的图像数据。输出单元2124可以是一个图像显示设备。作为示例,输出单元2124可以是独立的显示屏或者包含显示屏的其他设备,包括投影设备、手机、计算机、平板电脑、电视、智能可穿戴设备(包括智能眼镜如Google Glass、智能手表、智能指环、智能头盔等)、虚拟显示设备或显示增强设备(如Oculus Rift、Gear VR、Hololens)等设备中的一种或多种。
利用本公开实施例提供的反畸变处理单元2122,可以将图像的反畸变处理集成在显示模块210中,从而减轻了图像处理模块220的数据处理负担,减少了数据延时,并提高了显示模块的通用性。
根据本公开的另一方面,还提供了一种图像处理设备。图8示出了根据本公开实施例的图像处理设备800的示意图。
如图8所示,所述图像处理设备800可以包括处理器801和第一存储器802。其中,所述第一存储器802中存储有指令,当利用所述处理器801执行所述指令时,使得所述处理器801执行如上所述的图像处理方法。
根据本公开实施例,所述图像处理设备800还可以包括第二存储器803。所述第二存储器803可以配置成存储反畸变图像的一部分。例如,所述反畸 变图像的一部分可以是如上所述的反畸变缓存图像。所述反畸变缓存图像可以基于输入图像和所述反畸变参数产生。根据输入图像的尺寸以及透镜单元的反畸变参数确定的所述反畸变缓存图像的尺寸可以是输入图像的一部分,而不是整个输入图像。即,利用根据本公开实施例的图像处理方法获得的反畸变缓存图像的尺寸仅为输入图像的一部分。例如,用于存储反畸变缓存图像的缓存区的行数可以为如上所述的h。相比于存储与输入的尺寸相对应的完整的反畸变图像,仅存储反畸变图像的一部分的数据可以减少用于存储所述反畸变图像的存储空间,并减小传输中的数据大小,提高传输速度。
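As a rough illustration of the storage saving, the following sketch compares a full anti-distortion frame with the partial cache of h rows for the 2160 x 2376 example; the Y value and the bytes-per-pixel figure are assumptions, not parameters from the disclosure.

```python
W, H = 2160, 2376       # input-image resolution (example from the description)
Y = 0.85805             # smallest |column coordinate| of the grid corners (assumed)
k = 1.0
bytes_per_pixel = 3     # e.g. RGB888 (assumption)

h = int(0.5 * k * H * (1 - Y)) + 1   # buffer rows, per the description
print(H * W * bytes_per_pixel)       # full anti-distortion image: about 15.4 MB
print(h * W * bytes_per_pixel)       # h-row cache only: about 1.1 MB
```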
利用根据本公开的图像处理方法可以将对于输入图像的反畸变处理集成在显示模块中,从而减轻了图像处理模块的数据处理负担,由此减小了数据延时,并提高了显示模块的通用性。
为了实现不同的模块、单元以及它们在本申请中所描述的功能,计算机硬件平台可以被用作以上描述的一个或多个元素的硬件平台。这类计算机的硬件元素、操作系统和程序语言是常见的,可以假定本领域技术人员对这些技术都足够熟悉,能够利用这里描述的技术提供图像处理所需要的信息。一台包含用户界面(user interface,UI)元素的计算机能够被用作个人计算机(personal computer,PC)或其他类型的工作站或终端设备,被适当程序化后也可以作为服务器使用。可以认为本领域技术人员对这样的结构、程序以及这类计算机设备的一般操作都是熟悉的,因此所有附图也都不需要额外的解释。
这类计算机可以包括个人电脑、笔记本电脑、平板电脑、手机、个人数码助理(personal digital assistance,PDA)、智能眼镜、智能手表、智能指环、智能头盔及任何智能便携设备或可穿戴设备。本公开的实施例中的特定系统利用功能框图解释了一个包含用户界面的硬件平台。这种计算机设备可以是一个通用目的的计算机设备,或一个有特定目的的计算机设备。两种计算机设备都可以被用于实现本实施例中的特定系统。
计算机系统可以包括通信端口,与之相连的是实现数据通信的网络。计算机系统还可以包括一个处理器,用于执行程序指令。所述处理器可以由一个或多个处理器组成。计算机可以包括一个内部通信总线。计算机可以包括不同形式的程序储存单元以及数据储存单元,例如硬盘,只读存储器(ROM), 随机存取存储器(RAM),能够用于存储计算机处理和/或通信使用的各种数据文件,以及处理器所执行的可能的程序指令。计算机系统还可以包括一个输入/输出组件,支持计算机系统与其他组件(如用户界面)之间的输入/输出数据流。计算机系统也可以通过通信端口从网络发送和接收信息及数据。
技术中的程序部分可以被认为是以可执行的代码和/或相关数据的形式而存在的“产品”或“制品”,通过计算机可读的介质所参与或实现的。有形的、永久的储存介质可以包括任何计算机、处理器、或类似设备或相关的模块所用到的内存或存储器。例如,各种半导体存储器、磁带驱动器、磁盘驱动器或者类似任何能够为软件提供存储功能的设备。
所有软件或其中的一部分有时可能会通过网络进行通信,如互联网或其他通信网络。此类通信可以将软件从一个计算机设备或处理器加载到另一个。例如:从图像处理系统的一个服务器或主机计算机加载至一个计算机环境的硬件平台,或其他实现系统的计算机环境,或与提供图像处理所需要的信息相关的类似功能的系统。因此,另一种能够传递软件元素的介质也可以被用作局部设备之间的物理连接,例如光波、电波、电磁波等,通过电缆、光缆或者空气等实现传播。用来载波的物理介质如电缆、无线连接或光缆等类似设备,也可以被认为是承载软件的介质。在这里的用法除非限制了有形的“储存”介质,其他表示计算机或机器“可读介质”的术语都表示在处理器执行任何指令的过程中参与的介质。
一个计算机可读的介质可能有多种形式,包括有形的存储介质,载波介质或物理传输介质等。稳定的储存介质可以包括:光盘或磁盘,以及其他计算机或类似设备中使用的,能够实现图中所描述的系统组件的存储系统。不稳定的存储介质可以包括动态内存,例如计算机平台的主内存等。有形的传输介质可以包括同轴电缆、铜电缆以及光纤,例如计算机系统内部形成总线的线路。载波传输介质可以传递电信号、电磁信号、声波信号或光波信号等。这些信号可以由无线电频率或红外数据通信的方法所产生。通常的计算机可读介质包括硬盘、软盘、磁带、任何其他磁性介质;CD-ROM、DVD、DVD-ROM、任何其他光学介质;穿孔卡、任何其他包含小孔模式的物理存储介质;RAM、PROM、EPROM、FLASH-EPROM,任何其他存储器片或磁带;传输数据或指令的载波、电缆或传输载波的连接装置、任何其他可以利用计算机读取的 程序代码和/或数据。这些计算机可读介质的形式中,会有很多种出现在处理器在执行指令、传递一个或更多结果的过程之中。
本公开中的“模块”可以指的是存储在硬件、固件中的逻辑或一组软件指令。这里所指的“模块”能够通过软件和/或硬件模块执行,或被存储于任何一种计算机可读的非临时媒介或其他存储设备中。在一些实施例中,一个软件模块可以被编译并连接到一个可执行的程序中。显然,这里的软件模块可以对自身或其他模块传递的信息做出回应,并且/或者可以在检测到某些事件或中断时做出回应。可以在一个计算机可读媒介上提供软件模块,该软件模块可以被设置为在计算设备上(例如处理器)执行操作。这里的计算机可读媒介可以是光盘、数字光盘、闪存盘、磁盘或任何其他种类的有形媒介。也可以通过数字下载的模式获取软件模块(这里的数字下载也包括存储在压缩包或安装包内的数据,在执行之前需要经过解压或解码操作)。这里的软件模块的代码可以被部分的或全部的储存在执行操作的计算设备的存储设备中,并应用在计算设备的操作之中。软件指令可以被植入在固件中,例如可擦可编程只读存储器(EPROM)。显然,硬件模块可以包含连接在一起的逻辑单元,例如门、触发器,以及/或包含可编程的单元,例如可编程的门阵列或处理器。这里所述的模块或计算设备的功能优选的作为软件模块实施,但是也可以被表示在硬件或固件中。一般情况下,这里所说的模块是逻辑模块,不受其具体的物理形态或存储器的限制。一个模块能够与其他的模块组合在一起,或被分隔成为一系列子模块。
需要说明的是,在本说明书中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
最后,还需要说明的是,上述一系列处理不仅包括以这里所述的顺序按时间序列执行的处理,而且包括并行或分别地、而不是按时间顺序执行的处理。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到本公 开可借助软件加必需的硬件平台的方式来实现,当然也可以全部通过硬件来实施。基于这样的理解,本公开的技术方案对背景技术做出贡献的全部或者部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在存储介质中,如ROM/RAM、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例或者实施例的某些部分所述的方法。
以上对本公开进行了详细介绍,本文中应用了具体个例对本公开的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本公开的方法及其核心思想;同时,对于本领域的一般技术人员,依据本公开的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本公开的限制。

Claims (18)

  1. 一种用于包括透镜单元的电子设备的图像处理方法,包括:
    根据输入图像的尺寸以及透镜单元的反畸变参数确定待缓存的图像尺寸;以及
    按照所述待缓存的图像尺寸,基于输入图像和所述反畸变参数,产生具有所述图像尺寸的反畸变缓存图像。
  2. 如权利要求1所述的方法,其中,产生具有所述图像尺寸的反畸变缓存图像包括:
    从所述输入图像中选择一行像素并对该行像素执行反畸变操作,所述反畸变操作包括根据所述反畸变参数确定对应于该行像素的反畸变图像的像素数据,并将该反畸变图像的像素数据写入所述反畸变缓存图像。
  3. 如权利要求2所述的图像处理方法,其中从所述输入图像中选择一行像素并对该行像素执行反畸变操作包括:对所述输入图像中的各行像素逐行执行反畸变操作。
  4. 如权利要求1-3中任一项所述的图像处理方法,其中所述反畸变参数包括所述反畸变图像的多个像素点的高度与所述反畸变图像经所述透镜单元形成的虚像中对应的多个像素点的高度之间的物像关系,基于透镜单元的光学参数确定所述物像关系。
  5. 如权利要求1-4中任一项所述的图像处理方法,其中所述透镜单元的光学参数包括所述透镜单元的焦距、所述输入图像的显示位置与所述透镜单元之间的距离以及用户观看位置与所述透镜单元之间的距离。
  6. 如权利要求1-5中任一项所述的图像处理方法,其中,所述反畸变图像中多个像素点的高度是所述反畸变图像中的多个像素点到透镜中心在所述输入图像上的映射点的各自的距离,所述虚像中对应的多个像素点的高度是所述虚像中的对应的多个像素点到透镜中心在所述虚像中的映射点的各自的距离。
  7. 如权利要求1-6中任一项所述的图像处理方法,其中根据输入图像的尺寸以及透镜单元的反畸变参数确定待缓存的图像尺寸包括:
    基于所述反畸变参数确定用于所述输入图像的反畸变网格,以及所述反 畸变图像的四个顶点在所述反畸变网格上以所述输入图像的中心作为原点确定的坐标值,并分别确定所述四个顶点在列方向上的坐标值的绝对值;
    确定所述四个顶点在列方向上的坐标值的绝对值中的最小绝对值Y;
    根据所述最小绝对值Y确定所述待缓存的图像尺寸。
  8. 如权利要求1-7中任一项所述的方法,其中,所述输入图像的尺寸为W*H,所述待缓存的图像在行方向上的尺寸为W,在列方向上的尺寸为k*H*(1-|Y|)+1,其中k是大于等于1的实数。
  9. 如权利要求1-8中任一项所述的图像处理方法,其中根据所述反畸变参数确定对应于该行像素的反畸变图像的像素数据,并将该反畸变图像的像素数据写入所述反畸变缓存图像包括:
    对于所述输入图像中的每行像素数据中的每一个像素点,确定从所述透镜中心在所述输入图像上的映射点到该像素点的向量;
    基于所述物像关系以及所述输入图像的尺寸确定所述虚像的尺寸;以及
    根据所述向量以及所述虚像的尺寸确定该像素点在所述虚像中的对应像素点的像高;
    基于所述物像关系,根据所述虚像中对应像素点的像高确定该像素点在所述反畸变图像中的对应像素点的物高;
    根据所述反畸变图像中的对应像素点的物高将该像素点的像素数据写入缓存图像。
  10. 如权利要求1-9中任一项所述的图像处理方法,其中根据所述反畸变图像中的对应像素点的物高将该像素点的像素数据写入所述反畸变缓存图像包括:
    根据所述反畸变图像中的对应像素点的物高确定所述对应像素点在所述缓存图像中的对应的像素点,并将该像素点的灰阶值存入反畸变缓存图像中的对应像素点。
  11. 如权利要求1-10中任一项所述的图像处理方法,其中对于所述输入图像中的每行像素数据中的每个像素点,确定从所述透镜中心在所述输入图像上的映射点到所述每个像素点的向量包括:
    确定所述透镜中心在所述输入图像上的映射点到该像素点的距离和方向。
  12. 如权利要求1-11中任一项所述的图像处理方法,还包括:
    输出反畸变缓存图像的第一行像素数据用于显示并清除已显示的像素数据,以及
    将未显示的各行像素数据在反畸变缓存图像中上移一行。
  13. 如权利要求1-12中任一项所述的图像处理方法,还包括:
    对输入图像的所有像素执行反畸变操作后,逐行输出所述反畸变缓存图像中的剩余的图像数据用于显示。
  14. 一种图像处理设备,包括:
    处理器;
    和第一存储器,其中所述第一存储器中存储有指令,当利用所述处理器执行所述指令时,使得所述处理器执行如权利要求1-13中任一项所述的方法;
    所述图像处理设备还包括第二存储器,用于存储所述反畸变图像的一部分。
  15. 一种虚拟现实显示装置,包括:
    传感器,配置成采集用于确定所述虚拟现实显示装置当前状态的传感器数据;
    显示单元,配置成接收基于所述传感器数据确定的输入图像并对所述输入图像进行反畸变处理以生成反畸变图像的像素数据,以及根据所述反畸变图像的像素数据进行显示驱动,其中,所述显示单元包括如权利要求14所述的图像处理设备;以及
    透镜单元,配置成对由所述显示单元驱动显示的图像进行成像。
  16. 如权利要求15所述的虚拟现实显示装置,其中所述传感器包括速度传感器、加速度传感器、地磁传感器、触摸传感器、距离传感器中的一种或多种。
  17. 如权利要求15或16所述的虚拟现实显示装置,其中所述显示单元包括中央处理单元、图形处理单元、FPGA、ASIC、CPLD中的一种或多种。
  18. 如权利要求15-17中任一项所述的虚拟现实显示装置,其中所述透镜单元包括凸透镜、菲涅尔透镜、凹透镜中的一种或多种。
PCT/CN2019/073452 2018-05-29 2019-01-28 图像处理方法、设备以及虚拟现实显示装置 WO2019227958A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19812362.2A EP3806029A4 (en) 2018-05-29 2019-01-28 IMAGE PROCESSING METHOD AND DEVICE AND VIRTUAL REALITY DISPLAY DEVICE
US16/494,588 US11308588B2 (en) 2018-05-29 2019-01-28 Image processing method, device, and virtual reality display apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810534923.3A CN110544209B (zh) 2018-05-29 2018-05-29 图像处理方法、设备以及虚拟现实显示装置
CN201810534923.3 2018-05-29

Publications (1)

Publication Number Publication Date
WO2019227958A1 true WO2019227958A1 (zh) 2019-12-05

Family

ID=68696816

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073452 WO2019227958A1 (zh) 2018-05-29 2019-01-28 图像处理方法、设备以及虚拟现实显示装置

Country Status (4)

Country Link
US (1) US11308588B2 (zh)
EP (1) EP3806029A4 (zh)
CN (1) CN110544209B (zh)
WO (1) WO2019227958A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107861247B (zh) * 2017-12-22 2020-08-25 联想(北京)有限公司 光学部件及增强现实设备
CN113467602B (zh) * 2020-03-31 2024-03-19 中国移动通信集团浙江有限公司 Vr显示方法及系统
CN111652959B (zh) 2020-05-29 2022-01-18 京东方科技集团股份有限公司 图像处理方法、近眼显示设备、计算机设备和存储介质
CN112367441B (zh) * 2020-11-09 2023-08-08 京东方科技集团股份有限公司 一种反畸变图像的生成方法及装置
CN113160067A (zh) * 2021-01-26 2021-07-23 睿爱智能科技(上海)有限责任公司 一种修正vr大视场角畸变的方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140192231A1 (en) * 2013-01-04 2014-07-10 Canon Kabushiki Kaisha Image signal processing apparatus and a control method thereof, and an image pickup apparatus and a control method thereof
CN106780758A (zh) * 2016-12-07 2017-05-31 歌尔科技有限公司 用于虚拟现实设备的成像方法、装置及虚拟现实设备
CN108062156A (zh) * 2016-11-07 2018-05-22 上海乐相科技有限公司 一种降低虚拟现实设备功耗的方法及装置

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003283839A (ja) 2002-03-19 2003-10-03 Sanyo Electric Co Ltd 画像変換方法および装置
EP2533192B1 (en) * 2003-07-28 2017-06-28 Olympus Corporation Image processing apparatus, image processing method, and distortion correcting method
US20130249947A1 (en) * 2011-08-26 2013-09-26 Reincloud Corporation Communication using augmented reality
KR101615332B1 (ko) 2012-03-06 2016-04-26 삼성디스플레이 주식회사 유기 발광 표시 장치의 화소 배열 구조
US8928730B2 (en) * 2012-07-03 2015-01-06 DigitalOptics Corporation Europe Limited Method and system for correcting a distorted input image
EP2804144A1 (en) * 2013-05-16 2014-11-19 SMR Patents S.à.r.l. Method and device for processing input image data
CN105455285B (zh) * 2015-12-31 2019-02-12 北京小鸟看看科技有限公司 一种虚拟现实头盔适配方法
CN106204480B (zh) * 2016-07-08 2018-12-18 石家庄域联视控控制技术有限公司 基于圆锥曲线的图像畸变矫正方法及实时矫正装置
CN206039041U (zh) 2016-08-24 2017-03-22 宁波视睿迪光电有限公司 一种像素结构、显示面板及显示装置
US10373297B2 (en) * 2016-10-26 2019-08-06 Valve Corporation Using pupil location to correct optical lens distortion
CN106600555B (zh) * 2016-12-16 2019-04-16 中国航空工业集团公司洛阳电光设备研究所 一种抗单粒子翻转的dvi图像畸变校正装置
CN106873162B (zh) * 2017-03-14 2019-05-28 上海天马微电子有限公司 显示装置的像素排列方法、显示装置及近眼显示设备

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140192231A1 (en) * 2013-01-04 2014-07-10 Canon Kabushiki Kaisha Image signal processing apparatus and a control method thereof, and an image pickup apparatus and a control method thereof
CN108062156A (zh) * 2016-11-07 2018-05-22 上海乐相科技有限公司 一种降低虚拟现实设备功耗的方法及装置
CN106780758A (zh) * 2016-12-07 2017-05-31 歌尔科技有限公司 用于虚拟现实设备的成像方法、装置及虚拟现实设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3806029A4 *

Also Published As

Publication number Publication date
US11308588B2 (en) 2022-04-19
CN110544209B (zh) 2022-10-25
US20210334943A1 (en) 2021-10-28
CN110544209A (zh) 2019-12-06
EP3806029A1 (en) 2021-04-14
EP3806029A4 (en) 2022-03-23

Similar Documents

Publication Publication Date Title
WO2019227958A1 (zh) 图像处理方法、设备以及虚拟现实显示装置
EP4105766A1 (en) Image display method and apparatus, and computer device and storage medium
JP7339386B2 (ja) 視線追跡方法、視線追跡装置、端末デバイス、コンピュータ可読記憶媒体及びコンピュータプログラム
CN110322542A (zh) 重建真实世界3d场景的视图
KR20210013150A (ko) 조명 추정
EP3798986A1 (en) Location aware visual markers
WO2021164712A1 (zh) 位姿跟踪方法、可穿戴设备、移动设备以及存储介质
US20180158171A1 (en) Display apparatus and controlling method thereof
CN110290285B (zh) 图像处理方法、图像处理装置、图像处理系统及介质
US20240031678A1 (en) Pose tracking for rolling shutter camera
US11961195B2 (en) Method and device for sketch-based placement of virtual objects
US9092909B2 (en) Matching a system calculation scale to a physical object scale
WO2022159942A1 (en) Enhancing three-dimensional models using multi-view refinement
US11741620B1 (en) Plane detection using depth sensor and semantic information
CN112866559B (zh) 图像采集方法、装置、系统及存储介质
US20210264673A1 (en) Electronic device for location-based ar linking of object-based augmentation contents and operating method thereof
CN111866493A (zh) 基于头戴显示设备的图像校正方法、装置及设备
US11887228B2 (en) Perspective correct vector graphics with foveated rendering
CN112488909A (zh) 多人脸的图像处理方法、装置、设备及存储介质
KR102534449B1 (ko) 이미지 처리 방법, 장치, 전자 장치 및 컴퓨터 판독 가능 저장 매체
US12015843B1 (en) Computer and information processing method
US20220270312A1 (en) Perspective correct vector graphics rendering techniques
US20240221328A1 (en) Method and Device for Sketch-Based Placement of Virtual Objects
US20230351674A1 (en) Image processing device and image processing method
KR20230162927A (ko) 인간의 시야 범위로부터의 자기중심적 포즈 추정

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19812362

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019812362

Country of ref document: EP

Effective date: 20210111