WO2024125159A1 - Distortion coefficient calibration method and apparatus for extended reality device, and storage medium - Google Patents

Info

Publication number
WO2024125159A1
Authority
WO
WIPO (PCT)
Prior art keywords
calibration
distortion
image
calibration point
standard
Prior art date
Application number
PCT/CN2023/130102
Other languages
English (en)
French (fr)
Inventor
李子祺
嵇盼
李宏东
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2024125159A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80: Geometric correction
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30244: Camera pose

Definitions

  • the present application relates to the field of computer and communication technology, and in particular to a method, device and storage medium for calibrating distortion coefficients of an extended reality device.
  • Extended reality devices, also called XR devices, include virtual reality devices (VR devices), augmented reality devices (AR devices) and mixed reality devices (MR devices). These devices can simulate three-dimensional dynamic vision and physical behavior.
  • the light emitted by the display in the extended reality device will pass through the optical lens and enter the human eye.
  • the image viewed by the user through the extended reality device may be a distorted image.
  • Therefore, the distortion coefficient of the extended reality device needs to be calibrated so that the distortion can be corrected before the extended reality device is used.
  • the embodiments of the present application provide a method, apparatus, computer device, storage medium and computer program product for calibrating the distortion coefficient of an extended reality device.
  • a method for calibrating distortion coefficients of an extended reality device comprising:
  • each of the calibration point pairs comprising a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, wherein the second calibration point is a corresponding calibration point obtained by collecting the first calibration point through an optomechanical lens of the extended reality device;
  • the distortion relationship to be fitted includes a distortion coefficient to be determined
  • a distortion coefficient calibration device for an extended reality device comprising:
  • An image acquisition module, used to acquire a standard calibration image and a distortion calibration image, wherein the distortion calibration image is formed by capturing, through an optomechanical lens of the extended reality device, the standard calibration image displayed on the display of the extended reality device;
  • a calibration point pair determination module, configured to perform calibration point detection based on the standard calibration image and the distorted calibration image to obtain a plurality of calibration point pairs; each calibration point pair comprises a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, wherein the second calibration point is a corresponding calibration point obtained by collecting the first calibration point through an optomechanical lens of the extended reality device;
  • a numerical fitting module, used to obtain a distortion relationship to be fitted, the distortion relationship to be fitted including a distortion coefficient to be determined, and to numerically fit the distortion relationship to be fitted according to the multiple calibration point coordinate pairs to determine the value of the distortion coefficient in the distortion relationship to be fitted and obtain the distortion relationship, wherein the distortion relationship is used to characterize the conversion relationship of the calibration points between the standard calibration image and the distorted calibration image.
  • a computer device includes a memory and a processor, the memory stores a computer program, and when the processor executes the computer program, the steps in the distortion coefficient calibration method of any extended reality device provided in the embodiments of the present application are implemented.
  • a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps in the method for calibrating the distortion coefficient of any extended reality device provided in an embodiment of the present application.
  • a computer program product or a computer program includes computer instructions stored in a computer-readable storage medium.
  • a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the distortion coefficient calibration method of the extended reality device provided in the above various optional embodiments.
  • FIG. 1 is a diagram showing an application environment of a method for calibrating distortion coefficients of an extended reality device in one embodiment;
  • FIG. 2 is a schematic flow chart of a method for calibrating distortion coefficients of an extended reality device in one embodiment;
  • FIG. 3 is a schematic diagram of a calibration image in one embodiment;
  • FIG. 4 is a schematic diagram of collecting a distortion calibration image in one embodiment;
  • FIG. 5 is a schematic diagram of a calibration point in one embodiment;
  • FIG. 6 is a schematic diagram of a sliding window moving in any direction in one embodiment;
  • FIG. 7 is a schematic diagram of an extended reality device displaying an image in one embodiment;
  • FIG. 8 is a schematic diagram of the overall process of distortion coefficient calibration in one embodiment;
  • FIG. 9 is a schematic flow chart of a method for calibrating distortion coefficients of an extended reality device in a specific embodiment;
  • FIG. 10 is a structural block diagram of a distortion coefficient calibration device for an extended reality device in one embodiment;
  • FIG. 11 is a structural block diagram of a distortion coefficient calibration device for an extended reality device in another embodiment;
  • FIG. 12 is a diagram showing the internal structure of a computer device in one embodiment;
  • FIG. 13 is a diagram showing the internal structure of a computer device in one embodiment.
  • the instruction sequence of a computer program may include various branch instructions, such as conditional jump instructions, etc.
  • a branch instruction is an instruction in a computer program that can cause the computer to execute a different instruction sequence, thereby deviating from its default behavior of executing instructions in sequence.
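  • As a hedged illustration only (this example is not part of the claimed method), the branch instructions produced for a simple conditional can be inspected with Python's standard dis module:

```python
import dis

def branch(x):
    # The conditional below compiles to a conditional jump instruction:
    # execution either falls through or jumps to another instruction,
    # deviating from the default sequential instruction flow.
    return 1 if x > 0 else 0

dis.dis(branch)  # the printed bytecode listing contains a conditional jump
```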
  • the distortion coefficient calibration method of the extended reality device provided in the embodiment of the present application can be applied to the application environment shown in Figure 1.
  • the terminal 102 communicates with the server 104 through the network.
  • the data storage system can store the data that the server 104 needs to process.
  • the data storage system can be integrated on the server 104, or it can be placed on the cloud or other servers.
  • Both the terminal 102 and the server 104 can be used alone to execute the distortion coefficient calibration method of the extended reality device provided in the embodiment of the present application.
  • the terminal 102 and the server 104 can also be used in conjunction to execute the distortion coefficient calibration method of the extended reality device provided in the embodiment of the present application.
  • the terminal 102 can obtain a standard calibration image and a distortion calibration image, and send the standard calibration image and the distortion calibration image to the server 104, so that the server determines the target value of the distortion coefficient of the extended reality device according to the standard calibration image and the distortion calibration image.
  • the terminal 102 may be, but is not limited to, various desktop computers, laptop computers, smart phones, tablet computers, IoT devices, and portable wearable devices.
  • the IoT devices may be smart speakers, smart TVs, smart air conditioners, smart car-mounted devices, etc.
  • the portable wearable devices may be extended reality devices, smart watches, smart bracelets, finger rings, handles, head-mounted devices, etc.
  • the server 104 may be implemented by an independent server or a server cluster consisting of multiple servers.
  • XR technology refers to the integration of reality and virtuality through computer technology and wearable devices to create a virtual environment for human-computer interaction. It combines the technical features of VR (Virtual Reality), AR (Augmented Reality), and MR (Mixed Reality), bringing the experiencer an immersive feeling of seamless transition between the virtual world and the real world.
  • VR technology refers to the use of computers and other devices to produce a virtual world with realistic three-dimensional vision, touch, smell and other sensory experiences, so that people in the virtual world have an immersive feeling.
  • AR technology superimposes virtual information on the real world, and can even go beyond reality; to a certain extent, it is an extension of VR technology. Relatively speaking, AR devices are small, light, and portable. MR technology is a further development of VR and AR technology.
  • XR technology encompasses the characteristics of the above three technologies and has broad application prospects.
  • It can be used in scenarios of remote teaching of science and experimental courses in education and training, or in immersive entertainment scenarios in film and television entertainment, such as immersive movie watching and games, or in exhibition activities such as concerts, dramas, and museums, or in 3D home decoration and architectural design scenarios in industrial modeling and design, or in new consumption scenarios, such as cloud shopping, cloud fitting, and other scenarios.
  • a method for calibrating the distortion coefficient of an extended reality device is provided, and the method is described below as applied to a computer device by way of example.
  • the computer device may be the terminal or the server in FIG. 1.
  • the method for calibrating the distortion coefficient of an extended reality device comprises the following steps:
  • Step 202: obtain a standard calibration image and a distortion calibration image; the distortion calibration image is formed by capturing, through the optomechanical lens of the extended reality device, the standard calibration image displayed on the display of the extended reality device.
  • the standard calibration image refers to a standardized image used for calibration.
  • the standard calibration image can specifically be a standardized checkerboard image, that is, a checkerboard image with evenly distributed corner points. A corner point is an extreme point, for example the point at which two straight lines meet to form an angle.
  • the distorted calibration image refers to the image obtained after the standard calibration image is distorted. Distortion refers to a deformation, that is, a change in shape.
  • FIG. 3 shows a schematic diagram of a calibration image in one embodiment: image 301 is a standard calibration image, and image 302 is a distorted calibration image.
  • extended reality is a general term for various new immersive technologies such as virtual reality (VR), augmented reality (AR) and mixed reality (MR), and extended reality devices are a general term for virtual reality devices, augmented reality devices and mixed reality devices.
  • extended reality technology is a computer system that can create a virtual world or merge the real world with the virtual world.
  • the distortion coefficient refers to the coefficient that characterizes the distortion produced by the optomechanical lens in the extended reality device.
  • the distortion calibration image is acquired by an image acquisition device; during the process of the image acquisition device acquiring the distortion calibration image, the optomechanical lens of the extended reality device is located between the image acquisition device and the display of the extended reality device, and the optical center of the optomechanical lens is aligned with the center of the display.
  • the standard calibration image can be input into the extended reality device and displayed on its display, so that the image acquisition device can capture the displayed standard calibration image through the optomechanical lens of the extended reality device to obtain the distortion calibration image.
  • the extended reality device may include an optomechanical lens and a display.
  • the optomechanical lens may include a plurality of lenses.
  • the optical center of the optomechanical lens may be aligned with the center of the display, so that the light emitted by the display when displaying the standard calibration image may be transmitted to the image acquisition device through the optomechanical lens, and then the image acquisition device may image the received light to obtain a distortion calibration image.
  • FIG. 4 shows a schematic diagram of the acquisition of a distortion calibration image in one embodiment.
  • the optomechanical lens in the extended reality device may be a complex folded optical path ultra-short focus XR optomechanical lens, which is also called an optomechanical pancake lens, and is an ultra-thin XR lens that folds the optical path through optical elements.
  • By equipping the extended reality device with an optomechanical pancake lens, the thickness of the extended reality device can be greatly reduced.
  • the pancake lens is also called a folded optical path lens, which adopts a "folded" optical path structure to shorten the straight-line distance from the screen to the eye while ensuring the magnification of the virtual image.
  • the pancake lens includes a semi-reflective and semi-transparent lens, a phase delay plate, and a reflective polarizer.
  • the light is folded back and forth multiple times between the lens, the phase delay plate, and the reflective polarizer, and is finally emitted from the reflective polarizer.
  • In this way, the volume of the optomechanical part can be reduced, thereby reducing the volume of the entire XR device and improving wearing comfort.
  • the distance between the image acquisition device and the optomechanical lens can correspond to the distance between the human eye and the optomechanical lens when the user uses the extended reality device.
  • that is, the distance between the image acquisition device and the optomechanical lens can be kept consistent with the distance between the human eye and the optomechanical lens.
  • the center of the image acquisition device, the optical center of the optomechanical lens, and the center of the display may be aligned.
  • the center of the image acquisition device, the optical center of the optomechanical lens, and the center of the display may be located on a horizontal line.
  • In this way, the distortion calibration image can be acquired quickly and conveniently: the standard calibration image is simply displayed on the display and captured by the image acquisition device through the optomechanical lens, which improves the acquisition efficiency of the distortion calibration image and thus the calibration efficiency of the distortion coefficient.
  • Step 204: perform calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple calibration point pairs; each calibration point pair includes a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, where the second calibration point is the calibration point corresponding to the first calibration point, obtained by collecting the first calibration point through the optomechanical lens of the extended reality device during acquisition of the distorted calibration image.
  • the calibration point refers to a position point in the calibration image with calibration characteristics.
  • the calibration point can be a corner point in the checkerboard.
  • the computer device can perform calibration point detection on the standard calibration image and the distorted calibration image to obtain multiple first calibration points in the standard calibration image and second calibration points corresponding to each first calibration point in the distorted calibration image, thereby obtaining multiple pairs of calibration point pairs.
  • the first calibration point 501 in the standard calibration image is changed into the second calibration point 502 in the distorted calibration image after distortion
  • the first calibration point 501 corresponds to the second calibration point 502.
  • the first calibration point 501 and the second calibration point 502 form a calibration point pair.
  • the coordinates of the first calibration point in the standard calibration image are called the first calibration point coordinates
  • the coordinates of the second calibration point in the distorted calibration image are called the second calibration point coordinates.
  • FIG. 5 shows a schematic diagram of calibration points in one embodiment.
  • the computer device can perform calibration point detection on the standard calibration image and the distorted calibration image according to the corner detection method of OpenCV (a cross-platform computer vision and machine learning software library) to obtain multiple calibration point pairs.
  • the computer device determines whether the pixel block includes a calibration point feature. If the pixel block includes the calibration point feature, the pixel block is determined to be the first calibration point. The computer device can determine whether the pixel block includes the calibration point feature through a pre-trained machine learning model, and accordingly, the computer device can also determine the second calibration point through the above method.
  • the pixel block may include one or more pixel points.
  • a coordinate system may be established with the center of the standard calibration image as the origin, thereby determining the coordinates of the first calibration point in the standard calibration image in the coordinate system, and obtaining the coordinates of the first calibration point.
  • a coordinate system may be established with the center of the distorted calibration image as the origin, thereby determining the coordinates of the second calibration point in the distorted calibration image in the coordinate system, and obtaining the coordinates of the second calibration point.
  • the sizes of the standard calibration image and the distorted calibration image may be adjusted so that the two images have a unified size.
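  • A minimal sketch of this detection and normalization step, assuming a checkerboard calibration image and OpenCV's built-in corner detector (the function name, pattern size, and resizing step here are illustrative, not taken from the patent):

```python
import cv2
import numpy as np

def detect_calibration_points(image_path, pattern_size=(9, 6), size=None):
    """Detect checkerboard corners and express them in a coordinate
    system whose origin is the image center."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if size is not None:
        gray = cv2.resize(gray, size)  # unify standard/distorted image sizes
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        raise ValueError("calibration points not found")
    corners = corners.reshape(-1, 2)   # pixel coordinates, origin at top-left
    h, w = gray.shape
    return corners - np.array([w / 2.0, h / 2.0])  # origin at image center
```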
  • Step 206: obtain the distortion relationship to be fitted; the distortion relationship to be fitted includes the distortion coefficient to be determined.
  • the computer device may obtain a preset distortion relationship to be fitted, which is used to fit the distortion relationship, and the distortion relationship is used to characterize the conversion relationship between the calibration points of the standard calibration image and the distorted calibration image, wherein the distortion coefficient in the distortion relationship to be fitted is to be determined.
  • the distortion relationship to be fitted can be determined by a formula in the following quantities (a reconstructed form is given after the definitions below):
  • r_u refers to the undistorted distance, and r_d refers to the distorted distance.
  • the undistorted distance refers to the distance between the coordinates of the first calibration point and the center point of the standard calibration image.
  • the distorted distance refers to the distance between the coordinates of the second calibration point and the center point of the distorted calibration image.
  • (x_u, y_u) refers to the coordinates of the first calibration point, determined in a coordinate system whose origin is the center point of the standard calibration image.
  • (x_d, y_d) refers to the coordinates of the second calibration point, determined in a coordinate system whose origin is the center point of the distorted calibration image.
  • c_1 and c_2 are the distortion coefficients to be determined.
  • the coordinates of the first calibration point are the coordinates of the first calibration point in the standard calibration image, and the coordinates of the second calibration point are the coordinates of the second calibration point in the distorted calibration image.
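  • A hedged reconstruction of the announced formula, consistent with the definitions above and assuming the common two-coefficient radial polynomial model (not a verbatim quotation of the publication):

    $r_u = r_d\,(1 + c_1 r_d^2 + c_2 r_d^4)$, equivalently $x_u = x_d\,(1 + c_1 r_d^2 + c_2 r_d^4)$ and $y_u = y_d\,(1 + c_1 r_d^2 + c_2 r_d^4)$,

    where $r_u = \sqrt{x_u^2 + y_u^2}$ and $r_d = \sqrt{x_d^2 + y_d^2}$.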
  • Step 208: numerically fit the distortion relationship to be fitted based on the multiple calibration point pairs to determine the value of the distortion coefficient in the distortion relationship to be fitted and obtain the distortion relationship, which is used to characterize the conversion relationship of the calibration points between the standard calibration image and the distorted calibration image.
  • numerical fitting, also called curve fitting, refers to the process of obtaining a continuous function (that is, a curve) from a number of discrete data points, such that the obtained continuous function matches the input discrete data.
  • the calibration points in the standard calibration image can be converted into the calibration points in the distorted calibration image.
  • the distortion relationship can be specifically a function.
  • given the coordinates of a calibration point A in the standard calibration image, the function can output the coordinates of the corresponding calibration point B in the distorted calibration image.
  • the computer device can determine the coordinate pair corresponding to each calibration point pair.
  • the coordinate pair includes the coordinates of the first calibration point and the coordinates of the second calibration point, and the first calibration point and the second calibration point belong to the same calibration point pair.
  • for example, the coordinate pair A corresponding to the calibration point pair A may include the coordinates of the first calibration point a and the coordinates of the second calibration point b.
  • the value of the distortion coefficient to be determined in the distortion relationship to be fitted is determined. For example, the value of c_1 and the value of c_2 in the above formula can be determined.
  • the target value of the distortion coefficient is the distortion coefficient value used when the optomechanical lens of the extended reality device produces distortion.
  • In this way, the distortion coefficient used when the extended reality device produces distortion is calibrated. Based on the calibrated distortion coefficient, distortion correction, depth estimation, and spatial positioning can subsequently be performed.
  • the computer device can perform numerical fitting on the distortion relationship to be fitted based on the multiple calibration point coordinate pairs by the least squares method, the gradient descent method, the trust region algorithm, the Gauss-Newton iteration method, or the like, to obtain the value of the distortion coefficient in the distortion relationship to be fitted.
  • the distortion relationship to be fitted can be numerically fitted based on multiple pairs of calibration points to obtain the value of the distortion coefficient in the distortion relationship to be fitted, that is, to obtain the distortion relationship that characterizes the conversion relationship of the calibration points between the standard calibration image and the distorted calibration image, thereby achieving the purpose of automatic calibration of the distortion coefficient. Since the present application only requires one standard calibration image to realize the distortion coefficient calibration of the extended reality device, the process of distortion coefficient calibration is greatly simplified and the efficiency of distortion coefficient calibration is improved.
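  • A minimal numerical-fitting sketch using the least squares method mentioned above; it assumes the two-coefficient radial model from the hedged reconstruction given earlier, and scipy's generic least-squares solver stands in for whichever fitting method is chosen:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(c, pts_u, pts_d):
    """Residual between the undistorted coordinates predicted from the
    distorted ones and the detected first calibration points."""
    c1, c2 = c
    rd2 = np.sum(pts_d ** 2, axis=1)          # r_d^2 for each pair
    scale = 1.0 + c1 * rd2 + c2 * rd2 ** 2    # 1 + c1*r_d^2 + c2*r_d^4
    return (pts_d * scale[:, None] - pts_u).ravel()

def fit_distortion(pts_u, pts_d):
    """pts_u, pts_d: (N, 2) arrays of first/second calibration point
    coordinates in their center-origin coordinate systems."""
    result = least_squares(residuals, x0=np.zeros(2), args=(pts_u, pts_d))
    return result.x                           # fitted values of c_1 and c_2
```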
  • calibration point detection is performed based on the standard calibration image and the distorted calibration image to obtain multiple calibration point coordinate pairs, including: performing calibration point detection on the standard calibration image to obtain multiple first calibration points, and determining the coordinates of each first calibration point in the standard calibration image to obtain the first calibration point coordinates corresponding to each of the multiple first calibration points; performing calibration point detection on the distorted calibration image to obtain multiple second calibration points, and determining the coordinates of each second calibration point in the distorted calibration image to obtain the second calibration point coordinates corresponding to each of the multiple second calibration points; determining the positional relationship between the first calibration points according to the first calibration point coordinates corresponding to each of the multiple first calibration points; determining the positional relationship between the second calibration points according to the second calibration point coordinates corresponding to each of the multiple second calibration points; and matching the multiple first calibration points and the multiple second calibration points according to the positional relationship between the first calibration points and the positional relationship between the second calibration points to obtain multiple calibration point pairs.
  • the computer device performs calibration point detection on the standard calibration image to identify each first calibration point in the standard calibration image, and outputs the first calibration point coordinates corresponding to each first calibration point.
  • the first calibration point refers to a pixel point or a pixel block in the standard calibration image
  • the first calibration point coordinates refer to the position coordinates of the pixel point or the pixel block in the standard calibration image.
  • the standard calibration image is a checkerboard image
  • the white color in the standard calibration image can be used as the background color, so that the computer device identifies the corner points of each black square, and uses the identified corner points as the first calibration points.
  • the computer device can also perform calibration point detection on the distorted calibration image, use the white color of the distorted calibration image as the background color, identify the corner points of the black squares in the distorted calibration image, obtain multiple second calibration points, and output the second calibration point coordinates corresponding to each second calibration point.
  • the computer device determines the positional relationship between each first calibration point according to the horizontal and vertical coordinate values in the coordinates of each first calibration point. Since the first calibration point and the first calibration point coordinates are one-to-one corresponding, the positional relationship between each first calibration point is also the positional relationship between the coordinates of each first calibration point. For example, when the first calibration point coordinate of the first calibration point A is (1,1), the first calibration point coordinate of the first calibration point B is (1,0), and the first calibration point coordinate of the first calibration point C is (0,1), it can be considered that the first calibration point A is located on the right side of the first calibration point C and above the first calibration point B.
  • the computer device sorts the first calibration points according to the positional relationship between the first calibration points to obtain a first calibration point matrix.
  • the identifier A of the first calibration point A is located on the right side of the identifier C of the first calibration point C and above the identifier B of the first calibration point B.
  • the computer device can determine the positional relationship between each second calibration point in the above manner. Since there is a one-to-one correspondence between the second calibration points and the second calibration point coordinates, the positional relationship between the second calibration points is also the positional relationship between the second calibration point coordinates. According to the positional relationship between the second calibration points, the second calibration points are sorted to obtain a second calibration point matrix.
  • the computer device regards the first calibration point and the second calibration point located at the same position in the first calibration point matrix and the second calibration point matrix as a pair of calibration points, and regards the first calibration point coordinates of the first calibration point and the second calibration point coordinates of the second calibration point in the pair of calibration points as a pair of coordinate pairs. For example, when it is determined that the first calibration point A is located at the intersection of the first row and the first column in the first calibration point matrix, and the second calibration point D is located at the intersection of the first row and the first column in the second calibration point matrix, it can be determined that the first calibration point coordinates corresponding to the first calibration point A and the second calibration point coordinates corresponding to the second calibration point D are a pair of calibration point pairs.
  • a calibration point pair reflecting the change of the calibration point before and after the distortion can be obtained, so that the distortion coefficient of the extended reality device can be obtained based on the calibration point pair.
  • Since distortion does not change the relative positional relationship between the points in an image, by determining the positional relationship between the first calibration points and the positional relationship between the second calibration points, calibration points that correspond to each other before and after distortion can be accurately obtained based on the determined positional relationships.
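  • A minimal matching sketch under these assumptions (a roughly axis-aligned grid of detected corners whose relative order survives the distortion; the helper names are illustrative):

```python
import numpy as np

def sort_into_grid(points, rows, cols):
    """Arrange detected calibration points into a row-major matrix by
    their positional relationship: top-to-bottom, then left-to-right."""
    pts = points[np.argsort(points[:, 1])]            # sort by vertical position
    grid = pts.reshape(rows, cols, 2)                 # chunk into rows
    for r in range(rows):
        grid[r] = grid[r][np.argsort(grid[r][:, 0])]  # sort each row by x
    return grid

def pair_calibration_points(std_pts, dist_pts, rows, cols):
    """Points at the same matrix position form a calibration point pair."""
    g1 = sort_into_grid(std_pts, rows, cols).reshape(-1, 2)
    g2 = sort_into_grid(dist_pts, rows, cols).reshape(-1, 2)
    return list(zip(g1, g2))
```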
  • calibration point detection is performed on a standard calibration image to obtain multiple first calibration points, including: obtaining a sliding window, and triggering the sliding window to slide on the standard calibration image according to a preset moving step length to obtain a standard local image framed by the sliding window; determining a first overall grayscale value of the standard local image; triggering the sliding window to move in an arbitrary direction multiple times to obtain multiple moved standard local images corresponding to the current standard local image; determining a second overall grayscale value of each moved standard local image; and extracting a first calibration point from the standard local image when the difference between each second overall grayscale value and the first overall grayscale value is greater than or equal to a preset difference threshold.
  • the computer device may generate a sliding window and trigger the sliding window to slide on the standard calibration image according to a preset moving step. For example, the computer device may trigger the sliding window to slide over the standard calibration image in a sequence from top to bottom and from left to right.
  • the image selected by the sliding window is called a standard local image.
  • the computer device may determine whether the standard local image selected by the sliding window includes the features of the calibration point. If so, it is determined that there is a calibration point in the sliding window, and the calibration point is used as the first calibration point.
  • A corner point manifests as gradient information, that is, edge changes in two or more directions within a certain area. If a sliding window is used to observe a corner point area, sliding the window in multiple directions produces strong changes in pixel intensity (gradient); that is, changes in the overall grayscale value can be perceived.
  • the standard local image currently framed by the sliding window is called the current standard local image.
  • the computer device can trigger the sliding window 601 to move multiple times in any direction to obtain multiple sliding windows after movement, and each content framed by the sliding window after movement is called the standard local image after movement.
  • the computer device determines the overall grayscale value of the current standard local image, and the overall grayscale value of the current standard local image is called the first overall grayscale value, and determines the overall grayscale value of each standard local image after sliding, which is called the second overall grayscale value, and determines the difference between the second overall grayscale value of each standard local image after sliding and the first overall grayscale value of the current standard local image.
  • FIG. 6 shows a schematic diagram of a sliding window moving in any direction in one embodiment.
  • the computer device may determine the grayscale value of each pixel in the current standard partial image, and use the average of the grayscale values of each pixel in the current standard partial image as the first overall grayscale value.
  • the computer device may determine the grayscale value of each pixel in the moved standard partial image, and use the average of the grayscale values of each pixel in the moved standard partial image as the second overall grayscale value.
  • the standard calibration image can be divided into a plurality of standard partial images, so that the standard calibration image is analyzed based on the plurality of standard partial images obtained by division.
  • In this way, the current standard local image is obtained by frame selection, and the moved standard local images are obtained by triggering the sliding window to slide in any direction. Based on the overall grayscale value of the current standard local image before sliding and the overall grayscale values after sliding, it can be quickly and accurately determined whether the current standard local image includes a corner feature, and the corresponding first calibration point is then determined based on the corner feature.
  • a first calibration point is extracted from a standard local image based on the difference between each second overall grayscale value and the first overall grayscale value, including: subtracting each second overall grayscale value from the first overall grayscale value to obtain the grayscale difference corresponding to each second overall grayscale value; taking the absolute value of each grayscale difference, and filtering out the absolute value greater than or equal to a preset difference threshold from the absolute value of each grayscale difference; and determining the number of the filtered absolute values, and when the number is greater than or equal to the preset number threshold, taking the center of the standard local image as the first calibration point.
  • the computer device may subtract each second overall grayscale value from the first overall grayscale value to obtain the grayscale value difference corresponding to each second overall grayscale. For example, the computer device subtracts the first overall grayscale value from the second overall grayscale value A to obtain the grayscale difference corresponding to the second overall grayscale value A, and subtracts the first overall grayscale value from the second overall grayscale value B to obtain the grayscale difference corresponding to the second overall grayscale value B. Further, the computer device takes the absolute value of each grayscale difference to obtain multiple absolute values, determines a target absolute value greater than or equal to a preset difference threshold, and counts the number of the selected target absolute values.
  • when the number is greater than or equal to the preset number threshold, the center of the current standard local image is used as the first calibration point.
  • the second overall grayscale value is the overall grayscale value of the standard local image after the move
  • the standard local image after the move is obtained after the standard local image is moved. Therefore, when the difference between each second overall grayscale value and the first overall grayscale value is greater than or equal to the preset difference threshold, it can be considered that there is a target area in the standard local image, so that the overall grayscale values of other areas around the target area are all smaller than the overall grayscale value of the target area. It is known that the grayscale value of the corner point area will be greater than the grayscale value of the area around the corner point.
  • the target area is the area where the corner point is located, so that the target area can be used as the first calibration point, and thus the determination of the first calibration point is achieved.
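  • A minimal sketch of this sliding-window check (the window size, moving step, and the two thresholds are illustrative values, not taken from the patent):

```python
import numpy as np

# Eight offsets approximate moving the window "in any direction".
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def is_first_calibration_point(gray, y, x, win=5, shift=2,
                               diff_thresh=10.0, count_thresh=6):
    """Compare the mean grayscale of the window at (y, x) with the mean
    grayscale after moving the window in each direction; the window
    center is taken as a calibration point if enough moves change it
    by at least the difference threshold."""
    half = win // 2
    first = gray[y - half:y + half + 1, x - half:x + half + 1].mean()
    count = 0
    for dy, dx in DIRECTIONS:
        yy, xx = y + dy * shift, x + dx * shift
        second = gray[yy - half:yy + half + 1, xx - half:xx + half + 1].mean()
        if abs(second - first) >= diff_thresh:   # grayscale difference
            count += 1
    return count >= count_thresh                 # enough large differences
```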
  • calibration point detection is performed on the distorted calibration image to obtain multiple second calibration points, including: obtaining a sliding window, and triggering the sliding window to slide on the distorted calibration image according to a preset moving step length to obtain a distorted local image framed by the sliding window; determining a third overall grayscale value of the distorted local image; triggering the sliding window to move in an arbitrary direction multiple times to obtain multiple moved distorted local images corresponding to the distorted local image; determining a fourth overall grayscale value of each moved distorted local image; and extracting second calibration points from the distorted local image based on the difference between each fourth overall grayscale value and the third overall grayscale value.
  • the computer device may generate a sliding window and trigger the sliding window to slide on the distorted calibration image according to a preset moving step. For example, the computer device may trigger the sliding window to slide over the distorted calibration image in a top-to-bottom and left-to-right order.
  • the computer device can determine whether the distorted local image includes the features of the calibration point. If so, it is determined that there is a calibration point in the sliding window, and the calibration point is used as the second calibration point.
  • the above embodiment can be referred to to determine whether the distorted local image framed by the sliding window includes the features of the calibration point. This embodiment is not repeated here.
  • the size of the sliding window generated for the distorted calibration image can be consistent with the size of the sliding window generated for the standard calibration image.
  • the moving step size of the sliding window for the distorted calibration image can be consistent with the moving step size of the sliding window for the standard calibration image.
  • the computer device may only identify some of the calibration points in the standard calibration image and the distorted calibration image, without identifying all the calibration points. For example, only the calibration points near the center area of the standard calibration image and the distorted calibration image may be identified.
  • the distortion calibration image can be divided into multiple distorted local images, so that the second calibration point in the distortion calibration image can be accurately identified based on the multiple distorted local images obtained by division.
  • numerical fitting is performed on the distortion relationship to be fitted according to the multiple calibration point coordinate pairs to determine the target value of the distortion coefficient in the distortion relationship to be fitted, including: obtaining a preset distortion coefficient value increment model, the distortion coefficient value increment model being generated according to the distortion coefficient in the initial distortion relationship; determining the predicted value of the distortion coefficient of the current round; obtaining the predicted value increment of the distortion coefficient of the current round according to the multiple calibration point coordinate pairs and the predicted value of the distortion coefficient of the current round, through the distortion coefficient value increment model; when the predicted value increment of the distortion coefficient does not meet the numerical convergence condition, superimposing the predicted value increment of the current round onto the predicted value of the distortion coefficient of the current round to obtain an updated predicted value of the distortion coefficient; and taking the next round as the current round, taking the updated predicted value as the predicted value of the distortion coefficient of the current round, and returning to the step of obtaining the predicted value increment of the distortion coefficient of the current round according to the multiple calibration point coordinate pairs and the predicted value of the distortion coefficient of the current round, until the predicted value increment meets the numerical convergence condition.
  • the computer device may obtain a preset distortion coefficient value increment model, wherein the distortion coefficient value increment model is used to determine the amount of change in the predicted value of the distortion coefficient in two consecutive iterations.
  • the distortion coefficient increment model may be specifically a function, or may be a function model for predicting the amount of change in the predicted value of the distortion coefficient in two consecutive iterations.
  • the computer device determines the predicted value of the distortion coefficient of the current round, and inputs the predicted value of the distortion coefficient of the current round and a plurality of pairs of calibration point coordinates into the distortion coefficient value increment model, and outputs the predicted value increment of the distortion coefficient of the current round through the distortion coefficient value increment model.
  • the computer device determines whether the predicted value increment of the distortion coefficient of the current round meets the preset numerical convergence condition.
  • If the preset numerical convergence condition is not met, the computer device superimposes the predicted value increment of the distortion coefficient onto the predicted value of the distortion coefficient of the current round to obtain an updated predicted value of the distortion coefficient, and enters the next round of iteration: the updated predicted value is used as the predicted value of the distortion coefficient of the current round, and the distortion coefficient value increment model continues to output the predicted value increment of the current round from the multiple calibration point coordinate pairs and the predicted value of the distortion coefficient of the current round, until the output predicted value increment of the current round meets the numerical convergence condition.
  • When the predicted value increment of the distortion coefficient of the current round meets the numerical convergence condition, the computer device uses the predicted value of the distortion coefficient of the current round as the value of the distortion coefficient in the distortion relationship, and this value is the distortion coefficient value used when the extended reality device produces distortion.
  • the computer device may obtain a preset convergence value, for example 1e-8, and compare the distortion coefficient predicted value increment of the current round with the convergence value. If the increment is less than or equal to the convergence value, it is determined that the increment meets the numerical convergence condition; otherwise, it is determined that the numerical convergence condition is not met.
  • a preset value may be used as the predicted value of the distortion coefficient in the first round.
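  • A minimal sketch of this iteration, assuming a helper step_fn that evaluates the increment model (one possible evaluation is sketched at the end of this section); the initial value, convergence value, and round limit here are illustrative:

```python
import numpy as np

def calibrate(pairs, step_fn, init=(0.0, 0.0), eps=1e-8, max_rounds=100):
    """Iterate the predicted value of the distortion coefficient until the
    predicted value increment meets the numerical convergence condition."""
    c = np.asarray(init, dtype=float)     # predicted value, current round
    for _ in range(max_rounds):
        delta = step_fn(pairs, c)         # increment from the increment model
        if np.max(np.abs(delta)) <= eps:  # compare with convergence value
            break
        c = c + delta                     # superimpose increment onto value
    return c                              # calibrated distortion coefficients
```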
  • the distortion coefficient value increment model is determined based on a residual model, and the residual model is determined based on the distortion relationship to be fitted; the residual model characterizes the residual between a first coordinate change and a second coordinate change, where the first coordinate change is the coordinate change before and after distortion determined based on the predicted value of the distortion coefficient, and the second coordinate change is the coordinate change before and after distortion determined based on the actual value of the distortion coefficient.
  • the corresponding residual model can be determined based on the distortion relationship to be fitted.
  • the residual model is used to characterize the residual between the coordinate change before and after distortion determined based on the predicted value of the distortion coefficient and the coordinate change before and after distortion determined based on the actual value of the distortion coefficient, so that the difference between the predicted value and the actual value of the distortion coefficient in the distortion relationship can be determined from this residual.
  • the predicted value of the distortion coefficient is the fitting value obtained by numerical fitting; the actual value of the distortion coefficient refers to the target of numerical fitting.
  • the residual model can be Taylor expanded to obtain the incremental model of the distortion coefficient value.
  • the partial derivative of the residual model in the direction of the distortion coefficient can be determined to obtain the Jacobian matrix model, and then the incremental model of the distortion coefficient value can be obtained through Taylor expansion, the Gauss-Newton condition, and setting the first-order derivative to zero.
  • the distortion coefficient value increment model can be determined by a formula in the following quantities (a reconstructed form is given after the definitions below):
  • Δc_1 and Δc_2 are the increments of the distortion coefficient predicted value;
  • F(c_1, c_2) and G(c_1, c_2) are the residual models;
  • (x_ui, y_ui) refers to the coordinates of the first calibration point in the i-th coordinate pair, and (x_di, y_di) refers to the coordinates of the second calibration point in the i-th coordinate pair;
  • r_ui refers to the undistorted distance determined based on (x_ui, y_ui), and r_di refers to the distorted distance determined based on (x_di, y_di);
  • c_1 and c_2 are the distortion coefficients to be determined.
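  • The announced formula is reconstructed here under the Gauss-Newton derivation this section describes (Taylor expansion of the residual and setting the first-order derivative to zero), consistent with the Hessian matrix model and iterative matrix model given later; this is a reconstruction, not a verbatim quotation:

    $\min_{\Delta c} \lVert r(c + \Delta c)\rVert^2 \approx \lVert r(c) + J(c)\,\Delta c\rVert^2 \;\Rightarrow\; J(c)^T J(c)\,\Delta c = -J(c)^T r(c)$,

    $(\Delta c_1, \Delta c_2)^T = \bigl(J(c_1,c_2)^T J(c_1,c_2)\bigr)^{-1}\,\bigl(-J(c_1,c_2)^T r(c_1,c_2)\bigr)$,

    where $r(c_1,c_2)$ stacks the residuals $F$ and $G$ over all calibration point pairs, and $J(c_1,c_2)$ is the Jacobian of $r$ with respect to $(c_1, c_2)$.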
  • the model in the present application can be a calculation formula, and the matrix can be a specific value output based on the calculation formula.
  • the above-mentioned residual model can be a matrix calculation formula related to the variables c_1 and c_2, and the result calculated by the calculation formula can be called the residual matrix.
  • the i-th coordinate pair corresponds to the i-th calibration point pair.
  • When the increment of the distortion coefficient predicted value converges to near zero, it means that even if the iteration continues, the fitted predicted value of the distortion coefficient will remain nearly unchanged; at the same time, the coordinate residual of each iteration in the system is generated by the distortion coefficient of that iteration, so the system uses the residual models (F and G) and their partial derivatives with respect to the distortion coefficients (c_1 and c_2) for optimization.
  • When the iterative residual of the distortion coefficient converges to a small range, it means that the residual model has reached the minimum of the optimization target, that is, the distortion coefficient value corresponding to the distortion relationship has been fitted.
  • the predicted value increment of the distortion coefficients of the current round is obtained through the distortion coefficient value increment model, including: based on the multiple calibration point pairs, obtaining the Hessian matrix of the current round through the Hessian matrix model in the distortion coefficient value increment model; based on the multiple calibration point pairs and the predicted values of the distortion coefficients of the current round, obtaining the iterative matrix of the current round through the iterative matrix model in the distortion coefficient value increment model; and fusing the Hessian matrix of the current round with the iterative matrix of the current round to obtain the predicted value increment of the distortion coefficients of the current round.
  • the distortion coefficient value increment model includes a Hessian matrix model and an iterative matrix model.
  • the Hessian matrix model is a model for outputting a Hessian matrix, for example, the Hessian matrix model can be specifically a function model.
  • the iterative matrix model is a model for outputting an iterative matrix, for example, the iterative matrix model can be specifically a function model.
  • the computer device can obtain the Hessian matrix of the current round according to the multiple calibration point pairs and the Hessian matrix model, and output the iterative matrix of the current round according to the multiple calibration point pairs, the predicted value of the distortion coefficient of the current round, and the iterative matrix model.
  • the computer device fuses the iterative matrix of the current round with the Hessian matrix, for example, multiplying the inverse of the Hessian matrix by the iterative matrix, to obtain the predicted value increment of the distortion coefficient of the current round.
  • the Hessian matrix, also called the Hesse matrix, describes the local curvature of a function.
  • the distortion coefficient value increment model is $(\Delta c_1, \Delta c_2)^T = (J(c_1,c_2)^T J(c_1,c_2))^{-1}\,(-J(c_1,c_2)^T r(c_1,c_2))$, wherein $(J(c_1,c_2)^T J(c_1,c_2))$ is the Hessian matrix model, and $(-J(c_1,c_2)^T r(c_1,c_2))$ is the iterative matrix model.
  • the change between the distortion coefficient prediction values in two consecutive iterations can be predicted based on the Hessian matrix and the iteration matrix, so that the value of the distortion coefficient can be subsequently determined based on the change between the distortion coefficient prediction values in two consecutive iterations.
  • the Hessian matrix of the current round is obtained according to the multiple calibration point coordinate pairs and the Hessian matrix model in the distortion coefficient value increment model, including: for each calibration point pair in the multiple calibration point pairs, obtaining, according to the coordinate pair corresponding to the calibration point pair and through the Jacobian matrix model in the Hessian matrix model, the current Jacobian matrix corresponding to the current calibration point coordinate pair; fusing the current Jacobian matrix with the transpose of the current Jacobian matrix to obtain the fused Jacobian matrix corresponding to the calibration point pair; and superimposing the fused Jacobian matrices corresponding to the multiple calibration point pairs to obtain the Hessian matrix of the current round.
  • the Hessian matrix model may include a Jacobian matrix model.
  • the computer device may input the coordinate pairs corresponding to each pair of calibration points into the Jacobian matrix model in the Hessian matrix model, respectively, to obtain the Jacobian matrix corresponding to each pair of calibration points. Further, for each pair of calibration point pairs in the multiple calibration point pairs, the computer device determines the transposition of the Jacobian matrix corresponding to the calibration point pair, obtains the Jacobian matrix transposition, and fuses the Jacobian matrix corresponding to the same calibration point pair and the Jacobian matrix transposition to obtain the fused Jacobian matrix corresponding to each calibration point pair.
  • the Jacobian matrix corresponding to the calibration point pair A is multiplied by the Jacobian matrix transposition to obtain the fused Jacobian matrix corresponding to the calibration point pair A.
  • the computer device may superimpose the fused Jacobian matrices to obtain the Hessian matrix of the current round.
  • the computer device may set an initial value of the Hessian matrix and traverse each calibration point pair. For the first calibration point pair traversed, the computer device may determine its Jacobian matrix and multiply that Jacobian matrix with its transpose to obtain the fused Jacobian matrix of the first traversed pair. The computer device superimposes the initial value of the Hessian matrix with this fused Jacobian matrix to obtain the superimposed Hessian matrix of the first traversed calibration point pair.
  • the computer device then determines the fused Jacobian matrix of the next traversed calibration point pair and superimposes it with the superimposed Hessian matrix of the previously traversed pair to obtain the superimposed Hessian matrix of the next traversed pair. It iterates in this way until the last calibration point pair is traversed, and the superimposed Hessian matrix of the last traversed pair is used as the Hessian matrix of the current round.
  • the computer determines the non-distorted distance r_ui corresponding to the first calibration point coordinates (x_ui, y_ui), determines the distorted distance r_di corresponding to the second calibration point coordinates (x_di, y_di), and inputs (x_ui, y_ui), r_ui, (x_di, y_di) and r_di into the Jacobian matrix model
  • the Jacobian matrix of the i-th coordinate pair [(x_ui, y_ui), (x_di, y_di)] is obtained as J_i = [[∂F/∂c1, ∂F/∂c2], [∂G/∂c1, ∂G/∂c2]] = [[x_ui·r_ui², x_ui·r_ui⁴], [y_ui·r_ui², y_ui·r_ui⁴]]. The Jacobian matrix corresponding to the i-th coordinate pair is also the Jacobian matrix of the i-th calibration point pair.
  • the Hessian matrix of the current round can be quickly obtained based on the fused Jacobian matrix of each pair of calibration point coordinates.
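For concreteness, a minimal sketch of this accumulation is given below in Python with NumPy. It assumes the two-coefficient Brown model used elsewhere in this document, and every name in it (brown_jacobian, hessian_of_round, first_points) is illustrative rather than part of the disclosure.

```python
import numpy as np

def brown_jacobian(xu, yu):
    # Jacobian of the residual (F, G) with respect to (c1, c2) for one
    # calibration point pair under the Brown model,
    # F = xu*(1 + c1*ru^2 + c2*ru^4) - xd; the derivatives depend only
    # on the non-distorted coordinates.
    ru2 = xu * xu + yu * yu                      # squared non-distorted distance
    return np.array([[xu * ru2, xu * ru2 * ru2],
                     [yu * ru2, yu * ru2 * ru2]])

def hessian_of_round(first_points):
    # Superimpose the fused Jacobians J_i^T @ J_i over all pairs to get
    # the 2x2 Hessian matrix of the current round.
    H = np.zeros((2, 2))                         # initial value of the Hessian matrix
    for xu, yu in first_points:                  # traverse each calibration point pair
        J = brown_jacobian(xu, yu)
        H += J.T @ J                             # fuse J with its transpose, then superimpose
    return H
```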
  • an iterative matrix of the current round is obtained, including: for each calibration point pair among the multiple calibration point pairs, determining the Jacobian matrix corresponding to the calibration point coordinate pair; obtaining, according to the calibration point pair and the predicted value of the distortion coefficient of the current round, and through the residual model in the iterative matrix model, the residual matrix corresponding to the targeted calibration point pair; fusing the transpose of the Jacobian matrix corresponding to the targeted calibration point pair with the residual matrix corresponding to the targeted calibration point pair to obtain the fused iterative matrix corresponding to the targeted calibration point pair; and superimposing the fused iterative matrices corresponding to the multiple calibration point pairs to obtain the iterative matrix of the current round.
  • the iterative matrix model may include a Jacobian matrix model and a residual model.
  • the computer device may input each pair of calibration point coordinates and the predicted value of the distortion coefficient of the current round into the Jacobian matrix model in the iterative matrix model to obtain the Jacobian matrix corresponding to each pair of calibration point coordinates. It is easy to understand that the computer device may also reuse the current Jacobian matrix corresponding to the current calibration point coordinate pair generated when calculating the Hessian matrix.
  • each targeted calibration point pair is in turn regarded as the current calibration point pair, and the computer device inputs the coordinate pair corresponding to the current calibration point pair and the predicted value of the distortion coefficient of the current round into the residual model in the iterative matrix model to obtain the residual matrix corresponding to the current calibration point pair output by the residual model.
  • the computer device determines the transpose of the Jacobian matrix corresponding to the current calibration point pair, and multiplies the transpose of the Jacobian matrix corresponding to the current calibration point pair with the residual matrix corresponding to the current calibration point pair to obtain the fused iterative matrix corresponding to the current calibration point pair.
  • the computer device can superimpose the fused iterative matrices corresponding to each calibration point pair to obtain the iterative matrix of the current round.
  • the computer device may set an initial value of the iterative matrix and traverse each calibration point pair. For the first calibration point pair traversed, the computer device may determine its Jacobian matrix and residual matrix and fuse them to obtain the fused iterative matrix of the first traversed pair. The computer device superimposes the initial value of the iterative matrix with this fused iterative matrix to obtain the superimposed iterative matrix of the first traversed calibration point pair.
  • the computer device then determines the fused iterative matrix of the next traversed calibration point pair and superimposes it with the superimposed iterative matrix of the previously traversed pair to obtain the superimposed iterative matrix of the next traversed pair. It iterates in this way until the last calibration point pair is traversed, and the superimposed iterative matrix of the last traversed pair is used as the iterative matrix of the current round.
  • the computer device determines the non-distorted distance r_ui corresponding to the first calibration point coordinates (x_ui, y_ui), determines the distorted distance r_di corresponding to the second calibration point coordinates (x_di, y_di), and inputs (x_ui, y_ui), r_ui, (x_di, y_di), r_di and the predicted value of the distortion coefficient of the current round (c_1j, c_2j) into the residual model
  • the residual matrix of the i-th coordinate pair [(x_ui, y_ui), (x_di, y_di)] is obtained as r_i = (F_i, G_i)^T, where F_i = x_ui·(1 + c_1j·r_ui² + c_2j·r_ui⁴) - x_di and G_i = y_ui·(1 + c_1j·r_ui² + c_2j·r_ui⁴) - y_di. The residual matrix corresponding to the i-th coordinate pair is also the residual matrix of the i-th calibration point pair.
  • based on the fused iterative matrix of each calibration point pair, the iterative matrix of the current round can be quickly obtained.
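Continuing the sketch above (and reusing the hypothetical brown_jacobian from it), the residual matrix and the accumulated iterative matrix -J^T r might look as follows; brown_residual and iteration_matrix_of_round are again illustrative names, not the patent's API.

```python
import numpy as np

def brown_residual(xu, yu, xd, yd, c1, c2):
    # Residual (F, G): the distorted coordinates predicted by (c1, c2)
    # minus the observed distorted coordinates.
    ru2 = xu * xu + yu * yu
    scale = 1.0 + c1 * ru2 + c2 * ru2 * ru2
    return np.array([xu * scale - xd, yu * scale - yd])

def iteration_matrix_of_round(pairs, c1, c2):
    # Superimpose the fused iterative matrices -J_i^T @ r_i over all
    # calibration point pairs to get the iterative matrix of the round.
    b = np.zeros(2)                              # initial value of the iterative matrix
    for (xu, yu), (xd, yd) in pairs:
        J = brown_jacobian(xu, yu)               # reused from the sketch above
        r = brown_residual(xu, yu, xd, yd, c1, c2)
        b += -J.T @ r                            # fuse the transpose of J with the residual
    return b
```

The increment of the round would then be obtained by fusing the two matrices, for example with numpy.linalg.solve(H, b) rather than an explicit matrix inverse.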
  • the Hessian matrix model is generated based on the Jacobian matrix model, and the Jacobian matrix model represents the partial derivative of the residual model in the direction of the distortion coefficient; the iterative matrix is generated based on the Jacobian matrix model and the residual model; the residual model represents the residual between the coordinate change before and after the distortion determined by the predicted value of the distortion coefficient and the coordinate change before and after the distortion determined by the actual value of the distortion coefficient.
  • the goal of numerical fitting is to make the residual between the predicted value of the distortion coefficient and the actual value of the distortion coefficient as close to 0 as possible.
  • a residual matrix can be generated to characterize the difference between the coordinate change before and after distortion determined from the predicted value of the distortion coefficient and the coordinate change before and after distortion determined from the actual value of the distortion coefficient.
  • when the residual output by the residual matrix is close to zero, it can be considered that the predicted value of the distortion coefficient obtained by fitting is close to the actual value of the distortion coefficient, and the fitting goal has been achieved.
  • the goal of numerical fitting in the present application therefore becomes minimizing the least-squares optimization problem over this residual.
  • the predicted value of the distortion coefficient obtained by iteration is the target value.
  • the above method also includes: combining the value of the distortion coefficient with the distortion relationship to be fitted to obtain the distortion relationship; obtaining the image to be displayed, and performing anti-distortion processing on each pixel in the image to be displayed according to the distortion relationship to determine the distortion correction position corresponding to each pixel; moving each pixel in the image to be displayed to the corresponding distortion correction position to obtain an anti-distortion image; and triggering the extended reality device to display the anti-distortion image.
  • the computer device can obtain the image to be displayed, and perform anti-distortion processing on the image to be displayed according to the distortion relationship to obtain an anti-distortion image.
  • the computer device inputs the anti-distortion image into the extended reality device and triggers the extended reality device to display the anti-distortion image, so that the human eye sees a normal undistorted image through the extended reality device. It is easy to understand that the computer device can also trigger the extended reality device to perform anti-distortion processing on the image to be displayed according to the calibrated distortion relationship to obtain an anti-distortion image.
  • the distortion relationship is specifically a function. For each pixel coordinate corresponding to each pixel point in the image to be displayed, the computer device substitutes the pixel coordinate of the current pixel into the inverse function of the distortion relationship to obtain the distortion correction position of the current pixel output by the inverse function. The computer device integrates the distortion correction positions corresponding to each pixel to obtain an anti-distorted image.
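One common way to realize this per-pixel inverse mapping in practice is to compute, for every display pixel, the position the lens will bend it towards and sample the source image there, for example with OpenCV's remap. The sketch below is only an illustration of that idea under the Brown-model assumption; it further assumes the coefficients were fitted in the same centre-origin pixel coordinate system, and pre_distort is a hypothetical name.

```python
import cv2
import numpy as np

def pre_distort(image, c1, c2):
    # Build the anti-distortion (pre-distorted) image: each display pixel
    # samples the source image at the position the lens will bend it
    # towards, so that after the optical distortion the viewer sees the
    # original, undistorted image.
    h, w = image.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0          # centre of the display
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    xu, yu = xs - cx, ys - cy                      # centred display coordinates
    ru2 = xu ** 2 + yu ** 2
    scale = 1.0 + c1 * ru2 + c2 * ru2 ** 2         # Brown radial scale factor
    map_x = (xu * scale + cx).astype(np.float32)   # distorted sample position
    map_y = (yu * scale + cy).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```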
  • FIG7 shows a schematic diagram of an extended reality device displaying an image in one embodiment.
  • the target distortion relationship can be determined based on the calibrated distortion coefficient, so that a normally displayed distortion-free image can be output based on the target distortion relationship, thereby greatly improving the user experience.
  • the computer device performs corner point detection on the chessboard images collected before and after distortion to obtain multiple calibration point coordinate pairs.
  • the computer device obtains the distortion coefficient value increment model and enters the iteration, obtaining the distortion coefficient prediction value increment through the distortion coefficient value increment model and the multiple calibration point coordinate pairs.
  • the computer device determines whether the distortion coefficient prediction value increment converges. If it converges, the iteration ends; if not, the iteration continues until the increment converges, and the distortion coefficient prediction value at convergence is used as the target value of the distortion coefficient.
  • FIG8 shows a schematic diagram of the overall process of distortion coefficient calibration in one embodiment.
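Putting the pieces together, the overall flow sketched in FIG8 could be realized along the following lines; this is a self-contained, vectorized illustration under the same Brown-model assumption, with hypothetical names and the convergence threshold of 1e-8 taken from the application scenario described later.

```python
import numpy as np

def calibrate_brown(undistorted, distorted, tol=1e-8, max_iter=100):
    # Gauss-Newton fit of the Brown coefficients (c1, c2) from matched
    # corner coordinates (two N x 2 arrays in centre-origin coordinates),
    # iterating until the prediction value increment converges.
    xu, yu = undistorted[:, 0], undistorted[:, 1]
    xd, yd = distorted[:, 0], distorted[:, 1]
    ru2 = xu ** 2 + yu ** 2
    # Stack the per-pair 2x2 Jacobians into one (2N x 2) matrix; the two
    # columns are the partial derivatives with respect to c1 and c2.
    J = np.column_stack([np.concatenate([xu * ru2, yu * ru2]),
                         np.concatenate([xu * ru2 ** 2, yu * ru2 ** 2])])
    c = np.zeros(2)                              # initial predicted value of (c1, c2)
    for _ in range(max_iter):
        scale = 1.0 + c[0] * ru2 + c[1] * ru2 ** 2
        r = np.concatenate([xu * scale - xd, yu * scale - yd])   # residuals
        delta = np.linalg.solve(J.T @ J, -J.T @ r)  # Hessian^-1 times iterative matrix
        c += delta                               # superimpose increment onto the prediction
        if np.all(np.abs(delta) < tol):          # numerical convergence condition
            break
    return c
```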
  • the method for calibrating the distortion coefficient of an extended reality device includes:
  • the computer device obtains a standard calibration image and a distortion calibration image; the distortion calibration image is formed by collecting an image through the optical machine lens of the extended reality device when the display of the extended reality device displays the standard calibration image as an image.
  • the computer device obtains a sliding window, and triggers the sliding window to slide on the standard calibration image according to a preset moving step length, to obtain a current standard local image framed by the sliding window; and determines a first overall grayscale value of the current standard local image.
  • the computer device triggers the sliding window to move in any direction for multiple times to obtain multiple moved standard local images corresponding to the current standard local image; and determines the second overall grayscale value of each moved standard local image.
  • the computer device obtains a sliding window, and triggers the sliding window to slide on the distortion calibration image according to a preset moving step length, to obtain a current distorted local image framed by the sliding window; and determines a third overall grayscale value of the current distorted local image.
  • the computer device triggers the sliding window to move in an arbitrary direction for multiple times to obtain multiple moved distorted local images corresponding to the current distorted local image; and determines a fourth overall grayscale value of each moved distorted local image.
  • the computer device matches the plurality of first calibration point coordinates and the plurality of second calibration point coordinates according to the positional relationship between the first calibration point coordinates and the positional relationship between the second calibration point coordinates to obtain a plurality of calibration point coordinate pairs.
  • the computer device obtains a current Jacobian matrix corresponding to the current calibration point coordinate pair based on the current calibration point coordinate pair and through the Jacobian matrix model in the Hessian matrix model.
  • the computer device fuses the current Jacobian matrix and the transpose of the current Jacobian matrix to obtain a fused Jacobian matrix corresponding to the current calibration point coordinate pair; superimposes the fused Jacobian matrices corresponding to multiple pairs of calibration point coordinate pairs to obtain the Hessian matrix of the current round.
  • the computer device obtains the current Jacobian matrix corresponding to the current calibration point coordinate pair through the Jacobian matrix model in the iterative matrix model.
  • the computer device obtains a current residual matrix corresponding to the current calibration point coordinate pair based on the current calibration point coordinate pair and the predicted value of the distortion coefficient of the current round, and through the residual model in the iterative matrix model.
  • the computer device fuses the transpose of the current Jacobian matrix with the current residual matrix to obtain a fused iterative matrix corresponding to the current calibration point coordinate pair; superimposes the fused iterative matrices corresponding to multiple pairs of calibration point coordinates to obtain the iterative matrix of the current round.
  • the computer device fuses the Hessian matrix of the current round and the iteration matrix of the current round to obtain the incremental value of the distortion coefficient prediction of the current round; when the incremental value of the distortion coefficient prediction does not meet the numerical convergence condition, the computer device superimposes the incremental value of the distortion coefficient prediction of the current round and the distortion coefficient prediction value of the current round to obtain an updated prediction value of the distortion coefficient.
  • taking the next round as the current round, the computer device uses the updated predicted value as the predicted value of the distortion coefficient of the current round and returns to step S922 to continue execution, until the increment of the distortion coefficient prediction value meets the numerical convergence condition; the predicted value of the distortion coefficient of the last round is taken as the target prediction value corresponding to the distortion coefficient in the initial distortion relationship.
  • the steps in the flowcharts involved in the above embodiments can include multiple steps or stages. These steps or stages are not necessarily executed at the same moment but can be executed at different moments, and their execution order is not necessarily sequential; they can be executed in turn or alternately with other steps, or with at least part of the steps or stages in other steps.
  • the present application also provides an application scenario, which applies the above-mentioned method for calibrating the distortion coefficient of an extended reality device.
  • the application of the method for calibrating the distortion coefficient of an extended reality device in the application scenario is as follows:
  • the model expresses the distorted distance r_d in terms of the non-distorted distance r_u with coefficients c_n, as follows: r_d = r_u·(1 + c_1·r_u² + c_2·r_u⁴ + … + c_n·r_u^(2n)).
  • this system first inputs a standardized chessboard image, and then directly inputs the image into the optical machine display, and obtains the distorted chessboard image (with a white background) through a high-definition camera; finally, the coordinate positions of the chessboard corner points before and after distortion (also called multiple pairs of calibration point coordinates) are obtained through corner point detection (including but not limited to opencv corner point detection).
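As one concrete, non-normative way to obtain those corner positions, OpenCV's chessboard detector can be used; the pattern size (9, 6) and the helper name detect_corners below are assumptions for illustration.

```python
import cv2
import numpy as np

def detect_corners(gray, pattern=(9, 6)):
    # Detect inner chessboard corners and return their coordinates
    # relative to the image centre, matching the centre-origin coordinate
    # system used by the Brown model. `pattern` is (columns, rows) of
    # inner corners and is an assumed value here.
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        raise RuntimeError("chessboard corners not found")
    corners = cv2.cornerSubPix(
        gray, corners, (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    h, w = gray.shape
    centre = np.array([(w - 1) / 2.0, (h - 1) / 2.0], dtype=np.float32)
    return corners.reshape(-1, 2) - centre       # corner coordinates, centre-origin
```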
  • the goal of numerical fitting is to make the residual between the coordinate changes produced by the fitted distortion coefficients (c1, c2) and by the actual distortion coefficients (c′1, c′2) as close to zero as possible (here the Brown model is taken as an example), so the resulting residual relationship is: F(c1,c2) = x_u·(1 + c1·r_u² + c2·r_u⁴) - x_d and G(c1,c2) = y_u·(1 + c1·r_u² + c2·r_u⁴) - y_d.
  • the increments Δc1 and Δc2 of the c1 and c2 dimensions in each iteration can be obtained. Then, by iteratively computing Δc1 and Δc2, it is determined whether their values converge to a very small range (set to 1e-8 in this system). If they converge, the coefficients fitting all chessboard corner points before and after distortion have been obtained, that is, the calibration work is completed.
  • the obtained distortion coefficients can be directly used in the subsequent distortion correction processing. Only after the distorted image is accurately corrected can the related subsequent processing (including but not limited to image display, image quality enhancement, etc.) become smooth.
  • the specific meanings of the parameters in the above formulas can refer to the above embodiments.
  • the present application also provides another application scenario, which applies the above-mentioned method for calibrating the distortion coefficient of an extended reality device.
  • the application of the method for calibrating the distortion coefficient of an extended reality device in this application scenario is as follows:
  • Users can enter the virtual reality world through extended reality devices.
  • users can play games through extended reality devices (XR devices) to enter interactive virtual reality game scenes.
  • they can calibrate the distortion coefficients in advance using the distortion coefficient calibration method proposed in this application, so that the XR device can adjust the virtual reality game scene to be displayed according to the calibrated distortion coefficients and then display it.
  • the virtual reality game scene viewed by the user is a non-distorted scene.
  • users can experience an immersive gaming experience.
  • the distortion coefficient calibration method for the extended reality device provided in each embodiment of the present application is not limited to the above scenarios.
  • the distortion coefficient can be calibrated in advance using the above distortion coefficient calibration method.
  • the distortion coefficient can be calibrated in advance, so that the extended reality device can display a non-distorted movie image to the user based on the calibrated distortion coefficient.
  • the distortion coefficient can be calibrated in advance, so that the extended reality device can display a non-distorted road image with virtual scenes and real scenes superimposed on it to the user based on the calibrated distortion coefficient.
  • the embodiment of the present application also provides a distortion coefficient calibration device for an extended reality device for implementing the distortion coefficient calibration method for the extended reality device involved above.
  • since the solution implemented by the device is similar to the solution described in the above method, for the specific limitations of the one or more device embodiments provided below, reference can be made to the limitations of the distortion coefficient calibration method for the extended reality device above, which will not be repeated here.
  • a distortion coefficient calibration device for an extended reality device comprising: an image acquisition module 1002, a calibration point pair determination module 1004, and a numerical fitting module 1006, wherein:
  • the image acquisition module 1002 is used to acquire a standard calibration image and a distortion calibration image; the distortion calibration image is formed by collecting the image through the optical machine lens of the extended reality device when the display of the extended reality device displays the standard calibration image as a picture.
  • the calibration point pair determination module 1004 is used to perform calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple pairs of calibration point coordinate pairs; each calibration point pair includes a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, and the second calibration point is a corresponding calibration point obtained by collecting the first calibration point through the optical machine lens of the extended reality device.
  • the numerical fitting module 1006 is used to obtain the distortion relationship to be fitted; according to multiple pairs of calibration points, the distortion relationship to be fitted is numerically fitted to determine the value of the distortion coefficient in the distortion relationship to be fitted, and obtain the distortion relationship, wherein the distortion relationship is used to characterize the conversion relationship between the calibration points of the standard calibration image and the distorted calibration image.
  • the distortion calibration image is acquired by an image acquisition device; during the process of the image acquisition device acquiring the distortion calibration image, the optomechanical lens of the extended reality device is located between the image acquisition device and the display of the extended reality device, and the optical center of the optomechanical lens is aligned with the center of the display.
  • the calibration point pair determination module 1004 is further used to perform calibration point detection on the standard calibration image to obtain a plurality of first calibration points, and determine the coordinates of each first calibration point in the standard calibration image to obtain the first calibration point coordinates corresponding to each of the plurality of first calibration points; perform calibration point detection on the distorted calibration image to obtain a plurality of second calibration points, and determine the coordinates of each second calibration point in the distorted calibration image to obtain a plurality of second calibration point coordinates; determine the positional relationship between the first calibration points according to their first calibration point coordinates; determine the positional relationship between the second calibration points according to their second calibration point coordinates; and match the plurality of first calibration points with the plurality of second calibration points according to the two positional relationships to obtain a plurality of calibration point pairs.
  • the calibration point pair determination module 1004 further includes a first calibration point determination module 1041, which is used to obtain a sliding window and trigger the sliding window to slide on the standard calibration image according to a preset moving step length to obtain a standard local image framed by the sliding window; determine a first overall grayscale value of the standard local image according to the grayscale values of the pixels in the standard local image; trigger the sliding window to move in any direction multiple times to obtain multiple moved standard local images corresponding to the standard local image; determine a second overall grayscale value of each moved standard local image according to the grayscale values of the pixels in each moved standard local image; and extract a first calibration point from the standard local image according to the difference between each second overall grayscale value and the first overall grayscale value.
  • the first calibration point determination module 1041 is also used to subtract the first overall grayscale value from each second overall grayscale value to obtain the grayscale difference corresponding to each second overall grayscale value; obtain a preset difference threshold and filter out, from the absolute values of the grayscale differences, those greater than or equal to the preset difference threshold; determine the number of filtered absolute values; and obtain a preset number threshold, and when the number is greater than or equal to the preset number threshold, use the center of the standard local image as the first calibration point.
  • the calibration point pair determination module 1004 also includes a second calibration point determination module 1042, which is used to obtain a sliding window and trigger the sliding window to slide on the distorted calibration image according to a preset moving step size to obtain a distorted local image framed by the sliding window; determine a third overall grayscale value of the distorted local image according to the grayscale values of the pixels in the distorted local image; trigger the sliding window to move in any direction multiple times to obtain a plurality of moved distorted local images corresponding to the distorted local image; determine a fourth overall grayscale value of each moved distorted local image according to the grayscale values of the pixels in each moved distorted local image; and extract a second calibration point from the distorted local image according to the difference between each fourth overall grayscale value and the third overall grayscale value.
  • the numerical fitting module 1006 is further used to obtain a preset distortion coefficient value increment model;
  • the distortion coefficient value increment model is a model for determining the change of the predicted value of the distortion coefficient between two consecutive iterations; determining the predicted value of the distortion coefficient of the current round; obtaining the prediction value increment of the distortion coefficient of the current round through the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round; obtaining the numerical convergence condition, and when the prediction value increment does not meet the numerical convergence condition, superimposing the prediction value increment of the current round with the predicted value of the distortion coefficient of the current round to obtain an updated predicted value of the distortion coefficient; taking the next round as the current round, taking the updated predicted value as the predicted value of the distortion coefficient of the current round, and returning to the step of obtaining the prediction value increment of the current round through the distortion coefficient value increment model according to the multiple calibration point coordinate pairs and the predicted value of the current round, continuing until the prediction value increment meets the numerical convergence condition; and taking the predicted value of the distortion coefficient of the last round as the value corresponding to the distortion coefficient in the distortion relationship to be fitted.
  • the distortion coefficient value increment model is determined based on a residual model; the residual model is determined based on the distortion relationship to be fitted; the residual model characterizes the residual between the first coordinate change and the second coordinate change; the first coordinate change is the coordinate change before and after the distortion determined based on the predicted value of the distortion coefficient; the second coordinate change is the coordinate change before and after the distortion determined based on the actual value of the distortion coefficient.
  • the numerical fitting module 1006 is also used to obtain the Hessian matrix of the current round based on multiple pairs of calibration points and through the Hessian matrix model in the distortion coefficient value increment model; obtain the iterative matrix of the current round based on multiple pairs of calibration points and the predicted values of the distortion coefficients of the current round and through the iterative matrix model in the distortion coefficient value increment model; fuse the Hessian matrix of the current round and the iterative matrix of the current round to obtain the distortion coefficient prediction value increment of the current round.
  • the numerical fitting module 1006 also includes a Hessian matrix determination module 1061, which is used to determine, for each pair of calibration point coordinate pairs in the multiple pairs of calibration point coordinate pairs, the coordinates of the first calibration point in the calibration point pair belonging to the standard calibration image, and determine the coordinates of the second calibration point in the calibration point pair belonging to the distorted calibration image; determine the coordinate pair corresponding to the calibration point pair according to the coordinates of the first calibration point in the calibration point pair belonging to the standard calibration image and the coordinates of the second calibration point in the distorted calibration image; obtain the Jacobian matrix corresponding to the calibration point pair according to the coordinate pair and the Jacobian matrix model in the Hessian matrix model; fuse the Jacobian matrix and the transpose of the Jacobian matrix to obtain the fused Jacobian matrix corresponding to the calibration point pair; superimpose the fused Jacobian matrices corresponding to the multiple pairs of calibration point coordinate pairs to obtain the Hessian matrix of the current round.
  • the numerical fitting module 1006 also includes an iterative matrix determination module 1062, which is used to obtain, through the residual model in the iterative matrix model, the residual matrix corresponding to the calibration point pair according to the calibration point pair and the predicted value of the distortion coefficient of the current round; fuse the transpose of the Jacobian matrix corresponding to the calibration point pair with the residual matrix corresponding to the calibration point pair to obtain the fused iterative matrix corresponding to the calibration point pair; and superimpose the fused iterative matrices corresponding to the multiple calibration point pairs to obtain the iterative matrix of the current round.
  • the Hessian matrix model is generated based on the Jacobian matrix model, and the Jacobian matrix model represents the partial derivative of the residual model in the direction of the distortion coefficient; the iterative matrix is generated based on the Jacobian matrix model and the residual model; the residual model represents the residual between the coordinate change before and after the distortion determined by the predicted value of the distortion coefficient and the coordinate change before and after the distortion determined by the actual value of the distortion coefficient.
  • the distortion coefficient calibration device 1000 of the extended reality device also includes an anti-distortion module, which is used to combine the value of the distortion coefficient with the distortion relationship to be fitted to obtain the distortion relationship; obtain the image to be displayed and perform anti-distortion processing on each pixel in the image to be displayed according to the distortion relationship to determine the distortion correction position corresponding to each pixel; move each pixel to the corresponding distortion correction position to obtain an anti-distortion image; and trigger the extended reality device to display the anti-distortion image.
  • Each module in the distortion coefficient calibration device of the above-mentioned extended reality device can be implemented in whole or in part by software, hardware and a combination thereof.
  • Each of the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, or can be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to each of the above modules.
  • a computer device which may be a server, and its internal structure diagram may be as shown in FIG12.
  • the computer device includes a processor, a memory, an input/output interface (Input/Output, referred to as I/O) and a communication interface.
  • the processor, the memory and the input/output interface are connected via a system bus, and the communication interface is connected to the system bus via the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program and a database.
  • the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium.
  • the database of the computer device is used to store the distortion coefficient calibration data of the extended reality device.
  • the input/output interface of the computer device is used to exchange information between the processor and an external device.
  • the communication interface of the computer device is used to communicate with an external terminal through a network connection.
  • a computer device which may be a terminal, and its internal structure diagram may be shown in FIG13.
  • the computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device.
  • the processor, the memory, and the input/output interface are connected via a system bus, and the communication interface, the display unit, and the input device are connected to the system bus via the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium.
  • the input/output interface of the computer device is used to exchange information between the processor and an external device.
  • the communication interface of the computer device is used to communicate with an external terminal in a wired or wireless manner, and the wireless manner can be implemented through WIFI, a mobile cellular network, NFC (near field communication) or other technologies.
  • when the computer program is executed by the processor, a method for calibrating the distortion coefficient of an extended reality device is implemented.
  • the display unit of the computer device is used to form a visually visible image, and can be a display screen, a projection device or an extended reality imaging device.
  • the display screen can be a liquid crystal display screen or an electronic ink display screen.
  • the input device of the computer device can be a touch layer covered on the display screen, or it can be a button, trackball or touchpad set on the computer device casing, or it can be an external keyboard, touchpad or mouse, etc.
  • FIGS. 12 to 13 are merely block diagrams of partial structures related to the scheme of the present application, and do not constitute a limitation on the computer device to which the scheme of the present application is applied.
  • the specific computer device may include more or fewer components than shown in the figures, or combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor, wherein a computer program is stored in the memory, and the processor implements the steps in the above method embodiments when executing the computer program.
  • a computer-readable storage medium which stores a computer program.
  • the computer program is executed by a processor, the steps in the above method embodiments are implemented.
  • a computer program product or computer program includes computer instructions, the computer instructions are stored in a computer-readable storage medium.
  • a processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-mentioned method embodiments.
  • the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties.
  • Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetic random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc.
  • Volatile memory may include random access memory (RAM) or external cache memory, etc.
  • RAM may be in various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
  • the database involved in the embodiments provided in this application may include at least one of a relational database and a non-relational database.
  • Non-relational databases may include distributed databases based on blockchains, etc., but are not limited thereto.
  • the processor involved in each embodiment provided in this application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, etc., but is not limited thereto.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to a method, apparatus, computer device, storage medium and computer program product for calibrating the distortion coefficient of an extended reality device. The method includes: acquiring a standard calibration image and a distorted calibration image, the distorted calibration image being formed by capturing, through the optical machine lens of the extended reality device, the picture displayed when the display of the extended reality device displays the standard calibration image as the picture (step 202); performing calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple calibration point pairs, each calibration point pair including a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, the second calibration point being the corresponding calibration point obtained by capturing the first calibration point through the optical machine lens of the extended reality device (step 204); acquiring a distortion relationship to be fitted, the distortion relationship to be fitted including a distortion coefficient to be determined (step 206); and performing numerical fitting on the distortion relationship to be fitted according to the multiple calibration point pairs, so as to determine the value of the distortion coefficient in the distortion relationship to be fitted and obtain a distortion relationship, the distortion relationship being used to characterize the conversion relationship of calibration points between the standard calibration image and the distorted calibration image (step 208).

Description

Method, apparatus and storage medium for calibrating the distortion coefficient of an extended reality device

RELATED APPLICATION

This application claims priority to Chinese patent application No. 2022115939727, filed on December 13, 2022 and entitled "Method, apparatus and storage medium for calibrating the distortion coefficient of an extended reality device", the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

The present application relates to the field of computer and communication technology, and in particular to a method, apparatus and storage medium for calibrating the distortion coefficient of an extended reality device.

BACKGROUND

With the rise of extended reality technology, extended reality devices have emerged. Extended reality devices, also called XR devices, include virtual reality devices (VR devices), augmented reality devices (AR devices) and mixed reality devices (MR devices). These devices can realize system simulation of three-dimensional dynamic vision and entity behavior.

When a user uses an extended reality device, the light emitted by the display in the device passes through the optical machine lens and enters the human eye. However, because the light is distorted after passing through the optical machine lens, the image the user views through the device may be a distorted image. To enable the user to view a normal image, distortion correction based on the distortion coefficient of the extended reality device must be performed before the device is used. However, there is currently no unified, standard method for automatically calibrating the distortion coefficient.

SUMMARY

Embodiments of the present application provide a method, apparatus, computer device, storage medium and computer program product for calibrating the distortion coefficient of an extended reality device.

A method for calibrating the distortion coefficient of an extended reality device, the method comprising:

acquiring a standard calibration image and a distorted calibration image, the distorted calibration image being formed by capturing, through the optical machine lens of the extended reality device, the picture displayed when the display of the extended reality device displays the standard calibration image as the picture;

performing calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple calibration point pairs, each calibration point pair comprising a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, the second calibration point being the corresponding calibration point obtained by capturing the first calibration point through the optical machine lens of the extended reality device;

acquiring a distortion relationship to be fitted, the distortion relationship to be fitted comprising a distortion coefficient to be determined; and

performing numerical fitting on the distortion relationship to be fitted according to the multiple calibration point pairs, so as to determine the value of the distortion coefficient in the distortion relationship to be fitted and obtain a distortion relationship, the distortion relationship being used to characterize the conversion relationship of calibration points between the standard calibration image and the distorted calibration image.
An apparatus for calibrating the distortion coefficient of an extended reality device, the apparatus comprising:

an image acquisition module, configured to acquire a standard calibration image and a distorted calibration image, the distorted calibration image being formed by capturing, through the optical machine lens of the extended reality device, the picture displayed when the display of the extended reality device displays the standard calibration image as the picture;

a calibration point pair determination module, configured to perform calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple calibration point pairs, each calibration point pair comprising a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, the second calibration point being the corresponding calibration point obtained by capturing the first calibration point through the optical machine lens of the extended reality device; and

a numerical fitting module, configured to acquire a distortion relationship to be fitted, the distortion relationship to be fitted comprising a distortion coefficient to be determined, and to perform numerical fitting on the distortion relationship to be fitted according to the multiple calibration point coordinate pairs, so as to determine the value of the distortion coefficient in the distortion relationship to be fitted and obtain a distortion relationship, the distortion relationship being used to characterize the conversion relationship of calibration points between the standard calibration image and the distorted calibration image.

A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of any one of the methods for calibrating the distortion coefficient of an extended reality device provided in the embodiments of the present application.

A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any one of the methods for calibrating the distortion coefficient of an extended reality device provided in the embodiments of the present application.

A computer program product or computer program, comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method for calibrating the distortion coefficient of an extended reality device provided in the various optional embodiments above.

Details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features and advantages of the present application will become apparent from the specification, the drawings and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS

To describe the technical solutions in the embodiments of the present application or in the conventional technology more clearly, the accompanying drawings required in the description of the embodiments or the conventional technology are briefly introduced below. Obviously, the drawings in the following description are merely embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from the disclosed drawings without creative effort.

FIG. 1 is a diagram of the application environment of a method for calibrating the distortion coefficient of an extended reality device in one embodiment;

FIG. 2 is a schematic flowchart of a method for calibrating the distortion coefficient of an extended reality device in one embodiment;

FIG. 3 is a schematic diagram of calibration images in one embodiment;

FIG. 4 is a schematic diagram of capturing a distorted calibration image in one embodiment;

FIG. 5 is a schematic diagram of calibration points in one embodiment;

FIG. 6 is a schematic diagram of a sliding window moving in an arbitrary direction in one embodiment;

FIG. 7 is a schematic diagram of an extended reality device displaying an image in one embodiment;

FIG. 8 is a schematic diagram of the overall flow of distortion coefficient calibration in one embodiment;

FIG. 9 is a schematic flowchart of a method for calibrating the distortion coefficient of an extended reality device in a specific embodiment;

FIG. 10 is a structural block diagram of an apparatus for calibrating the distortion coefficient of an extended reality device in one embodiment;

FIG. 11 is a structural block diagram of an apparatus for calibrating the distortion coefficient of an extended reality device in another embodiment;

FIG. 12 is an internal structure diagram of a computer device in one embodiment;

FIG. 13 is an internal structure diagram of a computer device in one embodiment.
DETAILED DESCRIPTION

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present application.

It should be noted that "multiple" herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.

An instruction sequence of a computer program may include various branch instructions, such as conditional jump instructions. A branch instruction is an instruction in a computer program that can cause the computer to execute a different instruction sequence, thereby deviating from its default behavior of executing instructions in order.

The method for calibrating the distortion coefficient of an extended reality device provided in the embodiments of the present application can be applied in the application environment shown in FIG. 1, in which a terminal 102 communicates with a server 104 through a network. A data storage system can store the data that the server 104 needs to process; the data storage system can be integrated on the server 104, or placed on the cloud or on another server. The terminal 102 and the server 104 can each be used alone to perform the method provided in the embodiments of the present application, and they can also cooperate to perform it. Taking cooperation as an example, the terminal 102 can acquire the standard calibration image and the distorted calibration image and send them to the server 104, so that the server determines the target value of the distortion coefficient of the extended reality device according to the standard calibration image and the distorted calibration image.

The terminal 102 may be, but is not limited to, a desktop computer, a laptop, a smartphone, a tablet, an Internet-of-Things device or a portable wearable device; the Internet-of-Things device may be a smart speaker, a smart TV, a smart air conditioner, a smart in-vehicle device, etc.; the portable wearable device may be an extended reality device, a smart watch, a smart band, a ring, a handle, a head-mounted device, etc. The server 104 can be implemented as an independent server or as a server cluster composed of multiple servers.

It should be noted that the present application involves XR technology, for example, calibrating the distortion coefficient in an XR device. XR technology, that is, extended reality technology, refers to fusing the real and the virtual through computer technology and wearable devices to create a human-computer interactive virtual environment. It encompasses the technical characteristics of VR, AR (Augmented Reality) and MR (Mediated Reality, mixed reality), bringing the experiencer a sense of immersion with seamless transition between the virtual world and the real world. VR technology uses computers and other devices to produce a realistic virtual world with three-dimensional visual, tactile, olfactory and other sensory experiences, giving people in the virtual world an immersive feeling; it is mostly used in game and entertainment scenarios, such as VR glasses, VR displays and VR all-in-one machines. AR technology superimposes virtual information onto the real world, even going beyond reality, and is to a certain extent an extension of VR technology; AR device products are comparatively small, light and portable. MR technology is a further development of VR and AR technology; by presenting virtual scenes within real scenes, it builds a closed communication loop between users and greatly enhances the user experience. XR technology encompasses the characteristics of the above three technologies and has broad application prospects: remote teaching of science and experimental courses in education and training; immersive entertainment scenarios in film and entertainment, such as immersive movie watching and games; exhibition scenarios such as concerts, dramas and museums; 3D home decoration and architectural design in industrial modeling and design; and new consumption scenarios such as cloud shopping and cloud fitting.

It should be noted that words such as "first", "second" and similar terms used in the present application do not denote any order, quantity or importance, but are only used to distinguish different components. Unless the context clearly indicates otherwise, singular forms such as "a", "an" or "the" do not denote a quantity limitation but denote the presence of at least one. Quantities such as "multiple" or "multiple copies" mentioned in the embodiments of the present application all refer to "at least two"; for example, "multiple" means "at least two" and "multiple copies" means "at least two copies".
In one embodiment, as shown in FIG. 2, a method for calibrating the distortion coefficient of an extended reality device is provided, described by taking its application to a computer device as an example; the computer device may be the terminal or the server in FIG. 1. The method includes the following steps:

Step 202: acquire a standard calibration image and a distorted calibration image; the distorted calibration image is formed by capturing, through the optical machine lens of the extended reality device, the picture displayed when the display of the extended reality device displays the standard calibration image as the picture.

The standard calibration image refers to a standardized image used for calibration. For example, since the corner points of a chessboard image are evenly distributed and can capture more of the distortion trend, the standard calibration image may specifically be a standardized chessboard image, that is, a chessboard image with evenly distributed corner points. A corner point may be an extreme point; for example, the point where two straight lines form an angle is called a corner point. The distorted calibration image refers to the image obtained after the standard calibration image is distorted. Distortion may refer to a deformed change, for example, a change in shape. For example, referring to FIG. 3, image 301 is the standard calibration image and image 302 is the distorted calibration image. FIG. 3 shows a schematic diagram of calibration images in one embodiment.

Specifically, when the distortion coefficient of the extended reality device needs to be determined, the computer device can acquire the standard calibration image and the distorted calibration image. Extended reality (XR) is the collective name for various new immersive technologies such as virtual reality (VR), augmented reality (AR) and mixed reality (MR), and an extended reality device is the collective name for virtual reality devices, augmented reality devices and mixed reality devices. In theory, extended reality technology is a computer system that can create a virtual world or fuse the real world with a virtual world, thereby giving users a highly immersive experience. The distortion coefficient refers to the coefficient involved when the optical machine lens in the extended reality device produces distortion.

In one embodiment, the distorted calibration image is captured by an image acquisition device; during the capture of the distorted calibration image, the optical machine lens of the extended reality device is located between the image acquisition device and the display of the extended reality device, and the optical center of the optical machine lens is aligned with the center of the display.

Specifically, the distorted calibration image is captured by the image acquisition device. During the capture, the standard calibration image can be input into the extended reality device and displayed by its display, so that the image acquisition device can capture the standard calibration image through the optical machine lens of the extended reality device to obtain the distorted calibration image.

For example, referring to FIG. 4, the extended reality device may include an optical machine lens and a display, and the optical machine lens may include multiple lens elements. The optical center of the optical machine lens can first be aligned with the center of the display, so that the light emitted when the display shows the standard calibration image is transmitted through the optical machine lens to the image acquisition device, which then images the received light to obtain the distorted calibration image. FIG. 4 shows a schematic diagram of capturing a distorted calibration image in one embodiment.

In one embodiment, the optical machine lens in the extended reality device may specifically be an ultra-short-focus XR optical machine lens with a complex folded light path, also called an optical machine pancake lens, which is an ultra-thin XR lens that folds the light path through optical elements. Equipping the extended reality device with an optical machine pancake lens can greatly reduce the thickness of the device.

In one embodiment, the pancake lens, also called a folded light path lens, adopts a "folded" light path structure, shortening the straight-line distance from the screen to the eye while keeping the virtual image magnified. The pancake lens includes a half-mirror (semi-reflective, semi-transmissive) lens element, a phase retarder and a reflective polarizer. In a pancake lens, after the light emitted by the display in the XR device enters the half-mirror lens element, it bounces back and forth multiple times between the lens element, the phase retarder and the reflective polarizer, and finally exits from the reflective polarizer. This solution reduces the volume of the optical machine part, thereby reducing the volume of the whole XR device and improving wearing comfort.

In one embodiment, the distance between the image acquisition device and the optical machine lens may be set with reference to the distance between the human eye and the optical machine lens when a user uses the extended reality device; for example, the two distances may be the same.

In one embodiment, the center of the image acquisition device, the optical center of the optical machine lens and the center of the display may be aligned; for example, the three may lie on one horizontal line.

In the above embodiment, as long as the display shows the standard calibration image, the distorted calibration image can be captured conveniently and quickly by the image acquisition device through the optical machine lens, which improves the efficiency of capturing the distorted calibration image and thus the efficiency of calibrating the distortion coefficient.
Step 204: perform calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple calibration point pairs; each calibration point pair includes a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, the second calibration point being the corresponding calibration point obtained, in the process of obtaining the distorted calibration image, by capturing the first calibration point through the optical machine lens of the extended reality device.

A calibration point refers to a position point in a calibration image that has calibration characteristics. When the standard calibration image and the distorted calibration image are chessboard images, the calibration points may be the corner points in the chessboard.

Specifically, after acquiring the standard calibration image and the distorted calibration image, the computer device can perform calibration point detection on both images to obtain multiple first calibration points in the standard calibration image and the second calibration point corresponding to each first calibration point in the distorted calibration image, thereby obtaining multiple calibration point pairs. For example, referring to FIG. 5, when the first calibration point 501 in the standard calibration image becomes, after distortion, the second calibration point 502 in the distorted calibration image, the first calibration point 501 corresponds to the second calibration point 502, and together they form a calibration point pair. The coordinates of the first calibration point in the standard calibration image are called the first calibration point coordinates; the coordinates of the second calibration point in the distorted calibration image are called the second calibration point coordinates. FIG. 5 shows a schematic diagram of calibration points in one embodiment.

In one embodiment, when the calibration points are corner points, the computer device can perform calibration point detection on the standard calibration image and the distorted calibration image according to the opencv (a cross-platform computer vision and machine learning software library) corner point detection method to obtain multiple calibration point pairs.

In one embodiment, for each pixel block in the standard calibration image, the computer device determines whether the pixel block includes a calibration point feature; if so, the pixel block is determined to be a first calibration point. The computer device can determine whether a pixel block includes a calibration point feature through a pre-trained machine learning model, and correspondingly can determine the second calibration points in the same way. A pixel block may include one or more pixels.

In one embodiment, a coordinate system can be established with the center of the standard calibration image as the origin, so as to determine the coordinates of the first calibration points in that coordinate system and obtain the first calibration point coordinates. Correspondingly, a coordinate system can also be established with the center of the distorted calibration image as the origin, so as to determine the coordinates of the second calibration points in that coordinate system and obtain the second calibration point coordinates.

In one embodiment, before performing calibration point detection on the standard calibration image and the distorted calibration image, the sizes of the two images can be adjusted so that they are unified.
Step 206: acquire a distortion relationship to be fitted; the distortion relationship to be fitted includes a distortion coefficient to be determined.

Specifically, the computer device can acquire a preset distortion relationship to be fitted, which is used to fit out the distortion relationship; the distortion relationship is used to characterize the conversion relationship of calibration points between the standard calibration image and the distorted calibration image. The distortion coefficient in the distortion relationship to be fitted is yet to be determined.

In one embodiment, taking the classical Brown model as an example, the distortion relationship to be fitted can be determined by the following formulas:

x_d = x_u·(1 + c1·r_u² + c2·r_u⁴), y_d = y_u·(1 + c1·r_u² + c2·r_u⁴), that is, r_d = r_u·(1 + c1·r_u² + c2·r_u⁴),

where r_u is the non-distorted distance and r_d is the distorted distance. The non-distorted distance is the distance between the first calibration point coordinates and the center point of the standard calibration image; the distorted distance is the distance between the second calibration point coordinates and the center point of the distorted calibration image. (x_u, y_u) are the first calibration point coordinates, determined in a coordinate system whose origin is the center point of the standard calibration image; (x_d, y_d) are the second calibration point coordinates, determined in a coordinate system whose origin is the center point of the distorted calibration image; c1 and c2 are the distortion coefficients to be determined. The first calibration point coordinates are the coordinates of the first calibration point in the standard calibration image; the second calibration point coordinates are the coordinates of the second calibration point in the distorted calibration image.
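Purely as an illustrative sketch of the formulas above (brown_distort is a hypothetical name, not part of the disclosure), the forward Brown mapping can be written as:

```python
import numpy as np

def brown_distort(xu, yu, c1, c2):
    # Map a non-distorted point (xu, yu), given in the centre-origin
    # coordinate system of the standard calibration image, to its distorted
    # position (xd, yd) under the two-coefficient Brown model.
    ru2 = xu * xu + yu * yu                  # squared non-distorted distance r_u^2
    scale = 1.0 + c1 * ru2 + c2 * ru2 * ru2
    return xu * scale, yu * scale            # (xd, yd); r_d = r_u * scale
```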
Step 208: perform numerical fitting on the distortion relationship to be fitted according to the multiple calibration point pairs, so as to determine the value of the distortion coefficient in the distortion relationship to be fitted and obtain the distortion relationship; the distortion relationship is used to characterize the conversion relationship of calibration points between the standard calibration image and the distorted calibration image.

Numerical fitting, also called curve fitting, refers to the process of obtaining a continuous function (that is, a curve) based on a number of discrete data points, the obtained continuous function agreeing with the input discrete data. Through the conversion relationship of calibration points between the standard calibration image and the distorted calibration image, a calibration point in the standard calibration image can be converted into a calibration point in the distorted calibration image. For example, the distortion relationship may specifically be a function: after the coordinates of calibration point A in the distorted calibration image are input into this function, the function outputs the coordinates of the calibration point B in the standard calibration image corresponding to calibration point A.

Specifically, after obtaining multiple calibration point pairs, the computer device can determine the coordinate pair corresponding to each calibration point pair. A coordinate pair includes the coordinates of a first calibration point and the coordinates of a second calibration point belonging to the same calibration point pair. For example, when calibration point pair A includes first calibration point a and second calibration point b, the coordinate pair A corresponding to calibration point pair A may include the coordinates of first calibration point a and the coordinates of second calibration point b. According to the coordinate pairs corresponding to the multiple calibration point pairs, the value of the distortion coefficient to be determined in the distortion relationship to be fitted is determined; for example, the values of c1 and c2 in the above formulas can be determined. It is easy to understand that this target value of the distortion coefficient is the distortion coefficient value involved when the optical machine lens of the extended reality device produces distortion; at this point, the distortion coefficient of the extended reality device has been calibrated. Calibrating the distortion coefficient enables subsequent distortion correction, depth estimation, spatial positioning and the like based on the calibrated coefficient.

In one embodiment, the computer device can perform numerical fitting on the distortion relationship to be fitted according to the multiple calibration point coordinate pairs through the least squares method, gradient descent, trust region algorithms, Gauss-Newton iteration, etc., to obtain the value of the distortion coefficient in the distortion relationship to be fitted.

In the above method for calibrating the distortion coefficient of an extended reality device, by acquiring the standard calibration image and the distorted calibration image obtained after the standard calibration image is distorted, calibration point detection can be performed on both images to obtain calibration point pairs that include the first calibration point before distortion and the second calibration point after distortion. Based on the multiple calibration point pairs, numerical fitting can be performed on the distortion relationship to be fitted to obtain the value of the distortion coefficient, that is, to obtain the distortion relationship characterizing the conversion relationship of calibration points between the standard calibration image and the distorted calibration image, thereby achieving automatic calibration of the distortion coefficient. Since the present application only requires a single standard calibration image to calibrate the distortion coefficient of an extended reality device, the calibration process is greatly simplified and the calibration efficiency is improved.
In one embodiment, performing calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple calibration point coordinate pairs includes: performing calibration point detection on the standard calibration image to obtain multiple first calibration points, and determining the coordinates of each first calibration point in the standard calibration image to obtain the first calibration point coordinates corresponding to each of the multiple first calibration points; performing calibration point detection on the distorted calibration image to obtain multiple second calibration points, and determining the coordinates of each second calibration point in the distorted calibration image to obtain the second calibration point coordinates corresponding to each of the multiple second calibration points; determining the positional relationship between the first calibration points according to their first calibration point coordinates; determining the positional relationship between the second calibration points according to their second calibration point coordinates; and matching the multiple first calibration points with the multiple second calibration points according to the two positional relationships to obtain multiple calibration point pairs.

Specifically, the computer device performs calibration point detection on the standard calibration image to identify the first calibration points in it and outputs the first calibration point coordinates corresponding to each first calibration point. It is easy to understand that a first calibration point refers to a pixel or pixel block in the standard calibration image, and correspondingly, the first calibration point coordinates refer to the position coordinates of that pixel or pixel block in the standard calibration image. For example, referring to FIG. 5, when the standard calibration image is a chessboard image, the white in the image can be taken as the background color, so that the computer device identifies the corner points of the black squares and takes the identified corner points as first calibration points. Correspondingly, the computer device can also perform calibration point detection on the distorted calibration image, take its white as the background color, identify the corner points of the black squares in the distorted calibration image to obtain multiple second calibration points, and output the second calibration point coordinates corresponding to each second calibration point.

Further, the computer device determines the positional relationship between the first calibration points according to the horizontal and vertical coordinate values in the first calibration point coordinates. Since first calibration points correspond one-to-one with first calibration point coordinates, the positional relationship between the first calibration points is also the positional relationship between the first calibration point coordinates. For example, when the coordinates of first calibration point A are (1, 1), those of first calibration point B are (1, 0) and those of first calibration point C are (0, 1), first calibration point A can be considered to be to the right of first calibration point C and above first calibration point B. Further, the computer device sorts the first calibration points according to their positional relationship to obtain a first calibration point matrix; for example, in the first calibration point matrix, the identifier A of first calibration point A is to the right of the identifier C of first calibration point C and above the identifier B of first calibration point B. Correspondingly, the computer device can determine the positional relationship between the second calibration points in the same way (since second calibration points correspond one-to-one with second calibration point coordinates, this is also the positional relationship between the second calibration point coordinates) and sort the second calibration points accordingly to obtain a second calibration point matrix.

Further, the computer device takes the first calibration point and the second calibration point located at the same position in the first calibration point matrix and the second calibration point matrix as a calibration point pair, and takes the first calibration point coordinates and the second calibration point coordinates of that pair as a coordinate pair. For example, when it is determined that first calibration point A is located at the intersection of the first row and the first column of the first calibration point matrix and second calibration point D is located at the intersection of the first row and the first column of the second calibration point matrix, the first calibration point coordinates corresponding to A and the second calibration point coordinates corresponding to D can be determined to form a calibration point pair. A sketch of such grid ordering and matching is given below.
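The following sketch illustrates the grid ordering and matching described above; it assumes the chessboard stays roughly axis-aligned after distortion (so that sorting by y and then by x reproduces the matrix positions), and all names are illustrative rather than part of the disclosure.

```python
import numpy as np

def order_into_grid(points, rows, cols):
    # Arrange detected calibration points into a rows x cols matrix based
    # on their positional relations: sort top-to-bottom into rows, then
    # left-to-right within each row.
    pts = sorted(points, key=lambda p: p[1])         # top-to-bottom by y
    grid = [sorted(pts[r * cols:(r + 1) * cols], key=lambda p: p[0])
            for r in range(rows)]                    # left-to-right by x
    return np.array(grid)                            # shape (rows, cols, 2)

def match_pairs(first_pts, second_pts, rows, cols):
    # Pair points occupying the same position in the two grids.
    g1 = order_into_grid(first_pts, rows, cols).reshape(-1, 2)
    g2 = order_into_grid(second_pts, rows, cols).reshape(-1, 2)
    return list(zip(map(tuple, g1), map(tuple, g2)))
```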
In this embodiment, calibration point detection yields calibration point pairs that reflect how calibration points change before and after distortion, so that the distortion coefficient of the extended reality device can subsequently be obtained based on these pairs. In addition, since the positional relationship between points in an image does not change after the image is distorted, by determining the positional relationship between the first coordinate points and between the second coordinate points, the calibration points that correspond to each other before and after distortion can be obtained accurately based on the determined positional relationships.
In one embodiment, performing calibration point detection on the standard calibration image to obtain multiple first calibration points includes: obtaining a sliding window and triggering the sliding window to slide on the standard calibration image according to a preset moving step length to obtain the standard local image framed by the sliding window; determining a first overall grayscale value of the standard local image; triggering the sliding window to move in any direction multiple times to obtain multiple moved standard local images corresponding to the current standard local image; determining a second overall grayscale value of each moved standard local image; and extracting a first calibration point from the standard local image when the differences between each second overall grayscale value and the first overall grayscale value are all greater than or equal to a preset difference threshold.

Specifically, when the first calibration points need to be identified, the computer device can generate a sliding window and trigger it to slide on the standard calibration image according to a preset moving step length; for example, the computer device can trigger the sliding window to slide in order from top to bottom and from left to right. The image framed by the sliding window is called a standard local image. For each standard local image framed by the sliding window, the computer device can determine whether it includes a calibration point feature; if so, it determines that the sliding window contains a calibration point and takes it as a first calibration point.

In one embodiment, when the calibration point is a corner point: a corner point is the corner of edges, and the feature of an edge is that the gradient changes abruptly in some direction. Therefore, a corner point should appear as gradient information with edge-like changes in two or more directions within a certain region. If a sliding window is used to observe a corner region, then sliding the window in multiple directions will in every case perceive a strong change in pixel density (gradient), that is, a change in the overall grayscale value.

Based on the above principle, the standard local image currently framed by the sliding window is called the current standard local image. When the sliding window frames the current standard local image, referring to FIG. 6, the computer device can trigger the sliding window 601 to move multiple times in arbitrary directions to obtain multiple moved sliding windows, and the content framed by each moved sliding window is called a moved standard local image. The computer device determines the overall grayscale value of the current standard local image, called the first overall grayscale value, and the overall grayscale value of each moved standard local image, called the second overall grayscale value, and determines the difference between each second overall grayscale value and the first overall grayscale value. When all the differences are greater than or equal to the preset difference threshold, it can be determined that the current standard local image includes a corner feature, and the point indicated by the corner feature can be taken as a first calibration point; for example, if the center point of the current standard local image is a corner point, the center point can be taken as the first calibration point. FIG. 6 shows a schematic diagram of a sliding window moving in an arbitrary direction in one embodiment.

In one embodiment, the computer device can determine the grayscale value of each pixel in the current standard local image and take the average of these grayscale values as the first overall grayscale value. Correspondingly, the computer device can determine the grayscale value of each pixel in a moved standard local image and take the average as the second overall grayscale value.

In the above embodiment, by triggering the sliding window to slide over the standard calibration image according to the moving step length, the standard calibration image can be divided into multiple standard local images, so that the first calibration points in the standard calibration image can be accurately identified based on the divided standard local images. When the current standard local image is framed, triggering the sliding window to slide in arbitrary directions yields the moved local images, so that based on the overall grayscale values before and after the movement it can be quickly and accurately determined whether the current standard local image includes a corner feature, and the corresponding first calibration point can then be determined from that feature.
In one embodiment, extracting the first calibration point from the standard local image according to the differences between the second overall grayscale values and the first overall grayscale value includes: subtracting the first overall grayscale value from each second overall grayscale value to obtain the grayscale difference corresponding to each second overall grayscale value; taking the absolute value of each grayscale difference and filtering out the absolute values greater than or equal to the preset difference threshold; and determining the number of filtered absolute values and, when the number is greater than or equal to a preset number threshold, taking the center of the standard local image as the first calibration point.

Specifically, after obtaining the second overall grayscale value of each moved standard local image, the computer device can subtract the first overall grayscale value from each second overall grayscale value to obtain the grayscale difference corresponding to each second overall grayscale value. For example, the computer device subtracts the first overall grayscale value from second overall grayscale value A to obtain the grayscale difference corresponding to A, and subtracts the first overall grayscale value from second overall grayscale value B to obtain the grayscale difference corresponding to B. Further, the computer device takes the absolute value of each grayscale difference to obtain multiple absolute values, determines the target absolute values that are greater than or equal to the preset difference threshold, and counts the number of filtered target absolute values. When this number is greater than or equal to the preset number threshold, the center of the current standard local image is taken as the first calibration point; for example, when the difference between every second overall grayscale value and the first overall grayscale value is greater than or equal to the preset difference threshold, the center of the current standard local image is taken as the first calibration point.

In the above embodiment, since the second overall grayscale value is the overall grayscale value of the moved standard local image, which is obtained by moving the standard local image, when the differences between all second overall grayscale values and the first overall grayscale value are greater than or equal to the preset difference threshold, it can be considered that a target region exists in the standard local image such that the overall grayscale values of the other regions around the target region all differ from the overall grayscale value of the target region. Since the grayscale value of a corner region is known to differ in this way from the grayscale values of the regions around the corner, when such a target region is determined to exist, it can be considered the region where a corner point lies, and the target region can be taken as the first calibration point; in this way, the determination of the first calibration point is achieved. A window-shift check of this kind is sketched below.
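A minimal sketch of this window-shift check follows; the window size, shift step, difference threshold and count threshold are all illustrative values rather than values fixed by the disclosure, and the window is assumed to lie away from the image border.

```python
import numpy as np

def is_calibration_point(gray, x, y, win=5, step=2,
                         diff_thresh=8.0, count_thresh=6):
    # Check whether the window centred at (x, y) contains a calibration
    # point: shift the window in eight directions and count how many
    # shifts change the mean grayscale value by at least diff_thresh.
    half = win // 2
    patch = gray[y - half:y + half + 1, x - half:x + half + 1]
    base = patch.mean()                          # first overall grayscale value
    shifts = [(-step, -step), (-step, 0), (-step, step), (0, -step),
              (0, step), (step, -step), (step, 0), (step, step)]
    hits = 0
    for dy, dx in shifts:
        moved = gray[y + dy - half:y + dy + half + 1,
                     x + dx - half:x + dx + half + 1]
        if abs(moved.mean() - base) >= diff_thresh:   # grayscale difference
            hits += 1
    return hits >= count_thresh                  # enough directions changed strongly
```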
In one embodiment, performing calibration point detection on the distorted calibration image to obtain multiple second calibration points includes: obtaining a sliding window and triggering it to slide on the distorted calibration image according to a preset moving step length to obtain the distorted local image framed by the sliding window; determining a third overall grayscale value of the distorted local image; triggering the sliding window to move in any direction multiple times to obtain multiple moved distorted local images corresponding to the distorted local image; determining a fourth overall grayscale value of each moved distorted local image; and extracting a second calibration point from the distorted local image according to the differences between the fourth overall grayscale values and the third overall grayscale value.

Specifically, when the second calibration points need to be identified, the computer device can generate a sliding window and trigger it to slide on the distorted calibration image according to a preset moving step length, for example, in order from top to bottom and from left to right. For each distorted local image framed by the sliding window, the computer device can determine whether it includes a calibration point feature; if so, it determines that the sliding window contains a calibration point and takes it as a second calibration point. Whether a framed distorted local image includes a calibration point feature can be determined with reference to the above embodiments and is not repeated here. In one embodiment, the size of the sliding window generated for the distorted calibration image may be the same as that generated for the standard calibration image, and the moving step length of the sliding window for the distorted calibration image may be the same as that for the standard calibration image.

In one embodiment, the computer device may also identify only part of the calibration points in the standard calibration image and the distorted calibration image rather than all of them; for example, only the calibration points near the central region of the two images may be identified.

In the above embodiment, by triggering the sliding window to slide over the distorted calibration image according to the moving step length, the distorted calibration image can be divided into multiple distorted local images, so that the second calibration points in the distorted calibration image can be accurately identified based on the divided distorted local images.
In one embodiment, performing numerical fitting on the distortion relationship to be fitted according to the multiple calibration point coordinate pairs to determine the target value of the distortion coefficient in the distortion relationship to be fitted includes: obtaining a preset distortion coefficient value increment model, the distortion coefficient value increment model being generated from the distortion coefficient in the initial distortion relationship; determining the predicted value of the distortion coefficient of the current round; obtaining the distortion coefficient prediction value increment of the current round through the distortion coefficient value increment model according to the multiple calibration point coordinate pairs and the predicted value of the distortion coefficient of the current round; when the prediction value increment does not meet the numerical convergence condition, superimposing the prediction value increment of the current round with the predicted value of the current round to obtain an updated predicted value of the distortion coefficient; taking the next round as the current round and the updated predicted value as the predicted value of the distortion coefficient of the current round, and returning to the step of obtaining the prediction value increment of the current round through the distortion coefficient value increment model according to the multiple calibration point coordinate pairs and the predicted value of the current round, continuing until the prediction value increment meets the numerical convergence condition; and taking the predicted value of the distortion coefficient of the last round as the value of the distortion coefficient in the distortion relationship to be fitted.

Specifically, the computer device can obtain a preset distortion coefficient value increment model, which is used to determine the change of the predicted value of the distortion coefficient between two consecutive iterations. The distortion coefficient increment model may specifically be a function, or a function model used to predict the change of the predicted value of the distortion coefficient between two consecutive iterations. The computer device determines the predicted value of the distortion coefficient of the current round, inputs it together with the multiple calibration point coordinate pairs into the distortion coefficient value increment model, and through this model outputs the distortion coefficient prediction value increment of the current round. The computer device judges whether the increment meets the preset numerical convergence condition; if not, it superimposes the increment with the predicted value of the current round to obtain an updated predicted value of the distortion coefficient, enters the next iteration with the updated predicted value as the predicted value of the current round, and continues to trigger the model to output the increment of the current round from the calibration point coordinate pairs and the current predicted value, until the output increment meets the numerical convergence condition.

When the distortion coefficient prediction value increment of the current round meets the numerical convergence condition, the computer device takes the predicted value of the distortion coefficient of the current round as the value of the distortion coefficient in the distortion relationship; this value is the distortion coefficient value involved when the extended reality device produces distortion.

In one embodiment, the computer device can obtain a preset convergence value, for example 1e-8, and compare the distortion coefficient prediction value increment of the current round with it: if the increment is less than or equal to the convergence value, the increment is determined to meet the numerical convergence condition; otherwise it does not.

In one embodiment, a preset value can be used as the predicted value of the distortion coefficient for the first round.
In one embodiment, the distortion coefficient value increment model is determined based on a residual model; the residual model is determined based on the distortion relationship to be fitted; the residual model characterizes the residual between the first coordinate change and the second coordinate change, the first coordinate change being the coordinate change before and after distortion determined from the predicted value of the distortion coefficient, and the second coordinate change being the coordinate change before and after distortion determined from the actual value of the distortion coefficient.

Specifically, the corresponding residual model can first be determined based on the distortion relationship to be fitted. The residual model is used to characterize the residual between the coordinate change before and after distortion determined from the predicted value of the distortion coefficient and that determined from the actual value of the distortion coefficient, so that the difference between the predicted value and the actual value of the distortion coefficient in the distortion relationship can be determined from this residual. The predicted value of the distortion coefficient is the fitted value produced by numerical fitting; the actual value of the distortion coefficient is the target of the numerical fitting.

Further, in order to reach the minimum of the least-squares optimization problem over the residual model, the residual model can be Taylor-expanded to obtain the distortion coefficient value increment model. In one embodiment, to reach this minimum, the partial derivatives of the residual model in the direction of the distortion coefficients can first be determined to obtain the Jacobian matrix model, and the distortion coefficient value increment model is then obtained through the Taylor expansion, the Gauss-Newton conditions and setting the first derivative to zero.
In one embodiment, the distortion coefficient value increment model can be determined by the following formulas:

(Δc1, Δc2)^T = (J(c1,c2)^T J(c1,c2))^(-1) · (-J(c1,c2)^T r(c1,c2)),

r(c1,c2) = (F(c1,c2), G(c1,c2))^T, with F(c1,c2) = x_ui·(1 + c1·r_ui² + c2·r_ui⁴) - x_di and G(c1,c2) = y_ui·(1 + c1·r_ui² + c2·r_ui⁴) - y_di,

J(c1,c2) = [[∂F/∂c1, ∂F/∂c2], [∂G/∂c1, ∂G/∂c2]] = [[x_ui·r_ui², x_ui·r_ui⁴], [y_ui·r_ui², y_ui·r_ui⁴]],

where (Δc1, Δc2)^T is the numerical increment model; Δc1 and Δc2 are the distortion coefficient prediction value increments; F(c1,c2) and G(c1,c2) form the residual model; (x_ui, y_ui) are the first calibration point coordinates in the i-th coordinate pair; (x_di, y_di) are the second calibration point coordinates in the i-th coordinate pair; r_ui is the non-distorted distance determined from (x_ui, y_ui); r_di is the distorted distance determined from (x_di, y_di); and c1 and c2 are the distortion coefficients to be determined. It is easy to understand that a model in the present application may be a calculation formula, and a matrix may be a specific value output from the calculation formula; for example, the residual matrix above may be a matrix calculation formula related to the variables c1 and c2, and the result computed through this formula is called the residual matrix. The i-th coordinate pair corresponds to the i-th calibration point pair.
In the above embodiment, when the distortion coefficient prediction value increment converges to nearly zero, it means that even if the iteration continues, the fitted predicted value of the distortion coefficient will remain nearly unchanged. At the same time, the coordinate residual of each iteration in this system is produced by the distortion coefficient of that iteration, so this system optimizes using the partial derivatives of the distortion coefficients (c1 and c2), with the residual model as the relational expressions (F and G). When the iteration residual of the distortion coefficient converges to a small range, the residual model has reached the minimum of the optimization objective, that is, the distortion coefficient value corresponding to the distortion relationship has been fitted.
In one embodiment, obtaining the distortion coefficient prediction value increment of the current round through the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round includes: obtaining the Hessian matrix of the current round through the Hessian matrix model in the distortion coefficient value increment model according to the multiple calibration point pairs; obtaining the iteration matrix of the current round through the iteration matrix model in the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round; and fusing the Hessian matrix of the current round with the iteration matrix of the current round to obtain the distortion coefficient prediction value increment of the current round.

Specifically, the distortion coefficient value increment model includes a Hessian matrix model and an iteration matrix model. The Hessian matrix model is a model for outputting the Hessian matrix, for example, a function model; the iteration matrix model is a model for outputting the iteration matrix, for example, a function model. In the current round, the computer device can obtain the Hessian matrix of the current round from the multiple calibration point pairs and the Hessian matrix model, and output the iteration matrix of the current round from the multiple calibration point pairs, the predicted value of the distortion coefficient of the current round and the iteration matrix model. The computer device fuses the iteration matrix and the Hessian matrix of the current round, for example, multiplying the inverse of the Hessian matrix by the iteration matrix, to obtain the distortion coefficient prediction value increment of the current round.

In one embodiment, the Hessian matrix describes the local curvature of a function. When the distortion coefficient value increment model is the one in the above embodiment, (J(c1,c2)^T J(c1,c2)) is the Hessian matrix model and (-J(c1,c2)^T r(c1,c2)) is the iteration matrix model. Thus, inputting the coordinate pairs corresponding to the multiple calibration point pairs into (J(c1,c2)^T J(c1,c2)) yields the specific Hessian matrix; inputting the coordinate pairs and the predicted value of the distortion coefficient of the current round into (-J(c1,c2)^T r(c1,c2)) yields the specific iteration matrix; and multiplying the inverse of the Hessian matrix by the iteration matrix yields the distortion coefficient prediction value increment of the current round.

In the above embodiment, by determining the Hessian matrix and the iteration matrix, the change between the predicted values of the distortion coefficient in two consecutive iterations can be predicted based on them, so that the value of the distortion coefficient can subsequently be determined from this change.
In one embodiment, obtaining the Hessian matrix of the current round through the Hessian matrix model in the increment model according to the multiple pairs of calibration point coordinate pairs includes: for each of the multiple calibration point pairs, obtaining, from the coordinate pair corresponding to the targeted calibration point pair and through the Jacobian matrix model in the Hessian matrix model, the Jacobian matrix corresponding to the targeted calibration point pair; fusing the Jacobian matrix with its transpose to obtain the fused Jacobian matrix corresponding to the targeted calibration point pair; and superimposing the fused Jacobian matrices corresponding to the multiple calibration point pairs to obtain the Hessian matrix of the current round.
Specifically, the Hessian matrix model may include a Jacobian matrix model. When the Hessian matrix of the current round needs to be determined, the computer device may input the coordinate pair of each calibration point pair into the Jacobian matrix model in the Hessian matrix model to obtain the Jacobian matrix corresponding to each calibration point pair. Further, for each of the multiple calibration point pairs, the computer device determines the transpose of the Jacobian matrix corresponding to the targeted pair, and fuses the Jacobian matrix and the Jacobian transpose that correspond to the same calibration point pair to obtain the fused Jacobian matrix of each pair; for example, the transpose of the Jacobian matrix corresponding to calibration point pair A is multiplied by that Jacobian matrix to obtain the fused Jacobian matrix corresponding to pair A. When the fused Jacobian matrix of every calibration point pair has been obtained, the computer device may superimpose the fused Jacobian matrices to obtain the Hessian matrix of the current round.
In one embodiment, the computer device may set an initial value for the Hessian matrix and traverse every calibration point pair. For the first traversed pair, the computer device may determine its Jacobian matrix and fuse that Jacobian matrix with its transpose to obtain the fused Jacobian matrix of the first traversed pair. The computer device superimposes the initial value of the Hessian matrix with this fused Jacobian matrix to obtain the superimposed Hessian matrix of the first traversed pair. The computer device then determines the fused Jacobian matrix of the next traversed pair and superimposes it onto the superimposed Hessian matrix of the first traversed pair to obtain the superimposed Hessian matrix of that pair, iterating in this way until the last calibration point pair has been traversed, and taking the superimposed Hessian matrix of the last traversed pair as the Hessian matrix of the current round.
In one embodiment, in the process of computing the Jacobian matrix corresponding to the i-th calibration point pair, assuming that the coordinate pair corresponding to the i-th pair is $[(x_{ui}, y_{ui}), (x_{di}, y_{di})]$, the computer device determines the undistorted distance $r_{ui}$ corresponding to the first calibration point coordinates $(x_{ui}, y_{ui})$ and the distorted distance $r_{di}$ corresponding to the second calibration point coordinates $(x_{di}, y_{di})$, and inputs $(x_{ui}, y_{ui})$, $r_{ui}$, $(x_{di}, y_{di})$ and $r_{di}$ into the Jacobian matrix model to obtain the Jacobian matrix $J_i$ of the i-th coordinate pair $[(x_{ui}, y_{ui}), (x_{di}, y_{di})]$. The Jacobian matrix corresponding to the i-th coordinate pair is also the Jacobian matrix corresponding to the i-th calibration point pair.
In the above embodiment, by determining the fused Jacobian matrix of each calibration point coordinate pair, the Hessian matrix of the current round can be obtained quickly from those fused Jacobian matrices.
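The superimposition just described might look like the following sketch, reusing the hypothetical `jacobian` helper from the earlier example.

```python
import numpy as np

def hessian(pairs):
    """Superimpose the fused Jacobian J_i^T J_i over all calibration point pairs."""
    H = np.zeros((2, 2))          # initial value of the Hessian matrix
    for pair in pairs:            # traverse every calibration point pair
        J = jacobian(pair)        # Jacobian of the traversed pair
        H += J.T @ J              # fused Jacobian matrix, superimposed
    return H
```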
In one embodiment, obtaining the iteration matrix of the current round through the iteration matrix model in the increment model according to the multiple pairs of calibration point coordinate pairs and the predicted values of the current round includes: for each of the multiple calibration point pairs, determining the Jacobian matrix corresponding to the targeted calibration point pair; obtaining, from the targeted calibration point pair and the predicted values of the current round and through the residual model in the iteration matrix model, the residual matrix corresponding to the targeted calibration point pair; fusing the transpose of the Jacobian matrix corresponding to the targeted pair with the residual matrix corresponding to the targeted pair to obtain the fused iteration matrix corresponding to the targeted pair; and superimposing the fused iteration matrices corresponding to the multiple pairs to obtain the iteration matrix of the current round.
Specifically, the iteration matrix model may include a Jacobian matrix model and a residual model. When the iteration matrix of the current round needs to be determined, the computer device may input each calibration point coordinate pair and the predicted values of the current round into the Jacobian matrix model in the iteration matrix model to obtain the Jacobian matrix corresponding to each coordinate pair. It is easy to understand that the computer device may also reuse the Jacobian matrices generated for the coordinate pairs when computing the Hessian matrix.
Further, for each of the multiple calibration point coordinate pairs, calling the targeted pair the current calibration point pair, the computer device inputs the coordinate pair corresponding to the current pair and the predicted values of the current round into the residual model in the iteration matrix model to obtain the residual matrix output by the residual model for the current pair. The computer device determines the transpose of the Jacobian matrix corresponding to the current pair and multiplies that transpose by the residual matrix of the current pair to obtain the fused iteration matrix corresponding to the current pair. When the fused iteration matrix of every calibration point pair has been obtained, the computer device superimposes the fused iteration matrices to obtain the iteration matrix of the current round.
In one embodiment, the computer device may set an initial value for the iteration matrix and traverse every calibration point coordinate pair. For the first traversed pair, the computer device may determine its Jacobian matrix and residual matrix and fuse them to obtain the fused iteration matrix of the first traversed pair. The computer device superimposes the initial value of the iteration matrix with this fused iteration matrix to obtain the superimposed iteration matrix of the first traversed pair. The computer device then determines the fused iteration matrix of the next traversed pair and superimposes it onto the superimposed iteration matrix of the first traversed pair to obtain the superimposed iteration matrix of that pair, iterating in this way until the last calibration point pair has been traversed, and taking the superimposed iteration matrix of the last traversed pair as the iteration matrix of the current round.
In one embodiment, in the process of computing the residual matrix corresponding to the i-th coordinate pair, assuming the i-th coordinate pair is $[(x_{ui}, y_{ui}), (x_{di}, y_{di})]$, the computer device determines the undistorted distance $r_{ui}$ corresponding to the first calibration point coordinates $(x_{ui}, y_{ui})$ and the distorted distance $r_{di}$ corresponding to the second calibration point coordinates $(x_{di}, y_{di})$, and inputs $(x_{ui}, y_{ui})$, $r_{ui}$, $(x_{di}, y_{di})$, $r_{di}$ and the predicted values $(c_{1j}, c_{2j})$ of the current round into the residual model to obtain the residual matrix $r_i$ of the i-th calibration point coordinate pair $[(x_{ui}, y_{ui}), (x_{di}, y_{di})]$. The residual matrix corresponding to the i-th coordinate pair is also the residual matrix corresponding to the i-th calibration point pair.
In the above embodiment, by determining the fused iteration matrix of each calibration point pair, the iteration matrix of the current round can be obtained quickly from those fused iteration matrices.
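Continuing the same illustrative sketch, the iteration matrix and its fusion with the Hessian matrix into the per-round increment could be assembled as follows; `residual`, `jacobian` and `hessian` are the hypothetical helpers from the previous examples, and this completes the `increment` helper assumed in the iteration-loop sketch above.

```python
import numpy as np

def iteration_matrix(pairs, c):
    """Superimpose the fused iteration matrix -J_i^T r_i over all pairs."""
    g = np.zeros(2)               # initial value of the iteration matrix
    for pair in pairs:
        J = jacobian(pair)        # may reuse the Jacobian from the Hessian step
        r = residual(pair, c)     # residual matrix of this pair
        g += -J.T @ r             # fused iteration matrix, superimposed
    return g

def increment(pairs, c):
    """Fuse Hessian and iteration matrix: dc = H^{-1} (-J^T r)."""
    return np.linalg.solve(hessian(pairs), iteration_matrix(pairs, c))
```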
In one embodiment, the Hessian matrix model is generated from the Jacobian matrix model, the Jacobian matrix model characterizing the partial derivatives of the residual model in the directions of the distortion coefficients; the iteration matrix is generated from the Jacobian matrix model and the residual model; and the residual model characterizes the residual between the pre- and post-distortion coordinate change determined from the predicted values of the distortion coefficients and the pre- and post-distortion coordinate change determined from the actual values of the distortion coefficients.
Specifically, in the numerical fitting process, the goal of the fitting is for the residual between the fitted predicted values of the distortion coefficients and their actual values to be as close to 0 as possible. Based on this goal, a residual matrix can be generated that characterizes the difference between the pre- and post-distortion coordinate change determined from the predicted values and the pre- and post-distortion coordinate change determined from the actual values. During numerical fitting, when the residual output by the residual matrix approaches zero, the fitted predicted values of the distortion coefficients can be regarded as approximating their actual values, and the fitting goal is reached. The numerical fitting goal of this application therefore becomes making the least-squares optimization problem over this residual reach its minimum. In solving that problem, the Hessian matrix and the Jacobian matrix generated from the residual model are used to determine the increment of the predicted distortion coefficient values in each iteration, so that the final convergence judgment on the increment can be completed; when the increment is determined to have converged, the residual output by the residual matrix is determined to be close to zero, and the predicted values of the distortion coefficients obtained at that iteration are the target values.
In one embodiment, the above method further includes: combining the values of the distortion coefficients with the distortion relation to be fitted to obtain the distortion relation; obtaining an image to be displayed, and performing anti-distortion processing on each pixel of the image to be displayed according to the distortion relation to determine the distortion-correction position corresponding to each pixel; moving each pixel of the image to be displayed to its corresponding distortion-correction position to obtain an anti-distortion image; and triggering the extended reality device to display the anti-distortion image.
Specifically, when the distortion coefficients of the extended reality device have been calibrated, that is, when the distortion relation with determined coefficient values has been obtained, the computer device may obtain the image to be displayed and perform anti-distortion processing on it according to the distortion relation to obtain the anti-distortion image. The computer device inputs the anti-distortion image into the extended reality device and triggers the extended reality device to display it, so that the human eye sees a normal, undistorted image through the extended reality device. It is easy to understand that the computer device may also trigger the extended reality device itself to perform the anti-distortion processing on the image to be displayed according to the calibrated distortion relation to obtain the anti-distortion image.
In one embodiment, the distortion relation is specifically a function. For the pixel coordinates corresponding to each pixel of the image to be displayed, the computer device substitutes the pixel coordinates of the current pixel into the inverse function of the distortion relation to obtain the distortion-correction position of the current pixel output by the inverse function. The computer device combines the distortion-correction positions corresponding to all pixels to obtain the anti-distortion image.
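As a rough illustration under the two-coefficient Brown model used elsewhere in this application, such a pre-warp could be sampled with an inverse mapping as in the following sketch; normalizing to the image center and using `cv2.remap` are assumptions of this example, not details fixed by the method.

```python
import cv2
import numpy as np

def prewarp(image, c1, c2):
    """Build an anti-distortion image: each output pixel samples the source
    at its forward-distorted position, so the optics undo the warp."""
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    ys, xs = np.indices((h, w), dtype=np.float32)
    xn, yn = (xs - cx) / cx, (ys - cy) / cy     # normalized coordinates
    r2 = xn * xn + yn * yn
    k = 1.0 + c1 * r2 + c2 * r2 * r2            # Brown radial factor
    map_x = (xn * k * cx + cx).astype(np.float32)
    map_y = (yn * k * cy + cy).astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
```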
In one embodiment, referring to FIG. 7, when the display of the extended reality device displays the anti-distortion image, the light emitted by the display passes through the optical machine lens of the extended reality device, and what is presented to the human eye is a normally displayed, undistorted image. FIG. 7 is a schematic diagram of an extended reality device displaying an image in one embodiment.
In the above embodiment, by calibrating the distortion coefficients, the target distortion relation can be determined from the calibrated coefficients, and a normally displayed, undistorted image can be output based on the target distortion relation, greatly improving the user experience.
In one embodiment, referring to FIG. 8: in S801, the computer device performs corner detection on the checkerboard pictures captured before and after distortion to obtain multiple pairs of calibration point coordinate pairs. In S802, the computer device obtains the distortion coefficient increment model, enters the iteration, and obtains the increment of the predicted distortion coefficient values through the increment model and the multiple pairs of calibration point coordinate pairs. In S803, the computer device determines whether the increment has converged; if it has, the iteration ends. In S804, if it has not converged, iteration continues until the increment converges, and the predicted values of the distortion coefficients at convergence are taken as the target values of the distortion coefficients. FIG. 8 is a schematic overall flowchart of distortion coefficient calibration in one embodiment.
In one specific embodiment, referring to FIG. 9, the distortion coefficient calibration method for an extended reality device includes:
S902: the computer device obtains a standard calibration image and a distorted calibration image; the distorted calibration image is formed by capturing, through the optical machine lens of the extended reality device, the picture displayed when the display of the extended reality device displays the standard calibration image as a picture.
S904: the computer device obtains a sliding window and triggers the sliding window to slide over the standard calibration image according to a preset movement step to obtain the current standard local image framed by the sliding window, and determines the first overall gray value of the current standard local image.
S906: the computer device triggers the sliding window to move multiple times in arbitrary directions to obtain multiple moved standard local images corresponding to the current standard local image, and determines the second overall gray value of each moved standard local image.
S908: when the difference between each second overall gray value and the first overall gray value is greater than or equal to a preset difference threshold, the computer device takes the center of the current standard local image as a first calibration point.
S910: the computer device obtains a sliding window and triggers the sliding window to slide over the distorted calibration image according to a preset movement step to obtain the current distorted local image framed by the sliding window, and determines the third overall gray value of the current distorted local image.
S912: the computer device triggers the sliding window to move multiple times in arbitrary directions to obtain multiple moved distorted local images corresponding to the current distorted local image, and determines the fourth overall gray value of each moved distorted local image.
S914: when the difference between each fourth overall gray value and the third overall gray value is greater than or equal to the preset difference threshold, the computer device takes the center of the current distorted local image as a second calibration point.
S916: the computer device matches the multiple first calibration point coordinates with the multiple second calibration point coordinates according to the positional relationships among the first calibration point coordinates and the positional relationships among the second calibration point coordinates, to obtain multiple pairs of calibration point coordinate pairs.
S918: for each of the multiple pairs of calibration point coordinate pairs, the computer device obtains, from the current calibration point coordinate pair and through the Jacobian matrix model in the Hessian matrix model, the current Jacobian matrix corresponding to the current calibration point coordinate pair.
S920: the computer device fuses the current Jacobian matrix with its transpose to obtain the fused Jacobian matrix corresponding to the current calibration point coordinate pair, and superimposes the fused Jacobian matrices corresponding to the multiple pairs of calibration point coordinate pairs to obtain the Hessian matrix of the current round.
S922: for each of the multiple pairs of calibration point coordinate pairs, the computer device obtains, from the current calibration point coordinate pair and through the Jacobian matrix model in the iteration matrix model, the current Jacobian matrix corresponding to the current calibration point coordinate pair.
S924: for each of the multiple pairs of calibration point coordinate pairs, the computer device obtains, from the current calibration point coordinate pair and the predicted values of the distortion coefficients of the current round and through the residual model in the iteration matrix model, the current residual matrix corresponding to the current calibration point coordinate pair.
S926: the computer device fuses the transpose of the current Jacobian matrix with the current residual matrix to obtain the fused iteration matrix corresponding to the current calibration point coordinate pair, and superimposes the fused iteration matrices corresponding to the multiple pairs of calibration point coordinate pairs to obtain the iteration matrix of the current round.
S928: the computer device fuses the Hessian matrix of the current round with the iteration matrix of the current round to obtain the increment of the predicted distortion coefficient values of the current round; when the increment does not satisfy the numerical convergence condition, the computer device superimposes the increment of the current round onto the predicted values of the current round to obtain updated predicted values of the distortion coefficients.
S930: taking the next round as the current round, the computer device takes the updated predicted values as the predicted values of the distortion coefficients of the current round and returns to step S922 to continue, until the increment satisfies the numerical convergence condition, and takes the predicted values of the distortion coefficients of the last round as the target predicted values corresponding to the distortion coefficients in the initial distortion relation.
It should be understood that, although the steps in the flowcharts involved in the above embodiments are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts involved in the above embodiments may include multiple steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; the execution order of these steps or stages is not necessarily sequential either, and they may be executed in turn or alternately with other steps or with at least part of the steps or stages of other steps.
This application further provides an application scenario to which the above distortion coefficient calibration method for an extended reality device is applied. Specifically, the method is applied in this scenario as follows:
When the distortion coefficients of an XR device need to be calibrated, the distortion relation to be fitted must first be obtained. Taking the classic Brown model as an example, this model expresses the distorted distance $r_d$ in terms of the undistorted distance $r_u$ and coefficients $c_n$, specifically:

$$r_d = r_u \left(1 + c_1 r_u^{2} + c_2 r_u^{4}\right)$$

It can be seen that there are two unknown coefficients $c_1$ and $c_2$ above, and the purpose of calibration in this system is to fit the concrete values of these two coefficients. Specifically, during calibration the system first inputs a standardized checkerboard picture, then feeds the picture directly into the optical machine display and obtains the distorted checkerboard picture (with a white background) through a high-definition camera; finally, corner detection (including but not limited to OpenCV corner detection) yields the checkerboard corner coordinate positions before and after distortion (also called multiple pairs of calibration point coordinate pairs). At this point all the preparation required for the calibration process is complete; the corresponding checkerboard corner coordinates before and after distortion (the multiple pairs of calibration point coordinate pairs) are then used as the input of the numerical fitting to fit the coefficients in the distortion relation and obtain the complete forward distortion relation. The numerical fitting process is as follows:
In calibration, the goal of numerical fitting is that the residual between the coordinate changes caused by the fitted distortion coefficients $(c_1, c_2)$ and those caused by the actual distortion coefficients $(c_1', c_2')$ is as close to zero as possible (the Brown model is taken as an example here), so the resulting residual relations are:

$$F(c_1, c_2) = x_u \left(1 + c_1 r_u^{2} + c_2 r_u^{4}\right) - x_d, \qquad G(c_1, c_2) = y_u \left(1 + c_1 r_u^{2} + c_2 r_u^{4}\right) - y_d$$

For the least-squares optimization problem over this residual to reach its minimum (i.e., near 0), the partial derivatives in the $c_1$ and $c_2$ directions must be computed to form the Jacobian matrix:

$$J(c_1, c_2) = \begin{pmatrix} \partial F / \partial c_1 & \partial F / \partial c_2 \\ \partial G / \partial c_1 & \partial G / \partial c_2 \end{pmatrix} = \begin{pmatrix} x_u r_u^{2} & x_u r_u^{4} \\ y_u r_u^{2} & y_u r_u^{4} \end{pmatrix}$$

Then, through the Taylor expansion, the Gauss-Newton conditions, and setting the first derivative to zero, the following variation in the two coefficient dimensions is obtained for each iteration:

$$\begin{pmatrix} \Delta c_1 \\ \Delta c_2 \end{pmatrix} = \left(J^{T} J\right)^{-1} \left(-J^{T} r(c_1, c_2)\right)$$

where the relation for $r(c_1, c_2)$ is:

$$r(c_1, c_2) = \begin{pmatrix} F(c_1, c_2) \\ G(c_1, c_2) \end{pmatrix}$$

At this point, the increments $\Delta c_1$ and $\Delta c_2$ in the $c_1$ and $c_2$ dimensions of each iteration can be obtained. $\Delta c_1$ and $\Delta c_2$ are then computed through repeated iteration, and it is judged whether their values have converged to a very small range (set to 1e-8 in this system); once they converge, the coefficients relative to all pre- and post-distortion checkerboard corners are obtained, i.e., the calibration function is completed. The resulting distortion coefficients can then be used directly in subsequent distortion-correction processing; only after the distorted image has been accurately corrected can the related follow-up processing (including but not limited to image display and image-quality enhancement) proceed smoothly. The specific meanings of the parameters in the above formulas can be found in the foregoing embodiments.
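For illustration only, this scenario could be prototyped roughly as in the following sketch; the 9×6 board size, the use of `cv2.findChessboardCorners`, the normalization to the image center, and the reuse of the hypothetical `fit_coefficients` helper from the earlier sketch are all assumptions of the example rather than details fixed by the application.

```python
import cv2
import numpy as np

def corners(path, board=(9, 6)):
    """Detect inner chessboard corners and normalize them to the image center."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, pts = cv2.findChessboardCorners(img, board)
    assert ok, f"no chessboard found in {path}"
    pts = pts.reshape(-1, 2)
    h, w = img.shape
    return (pts - (w / 2.0, h / 2.0)) / (w / 2.0, h / 2.0)

# Pair corresponding corners of the standard and captured (distorted) pictures,
# then fit (c1, c2) with the Gauss-Newton loop sketched earlier.
std = corners("standard_checkerboard.png")
dist = corners("captured_checkerboard.png")
pairs = list(zip(map(tuple, std), map(tuple, dist)))
c1, c2 = fit_coefficients(pairs)
print("fitted distortion coefficients:", c1, c2)
```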
This application further provides another application scenario to which the above distortion coefficient calibration method for an extended reality device is applied. Specifically, the method is applied in this scenario as follows:
A user can enter a virtual reality world through an extended reality device; for example, the user can play games through an extended reality device (XR device) to enter an interactive virtual reality game scene. Before the user plays games through the XR device, the distortion coefficients can be calibrated in advance through the distortion coefficient calibration method proposed in this application, so that the XR device can adjust the virtual reality game scene to be presented according to the calibrated distortion coefficients before presenting it; in this way, the virtual reality game scene the user sees is an undistorted scene. By presenting an undistorted virtual reality game scene, the user can enjoy an immersive gaming experience.
The above application scenarios are merely illustrative; it can be understood that the application of the distortion coefficient calibration method for an extended reality device provided by the embodiments of this application is not limited to the above scenarios. For example, before a user engages in entertainment or navigation through an extended reality device, the distortion coefficients can also be calibrated in advance through the above calibration method. Illustratively, before a user watches a VR movie (also called a virtual reality movie) through an extended reality device, the distortion coefficients can be calibrated in advance, so that the extended reality device can present undistorted movie pictures to the user based on the calibrated coefficients. Alternatively, before a user navigates through an extended reality device, the distortion coefficients can also be calibrated in advance, so that the extended reality device can present to the user an undistorted road picture on which virtual and real scenes are superimposed.
Based on the same inventive concept, the embodiments of this application further provide a distortion coefficient calibration apparatus for an extended reality device for implementing the above distortion coefficient calibration method for an extended reality device. The implementation solution provided by this apparatus is similar to that recorded in the above method, so for the specific limitations in one or more apparatus embodiments provided below, reference may be made to the limitations on the distortion coefficient calibration method for an extended reality device above, which are not repeated here.
In one embodiment, as shown in FIG. 10, a distortion coefficient calibration apparatus for an extended reality device is provided, including an image obtaining module 1002, a calibration point pair determining module 1004 and a numerical fitting module 1006, wherein:
The image obtaining module 1002 is configured to obtain a standard calibration image and a distorted calibration image, the distorted calibration image being formed by capturing, through the optical machine lens of the extended reality device, the picture displayed when the display of the extended reality device displays the standard calibration image as a picture.
The calibration point pair determining module 1004 is configured to perform calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple pairs of calibration point coordinate pairs, each calibration point pair including a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, the second calibration point being the corresponding calibration point obtained by capturing the first calibration point through the optical machine lens of the extended reality device.
The numerical fitting module 1006 is configured to obtain a distortion relation to be fitted, and to perform numerical fitting on the distortion relation to be fitted according to the multiple pairs of calibration points to determine the values of the distortion coefficients in the distortion relation to be fitted, obtaining a distortion relation used to characterize the conversion relationship of calibration points between the standard calibration image and the distorted calibration image.
In one embodiment, the distorted calibration image is captured by an image capture device; during the capture of the distorted calibration image by the image capture device, the optical machine lens of the extended reality device is located between the image capture device and the display of the extended reality device, and the optical center of the optical machine lens is aligned with the center of the display.
In one embodiment, referring to FIG. 11, the calibration point pair determining module 1004 is further configured to: perform calibration point detection on the standard calibration image to obtain multiple first calibration points, and determine the coordinates of each first calibration point in the standard calibration image to obtain the first calibration point coordinates corresponding to each of the multiple first calibration points; perform calibration point detection on the distorted calibration image to obtain multiple second calibration points, and determine the coordinates of each second calibration point in the distorted calibration image to obtain the second calibration point coordinates corresponding to each of the multiple second calibration points; determine the positional relationships among the first calibration points according to their first calibration point coordinates; determine the positional relationships among the second calibration points according to their second calibration point coordinates; and match the multiple first calibration points with the multiple second calibration points according to the positional relationships among the first calibration points and those among the second calibration points, to obtain multiple pairs of calibration points.
In one embodiment, the calibration point pair determining module 1004 further includes a first calibration point determining module 1041, configured to: obtain a sliding window and trigger the sliding window to slide over the standard calibration image according to a preset movement step to obtain the standard local image framed by the sliding window; determine the first overall gray value of the standard local image according to the gray values of the pixels in the standard local image; trigger the sliding window to move multiple times in arbitrary directions to obtain multiple moved standard local images corresponding to the standard local image; determine the second overall gray value of each moved standard local image according to the gray values of the pixels in each moved standard local image; and extract a first calibration point from the standard local image according to the difference between each second overall gray value and the first overall gray value.
In one embodiment, the first calibration point determining module 1041 is further configured to: subtract the first overall gray value from each second overall gray value to obtain the gray difference corresponding to each second overall gray value; obtain a preset difference threshold, and filter out, from the absolute values of the gray differences, the absolute values greater than or equal to the preset difference threshold; determine the number of filtered-out absolute values; and obtain a preset number threshold, and take the center of the standard local image as a first calibration point when the number is greater than or equal to the preset number threshold.
In one embodiment, the calibration point pair determining module 1004 further includes a second calibration point determining module 1042, configured to: obtain a sliding window and trigger the sliding window to slide over the distorted calibration image according to a preset movement step to obtain the distorted local image framed by the sliding window; determine the third overall gray value of the distorted local image according to the gray values of the pixels in the distorted local image; trigger the sliding window to move multiple times in arbitrary directions to obtain multiple moved distorted local images corresponding to the distorted local image; determine the fourth overall gray value of each moved distorted local image according to the gray values of the pixels in each moved distorted local image; and extract a second calibration point from the distorted local image according to the difference between each fourth overall gray value and the third overall gray value.
In one embodiment, the numerical fitting module 1006 is further configured to: obtain a preset distortion coefficient increment model, the increment model being a model for determining the change in the predicted values of the distortion coefficients between two consecutive iterations; determine the predicted values of the distortion coefficients for the current round; obtain the increment of the predicted values for the current round through the increment model according to the multiple pairs of calibration points and the predicted values of the current round; obtain a numerical convergence condition, and when the increment does not satisfy the condition, superimpose the increment of the current round onto the predicted values of the current round to obtain updated predicted values of the distortion coefficients; take the next round as the current round, take the updated predicted values as the predicted values of the current round, and return to the step of obtaining the increment of the current round through the increment model according to the multiple pairs of calibration point coordinate pairs and the predicted values of the current round, until the increment satisfies the convergence condition; and take the predicted values of the distortion coefficients of the last round as the values corresponding to the distortion coefficients in the distortion relation to be fitted.
In one embodiment, the distortion coefficient increment model is determined based on a residual model; the residual model is determined based on the distortion relation to be fitted; the residual model characterizes the residual between a first coordinate change and a second coordinate change, the first coordinate change being the pre- and post-distortion coordinate change determined from the predicted values of the distortion coefficients, and the second coordinate change being the pre- and post-distortion coordinate change determined from the actual values of the distortion coefficients.
In one embodiment, the numerical fitting module 1006 is further configured to: obtain the Hessian matrix of the current round through the Hessian matrix model in the increment model according to the multiple pairs of calibration points; obtain the iteration matrix of the current round through the iteration matrix model in the increment model according to the multiple pairs of calibration points and the predicted values of the current round; and fuse the Hessian matrix of the current round with the iteration matrix of the current round to obtain the increment of the predicted values for the current round.
In one embodiment, the numerical fitting module 1006 further includes a Hessian matrix determining module 1061, configured to: for each of the multiple pairs of calibration point coordinate pairs, determine the coordinates of the first calibration point of the targeted pair belonging to the standard calibration image and the coordinates of the second calibration point of the targeted pair belonging to the distorted calibration image; determine, from the coordinates of the first calibration point belonging to the standard calibration image and the coordinates of the second calibration point belonging to the distorted calibration image, the coordinate pair corresponding to the targeted calibration point pair; obtain, from the coordinate pair and through the Jacobian matrix model in the Hessian matrix model, the Jacobian matrix corresponding to the targeted pair; fuse the Jacobian matrix with its transpose to obtain the fused Jacobian matrix corresponding to the targeted pair; and superimpose the fused Jacobian matrices corresponding to the multiple pairs to obtain the Hessian matrix of the current round.
In one embodiment, the numerical fitting module 1006 further includes an iteration matrix determining module 1062, configured to: obtain, from the targeted calibration point pair and the predicted values of the current round and through the residual model in the iteration matrix model, the residual matrix corresponding to the targeted pair; fuse the transpose of the Jacobian matrix corresponding to the targeted pair with the residual matrix corresponding to the targeted pair to obtain the fused iteration matrix corresponding to the targeted pair; and superimpose the fused iteration matrices corresponding to the multiple pairs to obtain the iteration matrix of the current round.
In one embodiment, the Hessian matrix model is generated from the Jacobian matrix model, the Jacobian matrix model characterizing the partial derivatives of the residual model in the directions of the distortion coefficients; the iteration matrix is generated from the Jacobian matrix model and the residual model; and the residual model characterizes the residual between the pre- and post-distortion coordinate change determined from the predicted values of the distortion coefficients and that determined from the actual values of the distortion coefficients.
In one embodiment, the distortion coefficient calibration apparatus 1000 for an extended reality device further includes an anti-distortion module, configured to: combine the values of the distortion coefficients with the distortion relation to be fitted to obtain the distortion relation; obtain an image to be displayed, and perform anti-distortion processing on each pixel of the image to be displayed according to the distortion relation to determine the distortion-correction position corresponding to each pixel of the image to be displayed; move each pixel to its corresponding distortion-correction position to obtain an anti-distortion image; and trigger the extended reality device to display the anti-distortion image.
Each module in the above distortion coefficient calibration apparatus for an extended reality device may be implemented wholly or partly in software, hardware or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided; the computer device may be a server, and its internal structure diagram may be as shown in FIG. 12. The computer device includes a processor, a memory, an input/output interface (I/O for short) and a communication interface, where the processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store distortion coefficient calibration data of the extended reality device. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for communicating with an external terminal through a network connection. When the computer program is executed by the processor, a distortion coefficient calibration method for an extended reality device is implemented.
In one embodiment, a computer device is provided; the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 13. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit and an input apparatus, where the processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input apparatus are connected to the system bus through the input/output interface. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode can be implemented through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. When the computer program is executed by the processor, a distortion coefficient calibration method for an extended reality device is implemented. The display unit of the computer device is used to form a visually visible picture and may be a display screen, a projection apparatus or an extended reality imaging apparatus; the display screen may be a liquid crystal display or an electronic ink display; the input apparatus of the computer device may be a touch layer covering the display screen, a key, trackball or touchpad provided on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art can understand that the structures shown in FIG. 12 and FIG. 13 are merely block diagrams of parts of the structures related to the solution of this application and do not constitute a limitation on the computer devices to which the solution of this application is applied; a specific computer device may include more or fewer components than shown in the figures, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, including a memory and a processor, the memory storing a computer program, and the processor implementing the steps in the above method embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps in the above method embodiments.
In one embodiment, a computer program product or computer program is provided, the computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps in the above method embodiments.
It should be noted that the user information (including but not limited to user equipment information and user personal information) and data (including but not limited to data for analysis, stored data and displayed data) involved in this application are all information and data authorized by the user or fully authorized by all parties, and the collection, use and processing of the relevant data must comply with the relevant laws, regulations and standards of the relevant countries and regions.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program may include the processes of the embodiments of the above methods. Any reference to memory, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc. Volatile memory may include random access memory (RAM) or external cache memory, etc. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases involved in the embodiments provided in this application may include at least one of relational and non-relational databases; non-relational databases may include blockchain-based distributed databases and the like, without being limited thereto. The processors involved in the embodiments provided in this application may be general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, quantum-computing-based data processing logic devices and the like, without being limited thereto.
The technical features of the above embodiments can be combined arbitrarily. To keep the description concise, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combinations of these technical features, they should all be considered within the scope recorded in this specification.
The above embodiments express only several implementations of this application, and their descriptions are relatively specific and detailed, but they should not therefore be understood as limiting the patent scope of this application. It should be pointed out that, for those of ordinary skill in the art, several variations and improvements can be made without departing from the concept of this application, all of which fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the appended claims.

Claims (20)

  1. A distortion coefficient calibration method for an extended reality device, executed by a computer device, the method comprising:
    obtaining a standard calibration image and a distorted calibration image, the distorted calibration image being formed by capturing, through an optical machine lens of an extended reality device, the picture displayed when a display of the extended reality device displays the standard calibration image as a picture;
    performing calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple pairs of calibration points, each calibration point pair comprising a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, the second calibration point being the corresponding calibration point obtained by capturing the first calibration point through the optical machine lens of the extended reality device;
    obtaining a distortion relation to be fitted, the distortion relation to be fitted comprising distortion coefficients to be determined; and
    performing numerical fitting on the distortion relation to be fitted according to the multiple pairs of calibration points to determine values of the distortion coefficients in the distortion relation to be fitted, obtaining a distortion relation used to characterize a conversion relationship of calibration points between the standard calibration image and the distorted calibration image.
  2. The method according to claim 1, wherein the distorted calibration image is captured by an image capture device; and
    during the capture of the distorted calibration image by the image capture device, the optical machine lens of the extended reality device is located between the image capture device and the display of the extended reality device, and an optical center of the optical machine lens is aligned with a center of the display.
  3. The method according to any one of claims 1 to 2, wherein the performing calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple pairs of calibration points comprises:
    performing calibration point detection on the standard calibration image to obtain multiple first calibration points, and determining coordinates of each first calibration point in the standard calibration image to obtain first calibration point coordinates corresponding to each of the multiple first calibration points;
    performing calibration point detection on the distorted calibration image to obtain multiple second calibration points, and determining coordinates of each second calibration point in the distorted calibration image to obtain second calibration point coordinates corresponding to each of the multiple second calibration points;
    determining positional relationships among the first calibration points according to the first calibration point coordinates corresponding to each of the multiple first calibration points;
    determining positional relationships among the second calibration points according to the second calibration point coordinates corresponding to each of the multiple second calibration points; and
    matching the multiple first calibration points with the multiple second calibration points according to the positional relationships among the first calibration points and the positional relationships among the second calibration points, to obtain multiple pairs of calibration points.
  4. The method according to claim 3, wherein the performing calibration point detection on the standard calibration image to obtain multiple first calibration points comprises:
    obtaining a sliding window, and triggering the sliding window to slide over the standard calibration image according to a preset movement step to obtain a standard local image framed by the sliding window;
    determining a first overall gray value of the standard local image according to gray values of pixels in the standard local image;
    triggering the sliding window to move multiple times in arbitrary directions to obtain multiple moved standard local images corresponding to the standard local image;
    determining a second overall gray value of each moved standard local image according to gray values of pixels in each moved standard local image; and
    extracting a first calibration point from the standard local image according to differences between each second overall gray value and the first overall gray value.
  5. The method according to claim 4, wherein the extracting a first calibration point from the standard local image according to differences between each second overall gray value and the first overall gray value comprises:
    subtracting the first overall gray value from each second overall gray value to obtain a gray difference corresponding to each second overall gray value;
    obtaining a preset difference threshold, and filtering out, from absolute values of the gray differences, absolute values greater than or equal to the preset difference threshold;
    determining a number of filtered-out absolute values; and
    obtaining a preset number threshold, and taking a center of the standard local image as a first calibration point when the number is greater than or equal to the preset number threshold.
  6. The method according to claim 4, wherein the performing calibration point detection on the distorted calibration image to obtain multiple second calibration points comprises:
    obtaining a sliding window, and triggering the sliding window to slide over the distorted calibration image according to a preset movement step to obtain a distorted local image framed by the sliding window;
    determining a third overall gray value of the distorted local image according to gray values of pixels in the distorted local image;
    triggering the sliding window to move multiple times in arbitrary directions to obtain multiple moved distorted local images corresponding to the distorted local image;
    determining a fourth overall gray value of each moved distorted local image according to gray values of pixels in each moved distorted local image; and
    extracting a second calibration point from the distorted local image according to differences between each fourth overall gray value and the third overall gray value.
  7. The method according to claim 1, wherein the performing numerical fitting on the distortion relation to be fitted according to the multiple pairs of calibration points to determine values of the distortion coefficients in the distortion relation to be fitted comprises:
    obtaining a preset distortion coefficient increment model, the distortion coefficient increment model being a model for determining a change in predicted values of the distortion coefficients between two consecutive iterations;
    obtaining predicted values of the distortion coefficients for a current round;
    obtaining an increment of the predicted values of the distortion coefficients for the current round through the distortion coefficient increment model according to the multiple pairs of calibration points and the predicted values of the distortion coefficients for the current round;
    obtaining a numerical convergence condition, and when the increment does not satisfy the numerical convergence condition, superimposing the increment of the current round onto the predicted values of the distortion coefficients for the current round to obtain updated predicted values of the distortion coefficients;
    taking a next round as the current round, taking the updated predicted values as the predicted values of the distortion coefficients for the current round, and returning to the step of obtaining the increment of the predicted values of the distortion coefficients for the current round through the distortion coefficient increment model according to the multiple pairs of calibration point coordinate pairs and the predicted values of the distortion coefficients for the current round to continue, until the increment satisfies the numerical convergence condition; and
    taking the predicted values of the distortion coefficients of a last round as the values of the distortion coefficients in the distortion relation to be fitted.
  8. The method according to claim 7, wherein the distortion coefficient increment model is determined based on a residual model; the residual model is determined based on the distortion relation to be fitted; the residual model characterizes a residual between a first coordinate change and a second coordinate change; the first coordinate change is a pre- and post-distortion coordinate change determined from the predicted values of the distortion coefficients; and the second coordinate change is a pre- and post-distortion coordinate change determined from actual values of the distortion coefficients.
  9. The method according to any one of claims 7 to 8, wherein the obtaining an increment of the predicted values of the distortion coefficients for the current round through the distortion coefficient increment model according to the multiple pairs of calibration points and the predicted values of the distortion coefficients for the current round comprises:
    obtaining a Hessian matrix of the current round through a Hessian matrix model in the distortion coefficient increment model according to the multiple pairs of calibration points;
    obtaining an iteration matrix of the current round through an iteration matrix model in the distortion coefficient increment model according to the multiple pairs of calibration points and the predicted values of the distortion coefficients for the current round; and
    fusing the Hessian matrix of the current round with the iteration matrix of the current round to obtain the increment of the predicted values of the distortion coefficients for the current round.
  10. The method according to claim 9, wherein the obtaining a Hessian matrix of the current round through a Hessian matrix model in the distortion coefficient increment model according to the multiple pairs of calibration point coordinate pairs comprises:
    for each of the multiple pairs of calibration points, determining coordinates of the first calibration point of the targeted calibration point pair belonging to the standard calibration image, and determining coordinates of the second calibration point of the targeted calibration point pair belonging to the distorted calibration image;
    determining a coordinate pair corresponding to the targeted calibration point pair according to the coordinates of the first calibration point belonging to the standard calibration image and the coordinates of the second calibration point belonging to the distorted calibration image;
    obtaining, from the coordinate pair and through a Jacobian matrix model in the Hessian matrix model, a Jacobian matrix corresponding to the targeted calibration point pair;
    fusing the Jacobian matrix with a transpose of the Jacobian matrix to obtain a fused Jacobian matrix corresponding to the targeted calibration point pair; and
    superimposing the fused Jacobian matrices corresponding to the multiple pairs of calibration points to obtain the Hessian matrix of the current round.
  11. The method according to claim 10, wherein the obtaining an iteration matrix of the current round through an iteration matrix model in the distortion coefficient increment model according to the multiple pairs of calibration points and the predicted values of the distortion coefficients for the current round comprises:
    obtaining, from the targeted calibration point pair and the predicted values of the distortion coefficients for the current round and through a residual model in the iteration matrix model, a residual matrix corresponding to the targeted calibration point pair;
    fusing the transpose of the Jacobian matrix corresponding to the targeted calibration point pair with the residual matrix corresponding to the targeted calibration point pair to obtain a fused iteration matrix corresponding to the targeted calibration point pair; and
    superimposing the fused iteration matrices corresponding to the multiple pairs of calibration point coordinate pairs to obtain the iteration matrix of the current round.
  12. The method according to any one of claims 9 to 11, wherein the Hessian matrix model is generated from a Jacobian matrix model, the Jacobian matrix model characterizing partial derivatives of a residual model in directions of the distortion coefficients; the iteration matrix is generated from the Jacobian matrix model and the residual model; and the residual model characterizes a residual between a pre- and post-distortion coordinate change determined from the predicted values of the distortion coefficients and a pre- and post-distortion coordinate change determined from actual values of the distortion coefficients.
  13. The method according to claim 1, further comprising:
    combining the values of the distortion coefficients with the distortion relation to be fitted to obtain the distortion relation;
    obtaining an image to be displayed, and performing anti-distortion processing on each pixel of the image to be displayed according to the distortion relation to determine a distortion-correction position corresponding to each pixel;
    moving the pixels of the image to be displayed to the corresponding distortion-correction positions to obtain an anti-distortion image; and
    triggering the extended reality device to display the anti-distortion image.
  14. A distortion coefficient calibration apparatus for an extended reality device, the apparatus comprising:
    an image obtaining module, configured to obtain a standard calibration image and a distorted calibration image, the distorted calibration image being formed by capturing, through an optical machine lens of an extended reality device, the picture displayed when a display of the extended reality device displays the standard calibration image as a picture;
    a calibration point pair determining module, configured to perform calibration point detection based on the standard calibration image and the distorted calibration image to obtain multiple pairs of calibration points, each calibration point pair comprising a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distorted calibration image, the second calibration point being the corresponding calibration point obtained by capturing the first calibration point through the optical machine lens of the extended reality device; and
    a numerical fitting module, configured to obtain a distortion relation to be fitted, the distortion relation to be fitted comprising distortion coefficients to be determined, and to perform numerical fitting on the distortion relation to be fitted according to the multiple pairs of calibration point coordinate pairs to determine values of the distortion coefficients in the distortion relation to be fitted, obtaining a distortion relation used to characterize a conversion relationship of calibration points between the standard calibration image and the distorted calibration image.
  15. The apparatus according to claim 14, wherein the calibration point pair determining module is further configured to: perform calibration point detection on the standard calibration image to obtain multiple first calibration points, and determine coordinates of each first calibration point in the standard calibration image to obtain first calibration point coordinates corresponding to each of the multiple first calibration points; perform calibration point detection on the distorted calibration image to obtain multiple second calibration points, and determine coordinates of each second calibration point in the distorted calibration image to obtain second calibration point coordinates corresponding to each of the multiple second calibration points; determine positional relationships among the first calibration points according to the first calibration point coordinates corresponding to each of the multiple first calibration points; determine positional relationships among the second calibration points according to the second calibration point coordinates corresponding to each of the multiple second calibration points; and match the multiple first calibration points with the multiple second calibration points according to the positional relationships among the first calibration points and the positional relationships among the second calibration points, to obtain multiple pairs of calibration points.
  16. The apparatus according to claim 15, wherein the calibration point pair determining module is further configured to: obtain a sliding window and trigger the sliding window to slide over the standard calibration image according to a preset movement step to obtain a standard local image framed by the sliding window; determine a first overall gray value of the standard local image according to gray values of pixels in the standard local image; trigger the sliding window to move multiple times in arbitrary directions to obtain multiple moved standard local images corresponding to the standard local image; determine a second overall gray value of each moved standard local image according to gray values of pixels in each moved standard local image; and extract a first calibration point from the standard local image according to differences between each second overall gray value and the first overall gray value.
  17. The apparatus according to claim 16, wherein the calibration point pair determining module is further configured to: subtract the first overall gray value from each second overall gray value to obtain a gray difference corresponding to each second overall gray value; obtain a preset difference threshold, and filter out, from absolute values of the gray differences, absolute values greater than or equal to the preset difference threshold; determine a number of filtered-out absolute values; and obtain a preset number threshold, and take a center of the standard local image as a first calibration point when the number is greater than or equal to the preset number threshold.
  18. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 12 when executing the computer program.
  19. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 13.
  20. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 13.
PCT/CN2023/130102 2022-12-13 2023-11-07 Distortion coefficient calibration method and apparatus for extended reality device, and storage medium WO2024125159A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211593972.7 2022-12-13
CN202211593972.7A CN115587952B (zh) 2022-12-13 2022-12-13 Distortion coefficient calibration method and apparatus for extended reality device, and storage medium

Publications (1)

Publication Number Publication Date
WO2024125159A1 (zh)

Family

ID=84783407

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/130102 WO2024125159A1 (zh) 2022-12-13 2023-11-07 Distortion coefficient calibration method and apparatus for extended reality device, and storage medium

Country Status (2)

Country Link
CN (1) CN115587952B (zh)
WO (1) WO2024125159A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115587952B (zh) * 2022-12-13 2023-03-14 腾讯科技(深圳)有限公司 Distortion coefficient calibration method and apparatus for extended reality device, and storage medium
CN116433535B (zh) * 2023-06-12 2023-09-05 合肥埃科光电科技股份有限公司 Point coordinate de-distortion method, system and storage medium based on conic curve fitting
CN117111046B (zh) * 2023-10-25 2024-01-12 深圳市安思疆科技有限公司 Distortion correction method, system, device and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780391A * 2016-12-27 2017-05-31 哈尔滨工业大学 Distortion correction algorithm for the optical system of a full-view three-dimensional measuring instrument
CN108876749A * 2018-07-02 2018-11-23 南京汇川工业视觉技术开发有限公司 Robust lens distortion correction method
US20190080517A1 * 2016-04-15 2019-03-14 Center Of Human-Centered Interaction For Coexistence Apparatus and method for three-dimensional information augmented video see-through display, and rectification apparatus
CN112286353A * 2020-10-28 2021-01-29 上海盈赞通信科技有限公司 General-purpose image processing method and device for VR glasses
CN115439549A * 2021-08-27 2022-12-06 北京车和家信息技术有限公司 Calibration method, apparatus, device and medium for distortion center
CN115587952A * 2022-12-13 2023-01-10 腾讯科技(深圳)有限公司 Distortion coefficient calibration method and apparatus for extended reality device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194223B (zh) * 2010-03-09 2015-09-23 新奥特(北京)视频技术有限公司 Distortion coefficient calibration method and system for zoom lens
CN108287678B (zh) * 2018-03-06 2020-12-29 京东方科技集团股份有限公司 Virtual-reality-based image processing method, apparatus, device and medium
CN108917602B (zh) * 2018-07-09 2019-07-02 北京航空航天大学 Panoramic structured-light vision measurement system and general distortion model parameter calibration method
CN110035273B (zh) * 2019-03-07 2024-04-09 北京理工大学 Distortion correction method and apparatus, and display device using same


Also Published As

Publication number Publication date
CN115587952A (zh) 2023-01-10
CN115587952B (zh) 2023-03-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23902359

Country of ref document: EP

Kind code of ref document: A1