CN111866493A - Image correction method, device and equipment based on head-mounted display equipment - Google Patents


Info

Publication number
CN111866493A
Authority
CN
China
Prior art keywords
image
pixel point
matched
window
pixel
Prior art date
Legal status
Granted
Application number
CN202010519249.9A
Other languages
Chinese (zh)
Other versions
CN111866493B (en)
Inventor
吴涛 (Wu Tao)
Current Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Original Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Xiaoniao Kankan Technology Co Ltd
Priority to CN202010519249.9A (granted as CN111866493B)
Publication of CN111866493A
Application granted
Publication of CN111866493B
Legal status: Active

Classifications

    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H04N 13/106 Processing image signals
    • H04N 13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/296 Synchronisation thereof; Control thereof
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/80
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images

Abstract

The invention discloses an image correction method, device and equipment based on a head-mounted display device. The method comprises: acquiring a first image captured by a first camera and a second image captured by a second camera; performing stereo correction on the second image according to the first image to obtain a corrected second image; and converting the first image and the corrected second image according to rotation angle information between the optical axis of the first camera and the optical axis of the lens corresponding to the first camera, to obtain the converted first and second images.

Description

Image correction method, device and equipment based on head-mounted display equipment
Technical Field
The embodiments of the present disclosure relate to the technical field of image processing, and more particularly to an image correction method based on a head-mounted display device, an image correction apparatus, and a head-mounted display device.
Background
Mixed Reality (MR) is a further development of virtual reality technology. By introducing real-scene information into the virtual environment, it builds an interactive feedback loop among the virtual world, the real world and the user, thereby enhancing the realism of the user experience.
At present, in the field of virtual reality and mixed reality, a head-mounted display device captures external environment information in real time through two outward-facing visible-light cameras. Specifically, the two visible-light cameras each capture an image, stereoscopic matching is performed on the two images, the matched images are rendered to obtain the external environment information, and the result is presented to the user through the head-mounted display device. However, mounting errors in the assembly of the head-mounted display device can leave the optical axes of the two visible-light cameras not perfectly parallel, which distorts the rendered image. In particular, when the user turns the head left or right, raises or lowers the head, or moves back and forth, the difference between the rendered image and the natural parallax of the human eyes becomes obvious, causing noticeable dizziness and a sense of image blur.
Therefore, it is necessary to provide a new scheme for image correction based on a head-mounted display device.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a new technical solution for image correction based on a head-mounted display device.
According to a first aspect of the embodiments of the present disclosure, there is provided an image correction method based on a head-mounted display device, the method including:
Acquiring a first image acquired by a first camera and a second image acquired by a second camera;
performing stereo correction on the second image according to the first image to obtain a corrected second image;
and converting the first image and the corrected second image according to the rotation angle information between the optical axis of the first camera and the optical axis of the lens corresponding to the first camera to obtain the converted first image and second image.
Optionally, the step of performing stereo correction on the second image according to the first image to obtain a corrected second image includes:
performing binocular stereo matching on the first image and the second image, and determining target pixel points in the second image, which are matched with each pixel point to be matched in the first image;
and replacing the pixel value of the matched target pixel point with the pixel value of the pixel point to be matched to obtain a corrected second image.
Optionally, the step of performing binocular stereo matching on the first image and the second image, and determining a target pixel point in the second image, which is matched with each pixel point to be matched in the first image, includes:
Determining an initial matching block of a pixel point to be matched in the first image on the second image based on a normalized cross-correlation matching algorithm;
aiming at the pixel points to be matched in the first image, performing sub-pixel matching on the initial matching block, and determining target pixel points matched with the pixel points to be matched.
Optionally, the step of determining an initial matching block of a pixel point to be matched in the first image on the second image based on a normalized cross-correlation matching algorithm includes:
constructing a first window with the pixel point to be matched as a center in the first image;
determining an epipolar line projected by the pixel points to be matched in the second image, and constructing a second window with the same size as the first window for each pixel point on the epipolar line;
and carrying out correlation calculation on the first window and the plurality of corresponding second windows to obtain an initial matching block with the highest correlation with the first window.
Optionally, for a pixel point to be matched in the first image, performing sub-pixel matching on the initial matching block, and determining a target pixel point matched with the pixel point to be matched includes:
Constructing a third window with an initial matching pixel point as a center in the second image, wherein the initial matching pixel point is a pixel point located in the center of the initial matching block;
moving a third window in a preset mode, and calculating the ratio of the gray value of a pixel point where the center coordinate of the moved third window is located to the average gray value of the first image;
and determining the pixel point of the central coordinate of the third window as a target pixel point until the ratio of the gray value of the pixel point of the central coordinate of the third window to the average gray value of the first image is greater than a preset ratio threshold.
Optionally, for a pixel point to be matched in the first image, performing sub-pixel matching on the initial matching block, and determining a target pixel point matched with the pixel point to be matched further includes:
acquiring gray values of a first reference pixel point, a second reference pixel point, a third reference pixel point and a fourth reference pixel point near a pixel point where the center coordinate of the third window is located based on the second image;
and performing weighted average calculation on the gray values of the first reference pixel point, the second reference pixel point, the third reference pixel point and the fourth reference pixel point to obtain the gray value of the pixel point where the center coordinate of the third window is located.
Optionally, for a pixel point to be matched in the first image, performing sub-pixel matching on the initial matching block, and determining a target pixel point matched with the pixel point to be matched further includes:
and determining the weight value of the reference pixel point according to the distance between the reference pixel point and the center coordinate of the third window.
Optionally, the method further includes:
acquiring the gray value of each pixel point in the reference matching block based on the gray image corresponding to the first image;
calculating gray gradient values of pixel points to be matched in the horizontal direction and the vertical direction of the reference matching block according to the gray values of the pixel points in the reference matching block;
and determining the average gray value of the first image according to the gray gradient values of the pixel points to be matched in the horizontal direction and the vertical direction of the reference matching block.
According to a second aspect of the embodiments of the present disclosure, there is provided an image correction apparatus comprising a processor and a memory, the memory storing computer instructions which, when executed by the processor, perform the method of any one of the first aspects of the embodiments of the present disclosure.
According to a third aspect of the embodiments of the present disclosure, there is provided a head-mounted display device including the image correction apparatus of the second aspect of the embodiments of the present disclosure.
According to the embodiments of the present disclosure, after the second image is stereoscopically corrected according to the first image, the first image and the corrected second image are converted according to the rotation angle information, so that the plane in which the converted first and second images lie is parallel to the focal plane of the lens optical axis. Rendering the external scene image from the converted first and second images prevents distortion and deformation in the rendered external scene image, thereby improving the image display effect of the head-mounted display device and the user experience.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the embodiments will be briefly described below. It is appreciated that the following drawings depict only certain embodiments of the invention and are therefore not to be considered limiting of its scope. For a person skilled in the art, it is possible to derive other relevant figures from these figures without inventive effort.
FIG. 1 is a schematic diagram of a hardware configuration of a head mounted display device that can be used to implement embodiments of the present disclosure;
FIG. 2 is a schematic flow chart of an image correction method based on a head-mounted display device according to an embodiment of the present disclosure;
FIG. 3 is a schematic illustration of a third window of an embodiment of the present disclosure;
FIG. 4 is a block diagram of an image correction apparatus according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an image correction apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Fig. 1 is a hardware configuration diagram of a head-mounted display device 100 that can be used to implement an image processing method based on the head-mounted display device according to an embodiment of the present disclosure.
In one embodiment, the head-mounted display device 100 may be a smart device such as a Virtual Reality (VR) device, an Augmented Reality (AR) device, or a Mixed Reality (MR) device.
In one embodiment, as shown in FIG. 1, the head-mounted display device 100 may include a processor 110, a memory 120, an interface device 130, a communication device 140, a display device 150, an input device 160, a speaker 170, a microphone 180, a camera 190, and the like. The processor 110 may include, but is not limited to, a central processing unit (CPU), a microcontroller unit (MCU), and the like, and may further include, for example, a graphics processing unit (GPU). The memory 120 may include, for example, but is not limited to, a ROM (read-only memory), a RAM (random access memory), and a nonvolatile memory such as a hard disk. The interface device 130 may include, for example, but is not limited to, a USB interface, a serial interface, a parallel interface, an infrared interface, and the like. The communication device 140 can perform wired or wireless communication, including, for example, WiFi communication, Bluetooth communication, and 2G/3G/4G/5G communication. The display device 150 may be, for example, a liquid crystal display, an LED display, or a touch display. The input device 160 may include, for example, but is not limited to, a touch screen, a keyboard, and somatosensory inputs. The speaker 170 and the microphone 180 may be used to output and input voice information. The camera 190 may be used to capture image information and may be, for example, a binocular camera. Although a plurality of devices are shown for the head-mounted display device 100 in FIG. 1, the present invention may involve only some of them.
For application in an embodiment of the present disclosure, the memory 120 of the head-mounted display device 100 is configured to store instructions for controlling the processor 110 to operate so as to support implementing an image processing method based on a head-mounted display device according to any embodiment provided by the first aspect of the present disclosure. A skilled person can design the instructions according to the embodiments disclosed herein. How instructions control the operation of a processor is well known in the art and is not described in detail here.
< method examples >
Referring to fig. 2, an image processing method based on a head mounted display device according to an embodiment of the disclosure is described. The method involves a head mounted display device, which may be the head mounted display device 100 as shown in fig. 1. The image processing method based on the head-mounted display equipment comprises the following steps:
step 2100, acquire a first image captured by a first camera and a second image captured by a second camera.
In this embodiment, a first image is acquired by the first camera and a second image is acquired by the second camera. Wherein the first image and the second image are acquired at the same time. Optionally, the first camera and the second camera may be triggered using the same clock trigger source to ensure hardware synchronization of the first camera and the second camera. In this embodiment, the image sizes of the first image and the second image are the same, wherein the image sizes can be set in various ways.
Further, in this embodiment, after acquiring a first image acquired by a first camera and acquiring a second image acquired by a second camera, the first image and the second image need to be preprocessed.
In one embodiment, after the first image and the second image are acquired, the first image and the second image are subjected to grayscale processing to obtain grayscale images corresponding to the first image and the second image, respectively. According to the embodiment of the disclosure, after the first image and the second image are acquired, the first image and the second image are subjected to gray processing so as to facilitate subsequent operation processing.
In one embodiment, after the first image and the second image are acquired, noise in the first image and the second image is eliminated. Optionally, the noise in the first image and the second image may be eliminated by a median filtering method.
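To make the preprocessing concrete, a minimal sketch in Python with OpenCV is given below; the function name, file names and the 3x3 median kernel are illustrative assumptions rather than values fixed by the patent.

```python
import cv2

def preprocess(image_bgr):
    # Grayscale conversion, as described for the subsequent matching steps.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Median filtering suppresses salt-and-pepper noise in the captured frame.
    return cv2.medianBlur(gray, 3)

first_gray = preprocess(cv2.imread("first.png"))
second_gray = preprocess(cv2.imread("second.png"))
```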
After acquiring the first image captured by the first camera and the second image captured by the second camera, step 2200 is entered.
Step 2200, performing stereo correction on the second image according to the first image to obtain a corrected second image.
In this embodiment, after the first image and the second image are preprocessed, the second image is stereoscopically corrected according to the first image, and a corrected second image is obtained, so that the corrected second image is coplanar with the first image.
In one embodiment, the step of performing stereo correction on the second image according to the first image to obtain a corrected second image may further include: steps 3100-3200.
3100, performing binocular stereo matching on the first image and the second image, and determining target pixel points in the second image, which are matched with each pixel point to be matched in the first image.
In this embodiment, after the first image and the second image are preprocessed, binocular stereo matching is performed on the preprocessed first image and the preprocessed second image, and a target pixel point matched with each pixel point to be matched in the first image can be found in the second image.
In an embodiment, performing binocular stereo matching on the first image and the second image, and determining a target pixel point in the second image that matches each pixel point to be matched in the first image, may further include: steps 3110-3120.
Step 3110, determining an initial matching block of a pixel point to be matched in the first image on the second image based on a normalized cross-correlation matching algorithm.
After the first image and the second image are preprocessed, the preprocessed first image and the preprocessed second image are matched through a Normalized Cross Correlation (NCC) algorithm, and then an initial matching block of a pixel point to be matched in the first image on the second image can be obtained. According to the embodiment of the disclosure, based on a normalized cross-correlation matching algorithm, an initial matching block of a pixel point to be matched in the first image on the second image is determined, and after the initial matching block is determined, a target pixel point is further searched based on the determined initial matching block, so that the matching speed can be improved.
In one embodiment, determining an initial matching block of a pixel point to be matched in the first image on the second image based on a normalized cross-correlation matching algorithm may further include: step 3111-3113.
Step 3111, a first window centered on the pixel point to be matched is constructed in the first image.
In this embodiment, for each pixel point to be matched in the first image, position information of the pixel point to be matched needs to be determined first. After the position information of the pixel point to be matched is determined, a first window is constructed based on the pixel point to be matched, the size of the first window may be set according to an actual situation, and optionally, in this embodiment of the present disclosure, the size of the first window is set to be 5 (pixels) × 5 (pixels).
Step 3112, determining an epipolar line of the projection of the pixel points to be matched in the second image, and constructing a second window with the same size as the first window for each pixel point on the epipolar line.
In this embodiment, the epipolar line refers to the line onto which a pixel point to be matched in the first image is projected in the second image. When the pixel point to be matched in the first image is known, the pixel point in the second image that matches it always lies on the epipolar line of that pixel point in the second image. This embodiment can determine the epipolar line of the projection of the pixel point to be matched in the second image from features such as the position information of the pixel point to be matched in the first image, so as to search for the match along the epipolar line in the second image.
In a more specific example, the step of determining the epipolar line of the projection of the pixel point to be matched in the second image in step 3112 may include: steps 4100-4300.
Step 4100, calibrating the first camera and the second camera to obtain the intrinsic matrix K1 and the distortion parameters of the first camera, the intrinsic matrix K2 and the distortion parameters of the second camera, and the extrinsic matrix between the first camera and the second camera, where the extrinsic matrix comprises a rotation matrix MatR and a translation matrix MatT. Optionally, the cameras may be calibrated by Zhang's calibration method.
Step 4200, obtaining the position of the pixel point to be matched in the first image.
Step 4300, determining the epipolar line of the projection of the pixel point to be matched in the second image according to the position of the pixel point to be matched in the first image, the intrinsic matrix K1 and the distortion parameters of the first camera, the intrinsic matrix K2 and the distortion parameters of the second camera, and the extrinsic matrix between the first camera and the second camera.
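The patent does not spell out how the epipolar line is computed from these quantities; a standard two-view-geometry sketch in Python is given below, under the assumption that the pixel coordinates have already been undistorted with the calibrated distortion parameters.

```python
import numpy as np

def epipolar_line(pt, K1, K2, MatR, MatT):
    """Epipolar line in the second image for an undistorted pixel pt = (u, v)
    of the first image, returned as (a, b, c) with a*u' + b*v' + c = 0.
    Uses the standard fundamental matrix F = K2^-T [MatT]x MatR K1^-1."""
    tx = np.array([[0.0, -MatT[2], MatT[1]],
                   [MatT[2], 0.0, -MatT[0]],
                   [-MatT[1], MatT[0], 0.0]])   # skew-symmetric matrix of MatT
    F = np.linalg.inv(K2).T @ tx @ MatR @ np.linalg.inv(K1)
    line = F @ np.array([pt[0], pt[1], 1.0])
    return line / np.hypot(line[0], line[1])    # normalize so (a, b) is unit
```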
Step 3113, performing correlation calculation on the first window and each of the corresponding second windows to obtain the initial matching block with the highest correlation with the first window.
In this embodiment, after determining an epipolar line of the projection of the pixel point to be matched in the second image, a second window having the same size as the first window is constructed for each pixel point on the corresponding epipolar line in the second image, so as to obtain a plurality of second windows. For example, the first window is 5 (pixels) × 5 (pixels), and the second window is 5 (pixels) × 5 (pixels).
After a second window with the same size as the first window is constructed for each pixel point on the corresponding polar line in the second image, according to an NCC algorithm, for the first window in the first image, correlation calculation is carried out on the first window and a plurality of second windows with the same size as the first window, and an initial matching block with the highest correlation with the first window is obtained.
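As a sketch of this step, the following Python code scores the 5 (pixels) × 5 (pixels) windows with the NCC measure and keeps the best candidate on the epipolar line; the helper names and the border handling are assumptions, not part of the patent.

```python
import numpy as np

def ncc(win_a, win_b):
    # Zero-mean normalized cross-correlation of two equally sized windows.
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def initial_match(first_gray, second_gray, pt, epipolar_pixels, half=2):
    # Compare the first window against a same-size second window centered on
    # every candidate pixel of the epipolar line; keep the highest score.
    u, v = pt
    win_a = first_gray[v - half:v + half + 1, u - half:u + half + 1].astype(float)
    best_score, best_pt = -1.0, None
    for u2, v2 in epipolar_pixels:
        win_b = second_gray[v2 - half:v2 + half + 1,
                            u2 - half:u2 + half + 1].astype(float)
        if win_b.shape != win_a.shape:   # skip windows clipped by the border
            continue
        score = ncc(win_a, win_b)
        if score > best_score:
            best_score, best_pt = score, (u2, v2)
    return best_pt, best_score
```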
After determining an initial matching block of a pixel point to be matched in the first image on the second image based on a normalized cross-correlation matching algorithm, entering step 3120.
Step 3120, for the pixel point to be matched in the first image, performing sub-pixel matching on the initial matching block, and determining the target pixel point matched with the pixel point to be matched.
In this embodiment, the initial matching pixel point matched with the pixel point to be matched in the first image may be determined according to the initial matching block. The initial matching pixel point is a pixel point located in the center of the initial matching block. After the initial matching block is determined according to the NCC algorithm, the initial matching block is further subjected to sub-pixel matching, and more accurate searching can be performed near the initial matching pixel point so as to determine a target pixel point matched with the pixel point to be matched. According to the embodiment of the disclosure, the sub-pixel matching is further performed after the initial matching block is determined, so that the matching accuracy can be improved.
In an embodiment, for the pixel point to be matched in the first image, the step of performing sub-pixel matching on the initial matching block and determining the target pixel point matched with the pixel point to be matched may further include: steps 3121-3123.
Step 3121, constructing a third window centered on the initial matching pixel point in the second image.
In this embodiment, the initial matching pixel point matched with the pixel point to be matched in the first image may be determined according to the initial matching block. The initial matching pixel point is a pixel point located at the center coordinate of the initial matching block. And after the position information of the initial matching pixel point in the second image is determined, a third window is constructed based on the initial matching pixel point. The size of the third window may be set according to actual conditions, and optionally, in the embodiment of the present disclosure, the size of the initial matching block is 5 (pixels) × 5 (pixels), and the size of the third window may also be set to 5 (pixels) × 5 (pixels).
Step 3122, moving the third window in a predetermined manner, and calculating the ratio of the gray value of the pixel point at the center coordinate of the moved third window to the average gray value of the first image.
In this embodiment, the initial matching pixel point is used as an initial position, the third window is moved in a predetermined manner, and the ratio of the gray value of the pixel point where the center coordinate of the third window is located to the average gray value of the first image is calculated after each movement. And determining whether the pixel point of the central coordinate of the third window is a target pixel point matched with the pixel point to be matched or not according to the ratio of the gray value of the pixel point of the central coordinate of the third window to the average gray value of the first image.
Moving the third window in the predetermined manner may mean, for example, moving the third window in a predetermined moving step within a predetermined moving range. The moving step defines the distance of each movement of the third window; it may be preset according to engineering experience or simulation experience, which is not limited in the embodiments of the present disclosure, and may include a moving step in the horizontal direction and a moving step in the vertical direction, for example 0.2 pixels in each direction. Further, the movement of the third window may be limited to a predetermined moving range, which may likewise be preset according to engineering experience or simulation experience. For example, if the location of the initial matching pixel point is LocalPoint_{u,v}, the moving range of the third window is [LocalPoint_{u,v} - 0.5, LocalPoint_{u,v} + 0.5].
Step 3123, when the ratio of the gray value of the pixel point at the center coordinate of the third window to the average gray value of the first image is greater than a predetermined ratio threshold, determining the pixel point at the center coordinate of the third window as the target pixel point.
In this embodiment, after the third window is created, the gray value of the pixel point where the center coordinate of the third window is located is calculated, and whether the ratio of the gray value of the pixel point where the center coordinate of the third window is located to the average gray value of the first image is greater than a predetermined ratio threshold is determined, so as to determine whether the pixel point where the center coordinate of the third window is located is a target pixel point matched with the pixel point to be matched.
In an embodiment, the step of calculating the gray value of the pixel point where the center coordinate of the third window is located may further include: steps 5100-5200.
Step 5100, obtaining, based on the gray-scale image corresponding to the second image, the gray values of a first reference pixel point, a second reference pixel point, a third reference pixel point and a fourth reference pixel point near the pixel point at the center coordinate of the third window.
Step 5200, performing weighted average calculation on the gray values of the first reference pixel point, the second reference pixel point, the third reference pixel point and the fourth reference pixel point to obtain the gray value of the pixel point where the center coordinate of the third window is located.
Alternatively, fig. 3 shows a schematic view of the third window. As shown in fig. 3, the pixel point at the center coordinate of the third window is point, and the first reference pixel point (point1), the second reference pixel point (point2), the third reference pixel point (point3) and the fourth reference pixel point (point4) are the pixel points located at the upper left, upper right, lower left and lower right of that pixel point, respectively. The gray value of the pixel point at the center coordinate of the third window can be calculated according to the following formula (1).
search_pixel = wTL*GrayPoint1 + wTR*GrayPoint2 + wBL*GrayPoint3 + wBR*GrayPoint4    (1)
where search_pixel is the gray value of the pixel point at the center coordinate of the third window; GrayPoint1 is the gray value of the first reference pixel point and wTL is its weight; GrayPoint2 is the gray value of the second reference pixel point and wTR is its weight; GrayPoint3 is the gray value of the third reference pixel point and wBL is its weight; GrayPoint4 is the gray value of the fourth reference pixel point and wBR is its weight.
In this embodiment, the weights of the first reference pixel point, the second reference pixel point, the third reference pixel point, and the fourth reference pixel point may be determined according to the distance between the reference pixel point and the center coordinate of the third window. Optionally, weights corresponding to the first reference pixel point, the second reference pixel point, the third reference pixel point, and the fourth reference pixel point may be calculated according to the following formulas (2) - (5).
wTL = (1 - subpix_x) * (1 - subpix_y)    (2)
wTR = subpix_x * (1 - subpix_y)    (3)
wBL = (1 - subpix_x) * subpix_y    (4)
wBR = subpix_x * subpix_y    (5)
where subpix_x is the fractional part of the x coordinate of the center of the third window, and subpix_y is the fractional part of the y coordinate of the center of the third window.
For example, if after a movement the pixel coordinate of the center of the third window is [6.2, 8.2], then subpix_x is 0.2 and subpix_y is 0.2.
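Formulas (1) to (5) together amount to bilinear interpolation at a fractional window center. A minimal sketch in Python follows; mapping the four reference pixel points to the upper-left, upper-right, lower-left and lower-right integer neighbours assumes the usual row-major image indexing.

```python
import numpy as np

def sample_subpixel(gray, x, y):
    """Gray value at the fractional center (x, y) of the third window,
    mirroring formulas (1)-(5)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    subpix_x, subpix_y = x - x0, y - y0   # e.g. (6.2, 8.2) -> (0.2, 0.2)
    wTL = (1 - subpix_x) * (1 - subpix_y)
    wTR = subpix_x * (1 - subpix_y)
    wBL = (1 - subpix_x) * subpix_y
    wBR = subpix_x * subpix_y
    return (wTL * float(gray[y0, x0]) + wTR * float(gray[y0, x0 + 1])
            + wBL * float(gray[y0 + 1, x0]) + wBR * float(gray[y0 + 1, x0 + 1]))
```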
In one embodiment, the average gray value of the first image is calculated based on the gray-scale image corresponding to the first image. The step of acquiring the average gray value of the first image may further comprise: steps 6100-6300.
Step 6100, obtaining the gray value of each pixel point in the reference matching block based on the gray image corresponding to the first image. The reference matching block is an image block which is constructed by taking a pixel point to be matched in the first image as a center and has the same size as the initial matching block.
Step 6200, calculating the gray gradient values of the pixel point to be matched in the horizontal and vertical directions of the reference matching block according to the gray values of all pixel points in the reference matching block.
Step 6300, determining the average gray value of the first image according to the gray gradient values of the pixel point to be matched in the horizontal and vertical directions of the reference matching block.
In one embodiment, the ratio threshold may be preset according to engineering experience or simulation experience, which is not limited by the embodiments of the present disclosure. For example, the ratio threshold is 0.95.
The following describes the determination of a target pixel point by using a specific example.
When the pixel point at the center coordinate of the third window is the initial matching pixel point, the gray value of that pixel point is calculated according to formula (1), and the current ratio of this gray value to the average gray value of the first image is computed. If the current ratio is greater than the predetermined ratio threshold of 0.95, the initial matching pixel point is determined as the target pixel point matched with the pixel point to be matched of the first image. If the current ratio is not greater than 0.95, the third window is moved within the predetermined moving range by the predetermined moving step. The gray value of the pixel point at the center coordinate of the moved third window is calculated according to formula (1), and the next ratio of this gray value to the average gray value of the first image is computed. If the next ratio is greater than 0.95, the pixel point at the center coordinate of the third window is determined as the target pixel point matched with the pixel point to be matched of the first image. If the next ratio is still not greater than 0.95, the third window continues to move until the ratio of the gray value of the pixel point at its center coordinate to the average gray value of the first image is greater than the predetermined ratio threshold, at which point that pixel point is determined as the target pixel point.
In an embodiment, for the pixel point to be matched in the first image, the step of performing sub-pixel matching on the initial matching block and determining the target pixel point may further include: if the number of times the third window has been moved reaches a predetermined count threshold, determining the target pixel point according to the ratio of the gray value of the pixel point at the center coordinate of the third window after each movement to the average gray value of the first image. Optionally, the pixel point at the center coordinate of the third window whose gray value is closest to the average gray value of the first image may be determined as the target pixel point. The count threshold may be preset according to engineering experience or simulation experience, which is not limited in the embodiments of the present disclosure. For example, the count threshold is 4.
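The search described in the example above, including the closest-value fallback of this embodiment, can be sketched as follows; it reuses sample_subpixel from the earlier sketch, and the raster scan order over the moving range is an assumption consistent with, but not dictated by, the text.

```python
import numpy as np

def subpixel_search(second_gray, init_pt, avg_gray_first,
                    step=0.2, search=0.5, ratio_thresh=0.95):
    # Shift the third-window center in `step`-pixel increments inside
    # [init - search, init + search] and stop as soon as the ratio test of
    # step 3123 passes.
    u0, v0 = init_pt
    best = (float("inf"), (float(u0), float(v0)))
    offsets = np.arange(-search, search + 1e-9, step)
    for dv in offsets:
        for du in offsets:
            g = sample_subpixel(second_gray, u0 + du, v0 + dv)
            if g / avg_gray_first > ratio_thresh:
                return (u0 + du, v0 + dv)   # ratio test passed
            best = min(best, (abs(g - avg_gray_first), (u0 + du, v0 + dv)))
    # Fallback: once the move budget is exhausted, return the center whose
    # gray value is closest to the average gray value of the first image.
    return best[1]
```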
After determining the target pixel point in the second image that matches each pixel point to be matched in the first image, step 3200 is performed.
Step 3200, replacing the pixel value of the matched target pixel point with the pixel value of the pixel point to be matched to obtain a corrected second image.
In this embodiment, after determining the target pixel point in the second image that matches each pixel point to be matched in the first image, the pixel value of the pixel point to be matched in the second image may be replaced with the pixel value corresponding to the first image, so as to obtain the corrected second image. And the plane where the second image is located after correction is parallel to the plane where the first image is located.
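A sketch of this replacement step in Python is given below; the `matches` mapping and the rounding of fractional target positions to integer pixels are assumptions about details the patent leaves open.

```python
import numpy as np

def build_corrected_second(first_gray, second_shape, matches):
    # `matches` maps each to-be-matched pixel (u, v) of the first image to
    # its (possibly subpixel) target (u2, v2) in the second image.
    corrected = np.zeros(second_shape, dtype=first_gray.dtype)
    for (u, v), (u2, v2) in matches.items():
        corrected[int(round(v2)), int(round(u2))] = first_gray[v, u]
    return corrected
```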
After the corrected second image is obtained, step 2300 is entered.
Step 2300, converting the first image and the corrected second image according to the rotation angle information between the optical axis of the first camera and the optical axis of the lens corresponding to the first camera, and obtaining a converted first image and a converted second image.
In this embodiment, a first lens and a second lens as well as a first camera and a second camera are disposed in the head-mounted display device. The first camera is arranged corresponding to the first lens, and the second camera is arranged corresponding to the second lens. In order to avoid an obvious difference between the rendered external scene and the natural parallax of the human eyes, the optical axis of the first camera needs to be parallel to the optical axis of the first lens corresponding to the first camera, that is, the focal plane of the first camera needs to be parallel to the focal plane of the first lens; likewise, the optical axis of the second camera needs to be parallel to the optical axis of the second lens corresponding to the second camera, that is, the focal plane of the second camera needs to be parallel to the focal plane of the second lens. When the head-mounted display device is assembled, the optical axis of the first lens is parallel to the optical axis of the second lens. However, due to mounting errors, the optical axis of the first camera and the optical axis of the first lens are not perfectly parallel, and the optical axis of the second camera and the optical axis of the second lens are not perfectly parallel. Therefore, the first image and the second image need to be stereoscopically corrected so that the corrected first and second images are coplanar, and the corrected images then need to be converted so that the plane in which the converted first and second images lie is parallel to the focal plane of the lens optical axis.
In one embodiment, the first image and the corrected second image may be converted according to rotation angle information between the optical axis of the first camera and the optical axis of the lens corresponding to the first camera. Here, the rotation angle information may be a rotation matrix RotMat_c1 between the optical axis of the first camera and the optical axis of the corresponding first lens. The rotation angle information may be acquired in advance; optionally, when the head-mounted display device leaves the factory, an optical-axis correction tool is used to measure the rotation matrix RotMat_c1 between the optical axis of the first camera and the optical axis of the corresponding first lens. The first image and the corrected second image are converted based on the rotation angle information, so that the plane in which the converted first and second images lie is parallel to the focal plane of the lens of the head-mounted display device.
In a specific example, the coordinate of each pixel point in the first image is (u_i, v_j, 1), where i ∈ [0, Width_Image1 - 1], j ∈ [0, Height_Image1 - 1], and Width_Image1 and Height_Image1 denote the row and column resolutions of the first image. The coordinates of each pixel point in the first image are converted according to the following formula to obtain the converted first image:

(u_i', v_j', 1) = RotMat_c1 * (u_i, v_j, 1)

where RotMat_c1 is the rotation matrix between the optical axis of the first camera and the optical axis of the corresponding first lens.

Likewise, the coordinate of each pixel point in the second image is (u_i, v_j, 1), where i ∈ [0, Width_Image2 - 1], j ∈ [0, Height_Image2 - 1], and Width_Image2 and Height_Image2 denote the row and column resolutions of the second image. The coordinates of each pixel point in the second image are converted according to the same formula to obtain the converted second image.
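A sketch of applying the conversion formula to an entire pixel grid follows; dividing by the third homogeneous component to return to the image plane is an assumption the patent leaves implicit.

```python
import numpy as np

def convert_coords(width, height, rot_mat):
    # Apply RotMat_c1 (or RotMat_c2) to every homogeneous pixel coordinate
    # (u_i, v_j, 1) and map the result back onto the image plane.
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pts = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    out = rot_mat @ pts.astype(float)
    return (out[:2] / out[2]).T.reshape(height, width, 2)   # (u', v') map
```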
In this embodiment, after the first image and the second image are subjected to stereo correction, the corrected second image and the first image are converted according to the rotation angle information, so that the plane where the converted first image and second image are located is parallel to the focal plane of the optical axis of the lens. Using the converted first image and second image, external scene information may be rendered to display the scene information to a user.
In another embodiment, the first image may be stereoscopically corrected based on the second image to obtain a corrected first image. Further, the second image and the corrected first image may be converted according to rotation angle information between the optical axis of the second camera and the optical axis of the lens corresponding to the second camera. In this case, the rotation angle information may be a rotation matrix RotMat_c2 between the optical axis of the second camera and the optical axis of the corresponding second lens. The rotation angle information may be acquired in advance; optionally, when the head-mounted display device leaves the factory, an optical-axis correction tool is used to measure the rotation matrix RotMat_c2 between the optical axis of the second camera and the optical axis of the corresponding second lens. The embodiments of the disclosure are not limited in this respect. The second image and the corrected first image are converted based on the rotation angle information, so that the plane in which the converted first and second images lie is parallel to the focal plane of the lens of the head-mounted display device. Using the converted first and second images, external scene information may be rendered and displayed to the user.
According to the embodiments of the present disclosure, after the second image is stereoscopically corrected according to the first image, the first image and the corrected second image are converted according to the rotation angle information, so that the plane in which the converted first and second images lie is parallel to the focal plane of the lens optical axis. Rendering the external scene image from the converted first and second images prevents distortion and deformation in the rendered external scene image, thereby improving the image display effect of the head-mounted display device and the user experience.
< first embodiment of the apparatus >
Referring to fig. 4, the embodiment of the present disclosure provides an image correction apparatus 40, and the image correction apparatus 40 includes an acquisition module 41, a correction module 42, and a conversion module 43.
The acquisition module 41 may be configured to acquire a first image captured by a first camera and a second image captured by a second camera.
The correction module 42 may be configured to perform stereo correction on the second image according to the first image, so as to obtain a corrected second image.
The conversion module 43 may be configured to convert the first image and the corrected second image according to information of a rotation angle between an optical axis of the first camera and an optical axis of a lens corresponding to the first camera, so as to obtain a converted first image and a converted second image.
Referring to fig. 5, the embodiment of the present disclosure further provides an image correction apparatus 50, where the image correction apparatus 50 includes a processor 51 and a memory 52. The memory 52 is used for storing a computer program, and the computer program is executed by the processor 51 to implement the image correction method based on the head-mounted display device disclosed in any of the foregoing embodiments.
< example II of the apparatus >
Referring to fig. 6, an embodiment of the present disclosure provides a head mounted display device 60, which may be the head mounted display device 100 shown in fig. 1. The head-mounted display device 60 comprises the image correction apparatus 61 of any of the previous embodiments.
In one embodiment, the head-mounted display device 100 may be a smart device such as a Virtual Reality (VR) device, an Augmented Reality (AR) device, or a Mixed Reality (MR) device.
According to the embodiments of the present disclosure, the image correction apparatus may, after stereoscopically correcting the second image according to the first image, convert the first image and the corrected second image according to the rotation angle information, so that the plane in which the converted first and second images lie is parallel to the focal plane of the lens optical axis. Rendering the external scene image from the converted first and second images prevents distortion and deformation in the rendered external scene image, thereby improving the image display effect of the head-mounted display device and the user experience.
The embodiments in the present disclosure are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments, but it should be clear to those skilled in the art that the embodiments described above can be used alone or in combination with each other as needed. In addition, for the device embodiment, since it corresponds to the method embodiment, the description is relatively simple, and for relevant points, refer to the description of the corresponding parts of the method embodiment. The system embodiments described above are merely illustrative, in that modules illustrated as separate components may or may not be physically separate.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, and a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), with state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. An image correction method based on a head-mounted display device, the method comprising:
acquiring a first image acquired by a first camera and a second image acquired by a second camera;
performing stereo correction on the second image according to the first image to obtain a corrected second image;
and converting the first image and the corrected second image according to the rotation angle information between the optical axis of the first camera and the optical axis of the lens corresponding to the first camera to obtain the converted first image and second image.
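By way of illustration only and not as a limitation of the claims, the following minimal Python/OpenCV sketch traces the flow of claim 1. The helper name stereo_correct, the per-eye angle_deg parameter, and the rotation about the image center are assumptions of the sketch; the claim only requires that rotation angle information between the first camera's optical axis and the corresponding lens's optical axis be used.

    import cv2

    def correct_images(first_img, second_img, angle_deg, stereo_correct):
        # first_img / second_img: frames acquired from the first and second
        # cameras (e.g. numpy arrays from cv2.imread or a camera driver).
        corrected_second = stereo_correct(first_img, second_img)  # claim 2
        # Convert both images by the rotation angle between the camera's
        # optical axis and the optical axis of its corresponding lens; the
        # angle value and the rotation about the image center stand in for
        # the device's calibrated "rotation angle information".
        h, w = first_img.shape[:2]
        rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
        converted_first = cv2.warpAffine(first_img, rot, (w, h))
        converted_second = cv2.warpAffine(corrected_second, rot, (w, h))
        return converted_first, converted_second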
2. The method of claim 1, wherein the step of performing stereo correction on the second image according to the first image to obtain a corrected second image comprises:
performing binocular stereo matching on the first image and the second image, and determining the target pixel point in the second image that matches each pixel point to be matched in the first image;
and replacing the pixel value of the matched target pixel point with the pixel value of the pixel point to be matched to obtain a corrected second image.
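A correspondingly minimal sketch of claim 2, assuming a hypothetical match_pixel callback that returns, for each pixel point to be matched, its target pixel point in the second image (claims 3 to 5 describe one way of finding it):

    def stereo_correct(first_img, second_img, match_pixel):
        # match_pixel(y, x) -> (ty, tx): coordinates in the second image of
        # the target pixel matched to pixel (y, x) of the first image. For
        # simplicity the sketch assumes integer coordinates are returned
        # (e.g. the rounded sub-pixel match).
        corrected = second_img.copy()
        h, w = first_img.shape[:2]
        for y in range(h):
            for x in range(w):
                ty, tx = match_pixel(y, x)
                # Replace the matched target pixel's value with the value of
                # the pixel to be matched, as recited in the claim.
                corrected[ty, tx] = first_img[y, x]
        return corrected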
3. The method of claim 2, wherein the step of performing binocular stereo matching on the first image and the second image, and determining a target pixel point in the second image which is matched with each pixel point to be matched in the first image comprises:
determining, based on a normalized cross-correlation matching algorithm, an initial matching block on the second image for a pixel point to be matched in the first image;
for each pixel point to be matched in the first image, performing sub-pixel matching on the initial matching block, and determining the target pixel point matched with that pixel point to be matched.
4. The method of claim 3, wherein the step of determining an initial matching block of the pixel point to be matched in the first image on the second image based on a normalized cross-correlation matching algorithm comprises:
constructing, in the first image, a first window centered on the pixel point to be matched;
determining the epipolar line onto which the pixel point to be matched projects in the second image, and constructing, for each pixel point on the epipolar line, a second window of the same size as the first window;
and performing correlation calculation between the first window and the plurality of corresponding second windows to obtain, as the initial matching block, the second window having the highest correlation with the first window.
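An illustrative sketch of the normalized cross-correlation search of claims 3 and 4, with two simplifying assumptions: the window half-width (4) is arbitrary, and the epipolar line is taken to be the same image row, whereas the claim determines the projected epipolar line explicitly:

    import numpy as np

    def ncc(a, b):
        # Zero-mean normalized cross-correlation of two equal-sized windows.
        a = a.astype(np.float64) - a.mean()
        b = b.astype(np.float64) - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def initial_match(first_gray, second_gray, y, x, half=4):
        # First window, centered on the pixel to be matched; assumes (y, x)
        # lies at least `half` pixels from the image border.
        tpl = first_gray[y - half:y + half + 1, x - half:x + half + 1]
        best_x, best_score = x, -1.0
        # Simplification: row y of the second image stands in for the
        # epipolar line; one second window is built per pixel on it.
        for cx in range(half, second_gray.shape[1] - half):
            win = second_gray[y - half:y + half + 1, cx - half:cx + half + 1]
            score = ncc(tpl, win)
            if score > best_score:
                best_score, best_x = score, cx
        # The highest-correlation second window is the initial matching
        # block; its center (y, best_x) is the initial matching pixel point.
        return y, best_x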
5. The method according to claim 3, wherein the step of performing sub-pixel matching on the initial matching block for the pixel point to be matched in the first image and determining the target pixel point matched with the pixel point to be matched comprises:
constructing, in the second image, a third window centered on an initial matching pixel point, wherein the initial matching pixel point is the pixel point located at the center of the initial matching block;
moving the third window in a preset manner, and calculating the ratio of the gray value of the pixel point at the center coordinate of the moved third window to the average gray value of the first image;
and continuing the movement until the ratio of the gray value of the pixel point at the center coordinate of the third window to the average gray value of the first image is greater than a preset ratio threshold, and determining the pixel point at the center coordinate of the third window at that time as the target pixel point.
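An illustrative sketch of the window movement of claim 5. The half-pixel offset schedule is hypothetical, standing in for the unspecified "preset mode", and the gray_at_fn and avg_gray arguments anticipate the computations of claims 6 to 8:

    def subpixel_match(second_gray, init_y, init_x, avg_gray, ratio_thresh,
                       gray_at_fn):
        # Hypothetical "preset mode": probe half-pixel offsets around the
        # initial matching pixel (the claim does not spell the schedule out).
        offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.0, -0.5),
                   (-0.5, 0.0), (0.5, 0.5), (0.5, -0.5), (-0.5, 0.5),
                   (-0.5, -0.5)]
        for dy, dx in offsets:
            cy, cx = init_y + dy, init_x + dx
            # gray_at_fn gives the gray value at the (possibly fractional)
            # window center; claims 6-7, sketched after claim 7 below,
            # supply one such function.
            if gray_at_fn(second_gray, cy, cx) / avg_gray > ratio_thresh:
                return cy, cx  # target pixel point
        return float(init_y), float(init_x)  # fall back to the initial match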
6. The method according to claim 5, wherein the step of performing sub-pixel matching on the initial matching block for the pixel point to be matched in the first image and determining the target pixel point matched with the pixel point to be matched further comprises:
acquiring, based on the gray image corresponding to the second image, the gray values of a first reference pixel point, a second reference pixel point, a third reference pixel point and a fourth reference pixel point near the pixel point at the center coordinate of the third window;
and performing a weighted average over the gray values of the first, second, third and fourth reference pixel points to obtain the gray value of the pixel point at the center coordinate of the third window.
7. The method according to claim 6, wherein the step of performing sub-pixel matching on the initial matching block for the pixel point to be matched in the first image and determining the target pixel point matched with the pixel point to be matched further comprises:
determining the weight value of each reference pixel point according to the distance between that reference pixel point and the center coordinate of the third window.
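A sketch of the weighted four-neighbour gray value of claims 6 and 7. Inverse-distance weights are an assumption; the claims only require that each weight be determined from the reference pixel's distance to the window center:

    import numpy as np

    def gray_at(gray, cy, cx):
        # Gray value at a possibly fractional center coordinate (cy, cx),
        # computed as a weighted average of the four nearest reference
        # pixels (claim 6); each weight is derived from that reference
        # pixel's distance to the center coordinate (claim 7). Assumes
        # (cy, cx) is not on the last row or column of the image.
        y0, x0 = int(np.floor(cy)), int(np.floor(cx))
        refs = [(y0, x0), (y0, x0 + 1), (y0 + 1, x0), (y0 + 1, x0 + 1)]
        weights = np.array([1.0 / (np.hypot(cy - ry, cx - rx) + 1e-6)
                            for ry, rx in refs])  # nearer pixels weigh more
        values = np.array([gray[ry, rx] for ry, rx in refs],
                          dtype=np.float64)
        return float((weights * values).sum() / weights.sum())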
8. The method of claim 5, further comprising:
acquiring the gray value of each pixel point in the reference matching block based on the gray image corresponding to the first image;
calculating, according to the gray values of the pixel points in the reference matching block, the gray gradient values of the pixel point to be matched in the horizontal and vertical directions of the reference matching block;
and determining the average gray value of the first image according to the gray gradient values of the pixel point to be matched in the horizontal and vertical directions of the reference matching block.
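A sketch of the average-gray computation of claim 8. The block size and, in particular, the formula combining the horizontal and vertical gradients into the average gray value are hypothetical readings, since the claim does not specify them:

    import numpy as np

    def average_gray(first_gray, y, x, half=4):
        # Reference matching block: a window around the pixel to be matched
        # in the first image (size not fixed by the claim; half-width 4 is
        # assumed, and (y, x) is assumed at least `half` pixels from the
        # image border).
        block = first_gray[y - half:y + half + 1,
                           x - half:x + half + 1].astype(np.float64)
        # Horizontal and vertical gray gradient values of the pixel to be
        # matched, approximated by central differences inside the block.
        gx = (block[half, half + 1] - block[half, half - 1]) / 2.0
        gy = (block[half + 1, half] - block[half - 1, half]) / 2.0
        # Hypothetical reading: damp the block mean by the local gradient
        # magnitude so strongly textured regions yield a more conservative
        # average; the claim leaves the exact relation open.
        return block.mean() / (1.0 + np.hypot(gx, gy) / 255.0)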
9. An image correction apparatus comprising a processor and a memory, the memory storing computer instructions which, when executed by the processor, perform the method of any one of claims 1 to 8.
10. A head-mounted display device characterized by comprising the image correction apparatus according to claim 9.
CN202010519249.9A 2020-06-09 2020-06-09 Image correction method, device and equipment based on head-mounted display equipment Active CN111866493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010519249.9A CN111866493B (en) 2020-06-09 2020-06-09 Image correction method, device and equipment based on head-mounted display equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010519249.9A CN111866493B (en) 2020-06-09 2020-06-09 Image correction method, device and equipment based on head-mounted display equipment

Publications (2)

Publication Number Publication Date
CN111866493A true CN111866493A (en) 2020-10-30
CN111866493B CN111866493B (en) 2022-01-28

Family

ID=72986285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010519249.9A Active CN111866493B (en) 2020-06-09 2020-06-09 Image correction method, device and equipment based on head-mounted display equipment

Country Status (1)

Country Link
CN (1) CN111866493B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000310747A (en) * 1999-02-26 2000-11-07 Mr System Kenkyusho:Kk Image observation device
CN101163236A (en) * 2006-10-10 2008-04-16 ITT Manufacturing Enterprises Inc. A system and method for dynamically correcting parallax in head borne video systems
CN101641963A (en) * 2007-03-12 2010-02-03 Canon Inc. Head mounted image-sensing display device and composite image generating apparatus
JP2009140125A (en) * 2007-12-05 2009-06-25 National Institute Of Advanced Industrial & Technology Information presentation device
CN104392447A (en) * 2014-11-28 2015-03-04 Southwest University of Science and Technology Image matching method based on gray scale gradient
CN106131536A (en) * 2016-08-15 2016-11-16 Wanxiang 3D Vision Technology (Beijing) Co., Ltd. Naked-eye 3D augmented reality interactive exhibition system and exhibiting method thereof
CN109302600A (en) * 2018-12-06 2019-02-01 Chengdu Technological University Stereo scene filming apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN Bo: "Research on 3D Reconstruction Technology of Binocular Vision with Speckle Illumination", China Master's Theses Full-text Database - Information Science and Technology *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023115460A1 (en) * 2021-12-20 2023-06-29 Goertek Inc. Image correction method and apparatus, electronic device, and head-mounted display device

Also Published As

Publication number Publication date
CN111866493B (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN107223269B (en) Three-dimensional scene positioning method and device
JP6258953B2 (en) Fast initialization for monocular visual SLAM
CN112652016B (en) Point cloud prediction model generation method, pose estimation method and pose estimation device
US20230260145A1 (en) Depth Determination for Images Captured with a Moving Camera and Representing Moving Features
CN109743626B (en) Image display method, image processing method and related equipment
WO2020069049A1 (en) Employing three-dimensional data predicted from two-dimensional images using neural networks for 3d modeling applications
US20210241495A1 (en) Method and system for reconstructing colour and depth information of a scene
CN111739005B (en) Image detection method, device, electronic equipment and storage medium
CN112561978B (en) Training method of depth estimation network, depth estimation method of image and equipment
CN109640066B (en) Method and device for generating high-precision dense depth image
CN106782260B (en) Display method and device for virtual reality motion scene
US20170374256A1 (en) Method and apparatus for rolling shutter compensation
CN114785996A (en) Virtual reality parallax correction
CN111539973A (en) Method and device for detecting pose of vehicle
US20190079158A1 (en) 4d camera tracking and optical stabilization
US11403781B2 (en) Methods and systems for intra-capture camera calibration
CN113689578B (en) Human body data set generation method and device
CN113711276A (en) Scale-aware monocular positioning and mapping
CN113361365B (en) Positioning method, positioning device, positioning equipment and storage medium
CN111866493B (en) Image correction method, device and equipment based on head-mounted display equipment
EP4049245B1 (en) Augmented reality 3d reconstruction
CN113870213A (en) Image display method, image display device, storage medium, and electronic apparatus
US11010865B2 (en) Imaging method, imaging apparatus, and virtual reality device involves distortion
CN111866492A (en) Image processing method, device and equipment based on head-mounted display equipment
US11741671B2 (en) Three-dimensional scene recreation using depth fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant