CN115761005A - Virtual reality equipment adjusting method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115761005A
Authority
CN
China
Prior art keywords: camera, matrix, coordinate system, binocular, virtual reality
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Application number
CN202211478252.6A
Other languages
Chinese (zh)
Inventor
谢四化
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luxshare Precision Technology Nanjing Co Ltd
Original Assignee
Luxshare Precision Technology Nanjing Co Ltd
Application filed by Luxshare Precision Technology Nanjing Co Ltd
Priority to CN202211478252.6A
Publication of CN115761005A

Abstract

An embodiment of the invention discloses a virtual reality device adjustment method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: performing epipolar rectification on the binocular camera to obtain a correction rotation matrix for each camera; acquiring the optical engine intrinsic matrix of each lens optical engine, and the camera selection matrix and the camera translation matrix of each camera; converting each planar imaging point of the binocular camera into the lens optical engine coordinate system according to each camera's correction rotation matrix, the camera selection matrix, and each camera's translation matrix, to obtain each target imaging point in the lens optical engine coordinate system; and determining each target pixel point in the projection screen coordinate system according to each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device. The technical solution corrects the difference between the scene observed through the virtual reality device during video see-through and the real world, and reduces the amount of computation.

Description

Virtual reality equipment adjusting method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the technical field of virtual reality, and in particular to a virtual reality device adjustment method and apparatus, an electronic device, and a storage medium.
Background
See-Through (ST) is an important technology in virtual reality: through See-Through, the virtual space and the real world can be combined.

Currently, See-Through is mainly implemented in two ways: Optical See-Through (OST) and Video See-Through (VST). Video see-through has the advantages that visual integration is fully controlled by algorithms, complete occlusion between the virtual space and the real world is possible, and real objects can be modified to a greater degree.

However, because the mounting angle and spacing of the binocular camera are not consistent with the human eyes' interpupillary distance, the scene seen through video see-through differs from the real world. This difference can be corrected with methods such as image stitching and three-dimensional reconstruction, but their computational cost is too high.
Disclosure of Invention
The invention provides a virtual reality device adjustment method and apparatus, an electronic device, and a storage medium, which correct the difference between the scene observed through the virtual reality device during video see-through and the real world while reducing the amount of computation.
According to an aspect of the present invention, there is provided a virtual reality device adjustment method, including:
performing epipolar rectification on the binocular camera to obtain a correction rotation matrix for each camera;
acquiring the optical engine intrinsic matrix of each lens optical engine, and the camera selection matrix and the camera translation matrix of each camera;
converting each planar imaging point of the binocular camera into the lens optical engine coordinate system according to each camera's correction rotation matrix, the camera selection matrix, and each camera's translation matrix, to obtain each target imaging point in the lens optical engine coordinate system;
and determining each target pixel point in the projection screen coordinate system according to each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device.
According to another aspect of the present invention, there is provided a virtual reality device adjustment apparatus, including:
an epipolar rectification module, configured to perform epipolar rectification on the binocular camera to obtain a correction rotation matrix for each camera;
a matrix acquisition module, configured to acquire the optical engine intrinsic matrix of each lens optical engine, and the camera selection matrix and the camera translation matrix of each camera;
a coordinate system conversion module, configured to convert each planar imaging point of the binocular camera into the lens optical engine coordinate system according to each camera's correction rotation matrix, the camera selection matrix, and each camera's translation matrix, to obtain each target imaging point in the lens optical engine coordinate system;
and a target pixel point determination module, configured to determine each target pixel point in the projection screen coordinate system according to each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program enabling the at least one processor to perform the virtual reality device adjustment method of any embodiment of the invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions which, when executed, cause a processor to implement the virtual reality device adjustment method according to any embodiment of the present invention.
According to the technical solution of the embodiments of the invention, epipolar rectification is performed on the binocular camera to obtain a correction rotation matrix for each camera; the optical engine intrinsic matrix of each lens optical engine and the camera selection matrix and camera translation matrix of each camera are acquired; each planar imaging point of the binocular camera is converted into the lens optical engine coordinate system according to each camera's correction rotation matrix, the camera selection matrix, and each camera's translation matrix, to obtain each target imaging point in the lens optical engine coordinate system; and each target pixel point in the projection screen coordinate system is determined from each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device. This solves the problem of excessive computation: the difference between the scene observed through the virtual reality device during video see-through and the real world is corrected while the amount of computation is reduced.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a virtual reality device adjustment method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a binocular camera before epipolar rectification according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a binocular camera after epipolar rectification according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a lens optical engine according to an embodiment of the present invention;
fig. 5 is a flowchart of a virtual reality device adjustment method according to a second embodiment of the present invention;
fig. 6 is a schematic structural diagram of a camera optical center and a lens optical center of a virtual reality device according to a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of an imaging plane and a projection screen of a binocular camera according to a second embodiment of the present invention;
FIG. 8 is a system framework diagram of a See-Through system according to a second embodiment of the present invention;
fig. 9 is a schematic structural diagram of a virtual reality device adjusting apparatus according to a third embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device implementing the virtual reality device adjustment method according to the embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example One
Fig. 1 is a flowchart of a virtual reality device adjustment method according to a first embodiment of the present invention. This embodiment is applicable to adjusting a virtual reality device. The method can be executed by a virtual reality device adjusting apparatus, which can be implemented in hardware and/or software and configured in an electronic device that carries the virtual reality device adjustment function.
Referring to fig. 1, the virtual reality device adjusting method includes:
and S110, performing epipolar line correction on the binocular cameras to obtain a correction rotation matrix of each camera.
The binocular camera includes a left camera and a right camera. Due to the manufacturing process, there may be situations where the camera imaging plane of the left camera and the camera imaging plane of the right camera are not parallel or in the same straight line. When the binocular camera photographs an object point, a plane imaging point displayed on the camera imaging plane of the left camera may not coincide with a plane imaging point displayed on the camera imaging plane of the right camera. Therefore, by performing epipolar line correction on each camera of the binocular camera so that the camera imaging plane of the left camera and the camera imaging plane of the right camera are parallel and on the same straight line, it is ensured that the object point is displayed uniformly on the camera imaging planes of the cameras. The correction rotation matrix may be a rotation matrix for each camera transformed from pre-epipolar correction to post-epipolar correction.
Specifically, epipolar correction can be performed on each camera by using an epipolar correction algorithm, so that a correction rotation matrix of each camera is obtained. Illustratively, the epipolar rectification algorithm may include: bouguet (stereo correction) algorithm, etc.
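As a hedged illustration of this step, the sketch below obtains the correction rotation matrices with OpenCV's stereoRectify, which implements the Bouguet algorithm. The intrinsic matrices K_l and K_r, the distortion coefficients, the binocular relative pose R_rl and T_rl, and the image size are assumed to come from a prior stereo calibration.

```python
# A minimal sketch: correction rotation matrices via cv2.stereoRectify.
import cv2

def rectify_rotations(K_l, dist_l, K_r, dist_r, image_size, R_rl, T_rl):
    # R1 and R2 rotate the left/right cameras so that their imaging
    # planes become parallel and row-aligned (epipolar rectification).
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K_l, dist_l, K_r, dist_r, image_size, R_rl, T_rl)
    return R1, R2
```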
For example, fig. 2 is a schematic structural diagram of the binocular camera before epipolar rectification, and fig. 3 is a schematic structural diagram of the binocular camera after epipolar rectification. Here, C_l is the left camera optical center; C_r is the right camera optical center; P_w is an object point; the line connecting the left camera optical center C_l and the right camera optical center C_r is the baseline; the plane formed by connecting the object point P_w with the two camera optical centers is the epipolar plane; the intersection line of the epipolar plane and a camera imaging plane is an epipolar line; the intersection point of the baseline and a camera imaging plane is an epipole; the line connecting a camera optical center and the object point is the collimation axis; and the intersection point of the collimation axis and the camera imaging plane is a planar imaging point, where P_l and P_r are the planar imaging points.
Comparing the structural diagrams of the binocular camera before and after epipolar rectification shows the following: before rectification, the imaging planes of the left and right cameras are neither parallel nor aligned on the same line, and the optical axes of the two cameras are not parallel; after rectification, the two imaging planes are parallel and aligned on the same line, both epipoles are at infinity, the two optical axes are parallel, and the planar imaging point of the left camera is at the same height as that of the right camera.
S120, acquiring the optical engine intrinsic matrix of each lens optical engine, and the camera selection matrix and the camera translation matrix of each camera.

The lens optical engine may be the optical component in the virtual reality device through which the projection screen is viewed. Illustratively, the lens optical engine may be an aspheric lens, a Fresnel lens, a Pancake design, etc., where the Pancake design is a compact and lightweight thin lens. The optical engine intrinsic matrix, the camera selection matrix, and the camera translation matrix can be obtained by calibrating the virtual reality device. The camera selection matrix is used to change the angle between a camera's imaging plane and the projection screen so that each camera's imaging plane becomes parallel to the projection screen. The camera translation matrix is used to transform from the camera optical center to the lens optical center; it transforms a planar imaging point on a camera's imaging plane into the lens optical engine coordinate system.
S130, converting each planar imaging point of the binocular camera into the lens optical engine coordinate system according to each camera's correction rotation matrix, the camera selection matrix, and each camera's translation matrix, to obtain each target imaging point in the lens optical engine coordinate system.
The lens optical engine coordinate system may be a spatial coordinate system whose origin is the optical center of the lens optical engine. A target imaging point is the position of a planar imaging point after transformation into the lens optical engine coordinate system. Fig. 4 is a schematic structural diagram of the lens optical engine, where V_l is the optical center of the left lens optical engine; V_r is the optical center of the right lens optical engine; P_w is an object point; P_vl is the target imaging point corresponding to the left lens optical engine; and P_vr is the target imaging point corresponding to the right lens optical engine.
Specifically, epipolar rectification can be performed on each camera according to its correction rotation matrix to obtain each camera's rectified imaging plane; the angle between each rectified imaging plane and the projection screen is then corrected according to the camera selection matrix, so that each rectified imaging plane becomes parallel to the projection screen; finally, each planar imaging point is converted from the camera imaging plane into the lens optical engine coordinate system to obtain each target imaging point in that coordinate system.
S140, determining each target pixel point in the projection screen coordinate system according to each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device.

The projection screen coordinate system may be a planar coordinate system, whereas the lens optical engine coordinate system is a spatial coordinate system. Projecting a target imaging point from the lens optical engine coordinate system into the projection screen coordinate system completes the projection of that point; the resulting target pixel point is the projection, in the projection screen coordinate system, of the target imaging point in the lens optical engine coordinate system.

Specifically, each target imaging point in the lens optical engine coordinate system can be converted into the projection screen coordinate system according to each optical engine intrinsic matrix, yielding each target pixel point. Once the position of the object point in the projection screen coordinate system is obtained, a picture whose depth of field is consistent with the real world can be seen through the virtual reality device.

Through the lens optical engines, the virtual reality device observes the object point displayed on the projection screen; the same object point is displayed at different positions on the left and right screens, which produces depth of field. What the eye actually sees of a real-world object through the lens optical engine is that object's projection in the lens optical engine coordinate system. By determining the correspondence between the planar imaging points of the binocular camera and the target imaging points in the lens optical engine coordinate system, and then combining it with the correspondence between target imaging points and target pixel points in the projection screen coordinate system, the chain from the real-world picture captured by the binocular camera to the projection observed by the human eye through the lens optical engine is established. A picture whose depth of field is consistent with the real world is thus observed through the virtual reality device, which accomplishes the adjustment of the device.
For example, the following formulas may be used to determine the target pixel points:

p_vl = K_vl P_vl
p_vr = K_vr P_vr

where p_vl is the target pixel point corresponding to the left lens optical engine; K_vl is the intrinsic matrix of the left lens optical engine; P_vl is the target imaging point of the left lens optical engine; p_vr is the target pixel point corresponding to the right lens optical engine; K_vr is the intrinsic matrix of the right lens optical engine; and P_vr is the target imaging point of the right lens optical engine.
In an optional embodiment of the present invention, performing epipolar rectification on the binocular camera to obtain each camera's correction rotation matrix is embodied as: calibrating the binocular camera against checkerboard images to obtain the camera extrinsic matrices of each camera; calculating the binocular relative rotation matrix and the binocular relative translation matrix between the two cameras from the camera extrinsic matrices; and performing epipolar rectification on each camera according to the binocular relative rotation matrix and the binocular relative translation matrix, to obtain each camera's correction rotation matrix.

The binocular relative rotation matrix may be the rotation matrix of the transform between the two cameras, and the binocular relative translation matrix the translation matrix of that transform. Illustratively, the binocular relative rotation matrix may be the rotation matrix from the right camera to the left camera, and the binocular relative translation matrix the translation matrix from the right camera to the left camera. The camera extrinsic matrices may include a first camera extrinsic matrix and a second camera extrinsic matrix: the first is the rotation matrix of each camera's planar imaging points relative to that camera's optical center, and the second is the corresponding translation matrix.

Specifically, a checkerboard pattern can be placed on a fixed plane and captured by the binocular camera simultaneously from different directions and distances to obtain calibration data. A calibration algorithm can then be applied to each camera of the binocular camera separately to obtain its camera extrinsic matrices. Optionally, the calibration algorithm may include Zhang Zhengyou's calibration method or the calibrateCamera function in OpenCV (the open-source computer vision library), etc.
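A hedged sketch of this per-camera calibration with OpenCV follows; the checkerboard size and square size are illustrative assumptions, and the per-view rvecs/tvecs returned by calibrateCamera play the role of the camera extrinsics described above:

```python
import cv2
import numpy as np

def calibrate_single_camera(images, board=(9, 6), square=0.025):
    # 3D checkerboard corner coordinates in the board's own frame
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # cv2.Rodrigues(rvecs[i]) converts a view's rotation vector into the
    # 3x3 rotation matrix (the "first camera extrinsic matrix" above)
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    return K, dist, rvecs, tvecs
```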
From each camera's extrinsic matrices, the binocular relative rotation matrix and the binocular relative translation matrix between the two cameras can be calculated.
For example, the binocular relative rotation matrix and the binocular relative translation matrix between the binocular cameras may be calculated by the following formulas:
R_rl = R_tr R_tl^T
T_rl = T_tr − R_rl T_tl

where R_rl is the binocular relative rotation matrix from the right camera to the left camera; R_tr is the first camera extrinsic matrix of the right camera; R_tl is the first camera extrinsic matrix of the left camera; T_rl is the binocular relative translation matrix; T_tr is the second camera extrinsic matrix of the right camera; and T_tl is the second camera extrinsic matrix of the left camera.
An epipolar rectification algorithm can then be applied, using the binocular relative rotation matrix and the binocular relative translation matrix, to rectify each camera and obtain its correction rotation matrix. Illustratively, the epipolar rectification algorithm may include the Bouguet algorithm or the stereoRectify function in OpenCV, etc.

In this scheme, the binocular camera is calibrated against checkerboard images to obtain each camera's extrinsic matrices; the binocular relative rotation matrix and the binocular relative translation matrix are calculated from those extrinsics; and epipolar rectification is performed on each camera accordingly to obtain its correction rotation matrix. This rectifies each camera of the binocular camera, guarantees that the epipolar lines are aligned, makes the camera imaging planes parallel and aligned on the same line, and thereby ensures that the same object point captured by the two cameras appears consistently on their imaging planes.
In an optional embodiment of the present invention, after the target pixel points of the projection screen are determined, the method further comprises: interpolating the target pixel points according to the display parameters of the projection screen and the target pixel points.
The display parameters of the projection screen may include resolution, pixel density, and the like.
Specifically, the target pixel points can be interpolated according to the display parameters of the projection screen and the target pixel point values, so that the interpolated target pixel point values better match the display parameters of the projection screen.

Optionally, a scaling coefficient may also be applied to the target pixel points to adjust their scale, so that the scaled target pixel points better match the display parameters of the projection screen.

In this scheme, the target pixel points are interpolated according to the display parameters of the projection screen and the target pixel points; because the display parameters of the projection screen are taken into account, the target pixel points fit the projection screen better and the display effect of the projection screen is improved. A sketch of this resampling step is given below.
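The following sketch shows one way such a resampling could be done with OpenCV's remap; map_x and map_y are assumed to be float32 arrays, sized to the projection screen's resolution, giving for each screen pixel the source camera pixel computed by the projection above:

```python
import cv2

def resample_to_screen(camera_image, map_x, map_y):
    # Bilinear interpolation fills screen pixels that fall between
    # transformed target pixel points.
    return cv2.remap(camera_image, map_x, map_y, cv2.INTER_LINEAR)
```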
In an optional embodiment of the present invention, after the target pixel points of the projection screen are determined, the method further comprises: performing parameter labeling on an object in the projection screen based on eye-movement recognition, the parameter being a distance and/or a size.

Specifically, based on eye-movement recognition, the object in the projection screen at which the human eye is gazing can be labeled with parameters, displaying information such as the object's distance and/or size.

In this scheme, objects in the projection screen are labeled with parameters based on eye-movement recognition; combining the gaze point of the human eye to further annotate the displayed object improves the display flexibility of the virtual reality device. An illustrative sketch follows.
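An illustrative sketch of such labeling: the depth of the gazed object is recovered from the binocular disparity (Z = f·B/d for focal length f in pixels, baseline B, and disparity d) and drawn at the gaze pixel. The gaze coordinates and the disparity are assumed to come from the eye-movement recognition and stereo matching stages, which are not shown:

```python
import cv2

def label_gazed_object(frame, gaze_uv, disparity, focal_px, baseline_m):
    # gaze_uv is an (int u, int v) pixel position from the eye tracker
    depth_m = focal_px * baseline_m / max(disparity, 1e-6)
    cv2.putText(frame, f"{depth_m:.2f} m", gaze_uv,
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```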
According to the technical solution of this embodiment, epipolar rectification is performed on the binocular camera to obtain each camera's correction rotation matrix; the optical engine intrinsic matrix of each lens optical engine and the camera selection matrix and camera translation matrix of each camera are acquired; each planar imaging point of the binocular camera is converted into the lens optical engine coordinate system according to each camera's correction rotation matrix, the camera selection matrix, and each camera's translation matrix, to obtain each target imaging point in the lens optical engine coordinate system; and each target pixel point in the projection screen coordinate system is determined from each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device. Epipolar rectification of the binocular camera makes the camera imaging planes parallel and aligned on the same line, guaranteeing that the same object point is captured consistently by each camera. At the same time, the correction between the camera imaging planes and the projection screen makes them parallel, so that a planar imaging point on the binocular camera's imaging plane corresponds to a target imaging point in the lens optical engine coordinate system and to a target pixel point in the projection screen coordinate system, and the real-world picture captured by the binocular camera is consistent with the picture observed by the human eye in the projection screen coordinate system. Unlike image stitching, three-dimensional reconstruction, and similar methods that require a large amount of formula computation, the technical solution of this embodiment corrects the difference between the scene observed through the virtual reality device during video see-through and the real world with a small amount of computation.
Example Two
Fig. 5 is a flowchart of a virtual reality device adjustment method according to a second embodiment of the present invention. On the basis of the first embodiment, the step of converting each planar imaging point of the binocular camera into the lens optical engine coordinate system according to each camera's correction rotation matrix, the camera selection matrix, and each camera's translation matrix, to obtain each target imaging point in the lens optical engine coordinate system, is embodied as: rotating the planar imaging points of the binocular camera on the camera imaging plane into a reference camera coordinate system parallel to the projection screen according to each camera's correction rotation matrix and the camera selection matrix, to obtain the reference imaging points; and translating the reference imaging points from the reference camera coordinate system into the lens optical engine coordinate system according to each camera's translation matrix, to obtain the target imaging points in the lens optical engine coordinate system. For parts not detailed in this embodiment, reference may be made to the other embodiments of the present invention.
Referring to fig. 5, the virtual reality device adjusting method includes:
and S510, performing epipolar line correction on the binocular cameras to obtain a correction rotation matrix of each camera.
S520, acquiring a light machine internal reference matrix, a camera selection matrix and a camera translation matrix of each camera of each lens light machine.
S530, according to the correction rotation matrix and the camera selection matrix of each camera, rotating a plane imaging point of the binocular camera on the camera imaging plane to a reference camera coordinate system parallel to the projection screen to obtain a reference imaging point.
The reference camera coordinate system may be the coordinate system obtained by correcting the angle between the camera imaging plane and the projection screen, i.e., the coordinate system corresponding to the corrected camera imaging plane; it is parallel to the projection screen coordinate system. A reference imaging point is a planar imaging point transformed from the camera imaging plane into the reference camera coordinate system.

Specifically, the camera imaging plane of the binocular camera can be rotated into the reference camera coordinate system parallel to the projection screen according to each camera's correction rotation matrix and the camera selection matrix; applying this rotation to the planar imaging points on the camera imaging plane yields the reference imaging points.
For example, the reference imaging points may be determined using the following formulas:

P_vl' = R R_l P_l
P_vr' = R R_r P_r

where P_vl' is the reference imaging point of the left camera; R is the camera selection matrix; R_l is the correction rotation matrix of the left camera; P_l is the planar imaging point of the left camera; P_vr' is the reference imaging point of the right camera; R_r is the correction rotation matrix of the right camera; and P_r is the planar imaging point of the right camera.
S540, translating the reference imaging points from the reference camera coordinate system into the lens optical engine coordinate system according to each camera's translation matrix, to obtain the target imaging points in the lens optical engine coordinate system.

Specifically, the reference imaging points can be translated from the reference camera coordinate system into the lens optical engine coordinate system according to the camera translation matrices, yielding the target imaging points in the lens optical engine coordinate system.
For example, the target imaging points may be determined using the following formulas:

P_vl = P_vl' + T_l
P_vr = P_vr' + T_r

where P_vl is the target imaging point of the left lens optical engine; P_vl' is the reference imaging point of the left camera; T_l is the camera translation matrix of the left camera; P_vr is the target imaging point of the right lens optical engine; P_vr' is the reference imaging point of the right camera; and T_r is the camera translation matrix of the right camera.
S550, determining each target pixel point in the projection screen coordinate system according to each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device.
In an optional embodiment of the present invention, the line connecting the optical centers of the cameras is parallel and symmetric to the line connecting the optical centers of the lens optical engines.
Fig. 6 is a schematic structural diagram of the camera optical centers and the lens optical engine optical centers of the virtual reality device. As shown in fig. 6, the line connecting the optical centers of the left and right cameras is parallel and bilaterally symmetric to the line connecting the optical centers of the left and right lens optical engines, i.e., C_l C_r is parallel to V_l V_r and bilaterally symmetric to it. T_l is the camera translation matrix of the left camera; T_r is the camera translation matrix of the right camera.
In this scheme, the structure of the virtual reality device is constrained: the line between the camera optical centers is required to be parallel and symmetric to the line between the lens optical engine optical centers. This guarantees the accuracy of the camera translation matrices obtained by calibrating the device, improves the correspondence between the reference camera coordinate system and the lens optical engine coordinate system, and thus ensures the rendering quality of the video see-through scene.

In an optional embodiment of the present invention, the camera selection matrix is determined according to the actual angle between the projection screen coordinate system and the camera imaging plane of the binocular camera, together with a preset association between candidate angles and candidate camera selection matrices.

Both the candidate angle and the actual angle are angles between the projection screen coordinate system of a virtual reality device and the camera imaging plane of its binocular camera. The candidate angle, however, is an angle measured when the association between candidate angles and candidate camera selection matrices is computed; by measuring a large number of virtual reality devices, this association can be obtained in advance. The actual angle is the angle obtained by calibrating the virtual reality device currently being adjusted. The camera selection matrix of the device being adjusted is therefore obtained by measuring its actual angle and evaluating the preset association between candidate angles and candidate camera selection matrices. Using the camera selection matrix, the actual angle between each camera's imaging plane and the projection screen can be corrected so that the corrected camera imaging plane is parallel to the projection screen.

Specifically, the actual angle between the projection screen coordinate system and the camera imaging plane of the binocular camera can be determined by calibrating the virtual reality device, and the camera selection matrix is calculated from the actual angle based on the preset association between candidate angles and candidate camera selection matrices.
For example, the preset association between a candidate angle and a candidate camera selection matrix may take the following form:

R' = R(a') [the explicit matrix is rendered as an image in the original document]

where R' is the candidate camera selection matrix and a' is the candidate angle between the projection screen coordinate system and the camera imaging plane of the binocular camera.
Fig. 7 is a schematic structural diagram of the imaging planes and the projection screen of the binocular camera. As shown in fig. 7, the actual angle between the projection screen coordinate system and the camera imaging plane of the binocular camera is a. After the actual angle a is obtained, setting a' = a and substituting it into the above formula determines the camera selection matrix.
The camera selection matrix may then be written as:

R = R(a) [the explicit matrix is rendered as an image in the original document]

where R is the camera selection matrix and a is the actual angle between the projection screen coordinate system and the camera imaging plane of the binocular camera.
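Since the explicit matrix is only available as an image in the source, the following sketch assumes, purely for illustration, that R(a) is a rotation by the measured angle a about the vertical (y) axis of the camera coordinate system; the actual axis in the patent may differ.

```python
# Illustrative construction of a camera selection matrix R(a), under the
# stated assumption of a rotation about the vertical axis; a is in radians.
import numpy as np

def camera_selection_matrix(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])
```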
In this scheme, the camera selection matrix is determined from the actual angle between the projection screen coordinate system and the camera imaging plane of the binocular camera, using the preset association between candidate angles and candidate camera selection matrices. This improves the efficiency of determining the camera selection matrix and, in turn, the efficiency of adjusting the virtual reality device with it.
FIG. 8 is a system framework diagram of the See-Through system. As shown in fig. 8, the hardware devices in the See-Through system include a left camera, a right camera, a left projection screen, and a right projection screen. The left camera and the right camera may be two cameras of the same model. The left and right projection screens may also be a single projection screen; in that case, the image collected by the left camera is displayed on the left half of the projection screen and the image collected by the right camera on the right half. To avoid dead zones on the projection screen, the FOV (Field of View) of the left camera and the FOV of the right camera should overlap and cover as large a viewing area as possible. If gaze-point labeling is required, an infrared camera and LED (Light Emitting Diode) lamps for eye-movement recognition can be added to the hardware. The software modules in the See-Through system include a video acquisition module, an epipolar alignment module, a gaze-point labeling module, a projection transformation module, and a video display module. The video acquisition module receives the video data collected by the binocular camera; the epipolar alignment module rectifies the epipolar lines of the binocular camera; the gaze-point labeling module labels objects in the projection screen with parameters based on eye-movement recognition; the projection transformation module transforms planar imaging points on each camera's imaging plane into target pixel points in projection screen coordinates; and the video display module displays the video on the projection screen. A schematic sketch of this module chain is given below.
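A schematic, self-contained sketch of the See-Through module chain described above. Every stage is a stand-in stub; the module names follow the description, and all shapes and parameters are illustrative assumptions:

```python
import numpy as np

def acquire_video(camera_id):
    # video acquisition module: would read a frame from the given camera
    return np.zeros((720, 1280, 3), np.uint8)

def epipolar_align(frame, correction_rotation):
    # epipolar alignment module: would warp the frame with the
    # correction rotation matrix obtained in S110/S510
    return frame

def projection_transform(frame, selection_matrix, translation, intrinsic):
    # projection transformation module: camera imaging plane ->
    # lens optical engine coordinate system -> projection screen pixels
    return frame

def display_video(left_frame, right_frame):
    # video display module: would draw the frames on the projection screen(s)
    pass

def see_through_step(params):
    left, right = acquire_video(0), acquire_video(1)
    left = epipolar_align(left, params["R_l"])
    right = epipolar_align(right, params["R_r"])
    left = projection_transform(left, params["R"], params["T_l"], params["K_vl"])
    right = projection_transform(right, params["R"], params["T_r"], params["K_vr"])
    display_video(left, right)
```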
The See-Through system of this scheme implements the video see-through function of the virtual reality device. The virtual reality device adjustment method of the embodiments of the present invention is realized through the combination of hardware devices and software modules; performing video see-through with the adjusted device ensures that the displayed see-through scene is consistent with the real world and improves the display effect.

According to the technical solution of this embodiment, the planar imaging points of the binocular camera on the camera imaging plane are rotated into a reference camera coordinate system parallel to the projection screen according to each camera's correction rotation matrix and the camera selection matrix, yielding the reference imaging points; the reference imaging points are then translated from the reference camera coordinate system into the lens optical engine coordinate system according to each camera's translation matrix, yielding the target imaging points. This achieves epipolar rectification of the binocular camera and parallel correction between each camera imaging plane and the projection screen; converting from the reference coordinate system into the lens optical engine coordinate system through the camera translation matrices then aligns the picture captured by the binocular camera with the picture projected by the lens optical engines, ensuring that the scene observed through the virtual reality device during video see-through is consistent with the real world. The computation needed to correct the difference is simplified, and the efficiency of adjusting the virtual reality device is improved.
Example Three
Fig. 9 is a schematic structural diagram of a virtual reality device adjusting apparatus according to a third embodiment of the present invention. The apparatus may be implemented in hardware and/or software and may be configured in an electronic device that carries the virtual reality device adjustment function.
Referring to fig. 9, the virtual reality device adjusting apparatus includes: an epipolar rectification module 910, a matrix acquisition module 920, a coordinate system conversion module 930, and a target pixel point determination module 940. Wherein:
the epipolar rectification module 910 is configured to perform epipolar rectification on the binocular camera to obtain a correction rotation matrix for each camera;
the matrix acquisition module 920 is configured to acquire the optical engine intrinsic matrix of each lens optical engine, and the camera selection matrix and the camera translation matrix of each camera;
the coordinate system conversion module 930 is configured to convert each planar imaging point of the binocular camera into the lens optical engine coordinate system according to each camera's correction rotation matrix, the camera selection matrix, and each camera's translation matrix, to obtain each target imaging point in the lens optical engine coordinate system;
and the target pixel point determination module 940 is configured to determine each target pixel point in the projection screen coordinate system according to each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device.
According to the technical solution of this embodiment, the apparatus performs epipolar rectification on the binocular camera to obtain each camera's correction rotation matrix, acquires the optical engine intrinsic matrix of each lens optical engine and the camera selection matrix and camera translation matrix of each camera, converts each planar imaging point of the binocular camera into the lens optical engine coordinate system to obtain each target imaging point, and determines each target pixel point in the projection screen coordinate system from each intrinsic matrix and each target imaging point, so as to adjust the virtual reality device. As described for the method embodiment, epipolar rectification makes the imaging planes of the binocular camera parallel and aligned, the correction between the camera imaging planes and the projection screen makes them parallel, and the real-world picture captured by the binocular camera is made consistent with the picture observed by the human eye in the projection screen coordinate system. Unlike image stitching, three-dimensional reconstruction, and similar methods that require heavy computation, this technical solution corrects the difference between the scene observed during video see-through and the real world with a small amount of computation.
In an optional embodiment of the present invention, the coordinate system conversion module 930 includes: a reference imaging point determination unit, configured to rotate the planar imaging points of the binocular camera on the camera imaging plane into a reference camera coordinate system parallel to the projection screen according to each camera's correction rotation matrix and the camera selection matrix, to obtain the reference imaging points; and a target imaging point determination unit, configured to translate the reference imaging points from the reference camera coordinate system into the lens optical engine coordinate system according to each camera's translation matrix, to obtain the target imaging points in the lens optical engine coordinate system.

In an optional embodiment of the invention, the line connecting the optical centers of the cameras is parallel and symmetric to the line connecting the optical centers of the lens optical engines.

In an optional embodiment of the present invention, the camera selection matrix is determined according to the actual angle between the projection screen coordinate system and the camera imaging plane of the binocular camera, together with the preset association between candidate angles and candidate camera selection matrices.

In an optional embodiment of the present invention, the epipolar rectification module 910 includes: a binocular camera calibration unit, configured to calibrate the binocular camera against checkerboard images to obtain each camera's extrinsic matrices; a relative matrix calculation unit, configured to calculate the binocular relative rotation matrix and the binocular relative translation matrix between the two cameras from the camera extrinsic matrices; and a correction rotation matrix determination unit, configured to perform epipolar rectification on each camera according to the binocular relative rotation matrix and the binocular relative translation matrix, to obtain each camera's correction rotation matrix.
In an optional embodiment of the present invention, after the target pixel point determination module 940 determines the target pixel points of the projection screen, the apparatus further includes: a target pixel point interpolation module, configured to interpolate the target pixel points according to the display parameters of the projection screen and the target pixel points.

In an optional embodiment of the present invention, after the target pixel point determination module 940 determines the target pixel points of the projection screen, the apparatus further includes: an object parameter labeling module, configured to label objects in the projection screen with parameters based on eye-movement recognition, the parameter being a distance and/or a size.

The virtual reality device adjusting apparatus provided by the embodiments of the present invention can execute the virtual reality device adjustment method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects.

In the technical solution of the present invention, the acquisition, storage, and use of the optical engine intrinsic matrices, camera selection matrices, camera translation matrices, and the like of each lens optical engine comply with relevant laws and regulations and do not violate public order and good morals.
Example Four
FIG. 10 illustrates a schematic diagram of an electronic device 1000 that may be used to implement embodiments of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 10, the electronic device 1000 includes at least one processor 1001 and a memory communicatively connected to the at least one processor 1001, such as a Read Only Memory (ROM) 1002 and a Random Access Memory (RAM) 1003. The memory stores a computer program executable by the at least one processor, and the processor 1001 can perform various appropriate actions and processes according to the computer program stored in the ROM 1002 or loaded from the storage unit 1008 into the RAM 1003. The RAM 1003 can also store various programs and data necessary for the operation of the electronic device 1000. The processor 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
A number of components in the electronic device 1000 are connected to the I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and a communication unit 1009 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1009 allows the electronic device 1000 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processor 1001 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of processor 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various application specific Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 1001 performs the various methods and processes described above, such as the virtual reality device adjustment method.
In some embodiments, the virtual reality device adjustment method can be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto electronic device 1000 via ROM 1002 and/or communications unit 1009. When the computer program is loaded into RAM 1003 and executed by processor 1001, one or more steps of the virtual reality device adjustment method described above may be performed. Alternatively, in other embodiments, the processor 1001 may be configured to perform the virtual reality device adjustment method in any other suitable manner (e.g., by way of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Computer programs for implementing the methods of the present invention can be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions and acts specified in the flowcharts and/or block diagrams to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in conventional physical hosts and Virtual Private Server (VPS) services.
It should be understood that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in a different order; no limitation is imposed here as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A virtual reality device adjustment method, the method comprising:
performing epipolar rectification on the binocular cameras to obtain a correction rotation matrix for each camera;
acquiring an optical engine intrinsic matrix of each lens optical engine, a camera selection matrix, and a camera translation matrix of each camera;
converting each planar imaging point of the binocular cameras into a lens optical engine coordinate system according to the correction rotation matrix of each camera, the camera selection matrix, and the camera translation matrix of each camera, to obtain each target imaging point in the lens optical engine coordinate system;
and determining each target pixel point in a projection screen coordinate system according to each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device.
2. The method according to claim 1, wherein the converting each planar imaging point of the binocular cameras into the lens optical engine coordinate system according to the correction rotation matrix of each camera, the camera selection matrix, and the camera translation matrix of each camera to obtain each target imaging point in the lens optical engine coordinate system comprises:
rotating each planar imaging point of the binocular cameras on the camera imaging plane into a reference camera coordinate system parallel to the projection screen according to the correction rotation matrix of each camera and the camera selection matrix, to obtain reference imaging points;
and translating the reference imaging points from the reference camera coordinate system into the lens optical engine coordinate system according to each camera translation matrix, to obtain the target imaging points in the lens optical engine coordinate system.
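
For illustration only (not part of the claimed subject matter), the following is a minimal numpy sketch of one plausible reading of the point mapping in claims 1 and 2. The names p_cam, R_rect, S, t, and K_engine are hypothetical stand-ins for a planar imaging point, the correction rotation matrix, the camera selection matrix, the camera translation matrix, and the optical engine intrinsic matrix; the homogeneous-coordinate conventions are assumptions, not taken from the specification.

```python
import numpy as np

def to_screen_pixel(p_cam, R_rect, S, t, K_engine):
    """Map one planar imaging point to a projection-screen pixel.

    p_cam    -- imaging point [x, y, 1] on the camera imaging plane
    R_rect   -- correction rotation matrix from epipolar rectification
    S        -- camera selection matrix (see claim 4)
    t        -- camera translation vector to the optical engine origin
    K_engine -- optical engine intrinsic matrix
    """
    # Claim 2, step 1: rotate into the reference camera coordinate
    # system parallel to the projection screen.
    p_ref = S @ (R_rect @ p_cam)
    # Claim 2, step 2: translate into the lens optical engine
    # coordinate system, giving the target imaging point.
    p_engine = p_ref + t
    # Claim 1, final step: project with the optical engine intrinsics
    # and normalize, giving the target pixel point.
    uv = K_engine @ p_engine
    return uv[:2] / uv[2]
```

Running such a mapping per eye over every source pixel would populate the corresponding projection screen, which is the adjustment the claims describe at the pixel level.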
3. The method of claim 2, wherein the line connecting the optical centers of the binocular cameras is parallel and symmetrical to the line connecting the optical centers of the lens optical engines.
4. The method of claim 1, wherein the camera selection matrix is determined according to the actual angle between the projection screen coordinate system and the camera imaging plane of the binocular cameras, together with a preset correlation between candidate angles and candidate camera selection matrices.
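
For illustration only, claim 4 reads as a nearest-candidate lookup; the sketch below assumes a hypothetical candidate table in which each preset angle is paired with a pre-computed selection matrix. The example angles and matrix values are invented for illustration and are not taken from the specification.

```python
import numpy as np

# Hypothetical candidate table: preset angle (degrees) -> candidate
# camera selection matrix. The entries below are illustrative only.
CANDIDATES = {
    0.0: np.eye(3),                    # imaging plane already aligned
    90.0: np.array([[0.0, -1.0, 0.0],  # quarter-turn about the optical axis
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]]),
}

def select_camera_matrix(actual_angle_deg: float) -> np.ndarray:
    """Pick the candidate matrix whose preset angle is nearest the
    measured angle between screen and camera imaging plane."""
    nearest = min(CANDIDATES, key=lambda a: abs(a - actual_angle_deg))
    return CANDIDATES[nearest]
```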
5. The method of claim 1, wherein the performing epipolar rectification on the binocular cameras to obtain the correction rotation matrix of each camera comprises:
calibrating the binocular cameras using checkerboard images to obtain a camera extrinsic matrix of each camera;
calculating a binocular relative rotation matrix and a binocular relative translation matrix between the binocular cameras according to the camera extrinsic matrices;
and performing epipolar rectification on each camera according to the binocular relative rotation matrix and the binocular relative translation matrix to obtain the correction rotation matrix of each camera.
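
For illustration only, with OpenCV the three steps of claim 5 collapse into two calls: cv2.stereoCalibrate folds the per-camera extrinsics from the checkerboard views into the binocular relative rotation R and translation T, and cv2.stereoRectify returns a correction rotation matrix per camera. The per-camera intrinsics (K1, d1, K2, d2), the detected corner lists, and image_size are assumed to exist already (e.g., from cv2.findChessboardCorners and single-camera calibration).

```python
import cv2

# obj_pts: 3-D checkerboard corner coordinates, one array per view;
# left_pts / right_pts: the matching 2-D detections from each camera.
# Intrinsics are held fixed so only the extrinsics are estimated.
_, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# R, T: binocular relative rotation / translation between the cameras.
# R_left, R_right: the per-camera correction (rectification) rotations.
R_left, R_right, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, d1, K2, d2, image_size, R, T)
```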
6. The method of claim 1, further comprising, after determining each target pixel point of the projection screen:
interpolating the target pixel points according to the display parameters of the projection screen and the target pixel points.
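
For illustration only, one common way to realize the interpolation of claim 6 is inverse mapping with bilinear sampling. The sketch assumes hypothetical map_x / map_y arrays that give, for every projection-screen pixel at the display's resolution, the fractional source coordinates produced by the adjustment above.

```python
import cv2

# map_x, map_y: float32 arrays shaped (screen_h, screen_w) holding the
# fractional source coordinates of each target pixel; bilinear
# interpolation fills in values between the computed target points.
screen = cv2.remap(source_image, map_x, map_y,
                   interpolation=cv2.INTER_LINEAR,
                   borderMode=cv2.BORDER_CONSTANT)
```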
7. The method of claim 1, further comprising, after determining each target pixel point of the projection screen:
marking parameters of objects on the projection screen based on eye-tracking technology, the parameters being distance and/or size.
8. A virtual reality device adjustment apparatus, the apparatus comprising:
an epipolar rectification module, configured to perform epipolar rectification on the binocular cameras to obtain a correction rotation matrix for each camera;
a matrix acquisition module, configured to acquire an optical engine intrinsic matrix of each lens optical engine, a camera selection matrix, and a camera translation matrix of each camera;
a coordinate system conversion module, configured to convert each planar imaging point of the binocular cameras into a lens optical engine coordinate system according to the correction rotation matrix of each camera, the camera selection matrix, and the camera translation matrix of each camera, to obtain each target imaging point in the lens optical engine coordinate system;
and a target pixel point determination module, configured to determine each target pixel point in a projection screen coordinate system according to each optical engine intrinsic matrix and each target imaging point, so as to adjust the virtual reality device.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the virtual reality device adjustment method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when executed, cause a processor to perform the virtual reality device adjustment method of any one of claims 1 to 7.
CN202211478252.6A 2022-11-23 2022-11-23 Virtual reality equipment adjusting method and device, electronic equipment and storage medium Pending CN115761005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211478252.6A CN115761005A (en) 2022-11-23 2022-11-23 Virtual reality equipment adjusting method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211478252.6A CN115761005A (en) 2022-11-23 2022-11-23 Virtual reality equipment adjusting method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115761005A (en) 2023-03-07

Family

ID=85336468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211478252.6A Pending CN115761005A (en) 2022-11-23 2022-11-23 Virtual reality equipment adjusting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115761005A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination