WO2020048461A1 - Three-dimensional stereoscopic display method, terminal device, and storage medium

Info

Publication number
WO2020048461A1
Authority: WIPO (PCT)
Prior art keywords: virtual, distortion, image, display content, optical
Application number: PCT/CN2019/104240
Other languages: English (en), French (fr)
Inventors: 黄嗣彬, 戴景文, 贺杰
Original Assignee: 广东虚拟现实科技有限公司
Priority claimed from CN201811020965.1A (CN110874135B)
Priority claimed from CN201811023501.6A (CN110874867A)
Priority claimed from CN201811023521.3A (CN110874868A)
Application filed by 广东虚拟现实科技有限公司
Priority to US16/731,094 (US11380063B2)
Publication of WO2020048461A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/22Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the stereoscopic type
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

Definitions

  • the present application relates to the field of display technology, and in particular, to a three-dimensional stereoscopic display method, a terminal device, and a storage medium.
  • Augmented reality (AR) is a technology that augments the user's perception of the real world through information provided by computer systems.
  • It superimposes content objects such as virtual objects, scenes, or system prompts onto real scenes to enhance or modify the perception of the real-world environment or of the data representing it.
  • When a device displays virtual content, how to realize the stereoscopic display of the virtual content together with the real scene is an urgent problem to be solved.
  • The embodiments of the present application propose a three-dimensional stereoscopic display method, a terminal device, and a storage medium, which can realize the aligned display of stereoscopic virtual content with real objects.
  • A three-dimensional stereoscopic display method is applied to a terminal device.
  • The method includes: obtaining target space coordinates of a target marker in real space; converting the target space coordinates into rendering coordinates in a virtual space; rendering a virtual object according to the rendering coordinates to obtain left-eye display content and right-eye display content; and displaying the left-eye display content and the right-eye display content, where the left-eye display content is projected to a first optical lens, the right-eye display content is projected to a second optical lens, and the first optical lens and the second optical lens are respectively configured to reflect the left-eye display content and the right-eye display content to the human eye.
  • A data processing method is applied to a terminal device.
  • The method includes: displaying a virtual marker; acquiring a first coordinate of a physical marker in a first spatial coordinate system when a user's alignment determination operation is detected, where the alignment determination operation is used to characterize the alignment of the virtual marker with the physical marker, and the virtual marker corresponds to the physical marker; obtaining a second coordinate of the virtual marker in a second spatial coordinate system; and obtaining conversion parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinate of the physical marker and the second coordinate of the virtual marker corresponding to the physical marker.
  • An optical distortion correction method is applied to a terminal device.
  • The method includes: obtaining coordinate data of an undistorted virtual image; obtaining a predistortion image to be displayed according to an optical distortion model and the coordinate data of the undistorted virtual image, where the optical distortion model is used to fit the optical distortion generated by an optical lens; and displaying the predistortion image, where the predistortion image is projected onto the optical lens and reflected by the optical lens to the human eye to form the distortion-free virtual image.
  • A terminal device includes a memory and a processor, the memory being coupled to the processor; the memory stores a computer program which, when executed by the processor, causes the processor to perform the method described above.
  • a computer-readable storage medium has a program code stored in the computer-readable storage medium, and the program code may be called by a processor to execute the method as described above.
  • FIG. 1 shows a schematic diagram of an augmented reality system according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a scenario provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of another scenario provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another scenario provided by an embodiment of the present application.
  • FIG. 5 shows a flowchart of a three-dimensional stereoscopic display method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an effect provided according to an embodiment of the present application.
  • FIG. 7 shows a flowchart of a three-dimensional stereoscopic display method according to another embodiment of the present application.
  • FIG. 8 is a schematic diagram of a usage scenario according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another usage scenario provided according to an embodiment of the present application.
  • FIG. 10 shows a flowchart of step S240 in the display method according to an embodiment of the present application.
  • an augmented reality system 10 includes a terminal device 100 and a marker 200.
  • the terminal device 100 is a head-mounted display device, for example, an integrated head-mounted display device, a head-mounted display device connected to an external electronic device, and the like.
  • the terminal device 100 may also be a smart terminal such as a mobile phone connected to an external / plug-in head-mounted display device, that is, the terminal device 100 is used as a processing and storage device of the head-mounted display device, and is inserted into or connected to the external head-mounted display device.
  • a virtual object is displayed on the head-mounted display device.
  • the terminal device 100 is provided with a camera.
  • The terminal device 100 can collect an image containing the marker 200 and identify the marker 200 in the image to obtain spatial position information of the marker 200 relative to the terminal device 100, such as its position and orientation, as well as the identity information of the marker 200.
  • The marker 200 has a pattern with a topological structure, where the topological structure refers to the connection relationship between sub-markers and feature points in the marker 200 and represents the identity information of the marker 200.
  • the marker 200 may also be other patterns, which is not limited herein, as long as it can be identified and tracked by the terminal device 100.
  • The head-mounted display device may include a first optical lens and a second optical lens, where the first optical lens and the second optical lens are respectively used to project light emitted from an image source to the observation positions of the left eye and the right eye, so that the display contents corresponding to the left eye and the right eye are presented to the user's left eye and right eye respectively to realize stereoscopic display.
  • the image source may be a display screen of a head-mounted display device, or a display screen of a smart terminal connected to the head-mounted display device, etc., and may be used to display an image.
  • The coordinates, in the real-space coordinate system, of the physical marker 306 identified by the tracking camera 301, together with the coordinates of the corresponding virtual marker in the virtual-space coordinate system, are used to obtain the transformation parameters between the coordinate system in real space and the coordinate system in virtual space.
  • A normal undistorted real image 311 forms a distorted virtual image 312 after being displayed through an optical lens.
  • To obtain an undistorted virtual image 314 instead, the undistorted virtual image 314 is pre-distorted to obtain a predistortion image 313 for display, and the predistortion image 313 is then displayed; after the predistortion image 313 passes through the optical distortion of the optical lens, the distortion-free virtual image 314 is formed.
  • When performing an aligned stereoscopic display of virtual content and physical content, the tracking target can be identified by the tracking camera 301 to obtain the coordinates of the tracking target in a real-space coordinate system with the tracking camera 301 as the origin; coordinate transformation is then performed, using the conversion parameters between the coordinate system in real space and the coordinate system in virtual space, to convert the coordinates of the tracking target in real space into rendering coordinates in the virtual-space coordinate system with the virtual camera 304 as the origin.
  • A left-eye display image and a right-eye display image are generated according to the rendering coordinates; the left-eye display image is pre-distorted to obtain a left-eye predistortion image, and the right-eye display image is pre-distorted to obtain a right-eye predistortion image.
  • The left-eye predistortion image and the right-eye predistortion image are displayed on the display screen 303 and then projected to the human eye through the optical lens 302 to form a distortion-free left-eye virtual image and a distortion-free right-eye virtual image, which are fused by the user's brain into a stereo image, realizing an aligned, distortion-free stereoscopic display of the virtual content and the physical content.
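  • As an overview of the pipeline above, the following minimal sketch (Python/NumPy; the conversion parameters, the baseline-offset stereo model, and all numeric values are illustrative assumptions, not details fixed by this application) chains the tracked coordinates through the coordinate conversion and per-eye stages:

```python
import numpy as np

# Hedged sketch of the pipeline described above. R and T stand in for
# previously calibrated conversion parameters; all values are examples.
R = np.eye(3)                      # rotation parameters
T = np.array([0.0, -0.05, 0.10])   # translation parameters

def to_virtual_space(p_real):
    # real-space coordinates (tracking camera 301 as origin) ->
    # rendering coordinates (virtual camera 304 as origin)
    return R @ p_real + T

def per_eye_positions(p_virtual, baseline=0.064):
    # offset by half an assumed interpupillary distance for each eye
    half = np.array([baseline / 2, 0.0, 0.0])
    return p_virtual + half, p_virtual - half   # left, right

p_target = np.array([0.10, 0.00, 0.50])  # tracked target position (metres)
left_pos, right_pos = per_eye_positions(to_virtual_space(p_target))
# left_pos / right_pos would then be rendered, pre-distorted, and displayed
```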
  • a three-dimensional stereoscopic display method is applied to a terminal device, and includes steps S110 to S140.
  • Step S110 Obtain the target space coordinates of the target marker in the real space.
  • The target space coordinates of the target marker in real space can be obtained; the target space coordinates can be used to represent the positional relationship between the target marker and the tracking camera on the head-mounted display device, and can also be used to indicate the positional relationship between the target marker and the terminal device.
  • The terminal device can recognize an image containing the target marker and obtain a recognition result of the target marker, so as to obtain the target space coordinates of the target marker in the first spatial coordinate system of real space, with the tracking camera of the terminal device as the origin.
  • the tracking camera is a camera used by the terminal device to track physical objects;
  • The recognition result of the target marker may include the spatial position of the target marker relative to the terminal device and the identity information of the target marker; the spatial position of the target marker relative to the terminal device may include the position and attitude information of the target marker with respect to the terminal device, where the attitude information is the orientation and rotation angle of the target marker with respect to the terminal device.
  • Step S120 Convert the target space coordinates to the rendering coordinates in the virtual space.
  • the target space coordinates may be converted into rendering coordinates in the virtual space, so as to generate display content corresponding to the virtual object.
  • Transforming the target space coordinates into rendering coordinates in the virtual space may include: reading the stored conversion parameters between the first spatial coordinate system and the second spatial coordinate system, where the second spatial coordinate system is the coordinate system in the virtual space, and using these conversion parameters to transform the target space coordinates into the rendering coordinates.
  • The transformation parameters can be used to align the first spatial coordinate system and the second spatial coordinate system, realizing the conversion between the two.
  • The transformation parameters are the parameters in the conversion formula between the first spatial coordinate system and the second spatial coordinate system; the rendering coordinates in the second spatial coordinate system of the virtual space are calculated by substituting the target space coordinates and the conversion parameters into the conversion formula.
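  • The substitution described above can be sketched with a homogeneous transform; the 4x4 layout below is an assumption for illustration, not a form fixed by this application:

```python
import numpy as np

# Hedged sketch: substituting the target space coordinates and the
# conversion parameters (rotation R, translation t) into a conversion
# formula written as a homogeneous 4x4 matrix.

def make_conversion_matrix(R, t):
    M = np.eye(4)
    M[:3, :3] = R   # rotation parameters
    M[:3, 3] = t    # translation parameters
    return M

R = np.eye(3)                            # example rotation parameters
t = np.array([0.0, -0.1, 0.3])           # example translation parameters
target = np.array([0.2, 0.0, 0.8, 1.0])  # target space coords (homogeneous)

rendering = make_conversion_matrix(R, t) @ target
print(rendering[:3])  # rendering coordinates in the virtual space
```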
  • The virtual camera is a camera used to simulate the perspective of the human eye in a 3D software system; according to the movement of the virtual camera (that is, the head movement), the movement of the virtual object in the virtual space is tracked and, through rendering, projected onto the optical lens to achieve stereoscopic display.
  • Step S130 Obtain data of the virtual object to be displayed, and render the virtual object according to the data of the virtual object and the rendering coordinates to obtain the left-eye display content and the right-eye display content of the virtual object.
  • The data of the virtual object to be displayed can be obtained, and the virtual object is rendered according to the data of the virtual object and the rendering coordinates as described above.
  • the data corresponding to the virtual object to be displayed may include model data of the virtual object, and the model data is data used to render the virtual object.
  • the color, model vertex coordinates, model outline data, and the like used to establish a model corresponding to the virtual object may be included.
  • the virtual camera includes a left virtual camera and a right virtual camera.
  • the left virtual camera is used to simulate the left eye of the human eye
  • the right virtual camera is used to simulate the right eye of the human eye.
  • Rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain the left-eye display content and the right-eye display content may include: constructing and rendering the virtual object according to the data of the virtual object; and, according to the rendering coordinates, calculating the pixel coordinates of the virtual object in the left virtual camera and in the right virtual camera respectively, to obtain the left-eye display content and the right-eye display content.
  • With the data used for rendering, the virtual object can be constructed and rendered.
  • the space coordinates in the second space coordinate system in the virtual space of each point of the virtual object can be obtained.
  • the pixel coordinates corresponding to each point of the virtual object in the left virtual camera can be obtained.
  • the pixel value of each point and the pixel coordinates corresponding to each point in the left virtual camera can be used to obtain the left-eye display content.
  • the pixel coordinates corresponding to each point of the virtual object in the right virtual camera can be obtained.
  • the right-eye display content can be obtained.
  • the left-eye display content and the right-eye display content with parallax corresponding to the virtual object can be obtained, thereby realizing the stereoscopic display effect during display.
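  • The per-eye pixel computation can be sketched with a pinhole projection; the focal length, principal point, and baseline below are illustrative assumptions rather than parameters specified by this application:

```python
import numpy as np

# Hedged sketch: pixel coordinates of one virtual-object point in the
# left and right virtual cameras, using a simple pinhole model.

F = 500.0               # focal length in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point for an assumed 640x480 image
BASELINE = 0.064        # spacing between left/right virtual cameras (m)

def pixel_coords(point, camera_x):
    # express the point in the chosen virtual camera's frame, project it
    x, y, z = point - np.array([camera_x, 0.0, 0.0])
    return F * x / z + CX, F * y / z + CY

vertex = np.array([0.05, 0.02, 0.60])           # one model vertex (metres)
left_px = pixel_coords(vertex, -BASELINE / 2)   # left virtual camera
right_px = pixel_coords(vertex, +BASELINE / 2)  # right virtual camera
# the horizontal offset between left_px and right_px is the parallax
```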
  • Step S140 Display the left-eye display content and the right-eye display content. The left-eye display content is projected to the first optical lens and the right-eye display content is projected to the second optical lens; the first optical lens and the second optical lens are used to reflect the left-eye display content and the right-eye display content to the human eye, respectively.
  • the left-eye display content and the right-eye display content may be displayed.
  • The left-eye display content can be projected onto the first optical lens of the head-mounted display device and, after being reflected by the first optical lens, is incident on the user's left eye; the right-eye display content can be projected onto the second optical lens of the head-mounted display device and, after being reflected by the second optical lens, enters the user's right eye.
  • When the left-eye display content and the right-eye display content are displayed, the left-eye display content is projected to the user's left eye and the right-eye display content is projected to the user's right eye, so that the user sees the left-eye display content and the right-eye display content with parallax, which are fused by the user's brain into stereo display content, thereby achieving the aligned display of the virtual object with the target marker and the stereoscopic display of the virtual object. For example, as shown in FIG. 6, after the above left-eye display content and right-eye display content are displayed, the three-dimensional virtual object 900 and the target marker 700 can be seen aligned and displayed.
  • a three-dimensional stereoscopic display method according to another embodiment of the present application is applied to a terminal device.
  • the method includes steps S210 to S290.
  • Step S210 Display the virtual marker.
  • the transformation parameters between the spatial coordinate systems need to be obtained.
  • virtual markers can be displayed.
  • A physical marker can be set in the real scene within the visual range of the camera of the terminal device, so as to subsequently realize the aligned display of the virtual marker and the physical marker.
  • the virtual marker may be stored in the terminal device in advance, and the virtual marker is the same as the physical marker, that is, the pattern and size of the virtual marker are the same as those of the physical marker.
  • The left-eye content corresponding to the virtual marker is projected to the left-eye optical lens and reflected to the user's left eye through the left-eye optical lens, and the right-eye content corresponding to the virtual marker is projected to the right-eye optical lens and reflected to the user's right eye through the right-eye optical lens, realizing the stereoscopic display of the virtual marker.
  • When the user looks at the displayed virtual marker, he can see the virtual marker superimposed on the real scene where the physical marker is located.
  • the terminal device is a head-mounted display device, or the terminal device is provided on the head-mounted display device.
  • The parameters of the optical distortion correction of the head-mounted display device can be determined first to ensure the normal display of the virtual marker, that is, that the virtual marker is displayed without distortion.
  • By displaying a preset image, such as a checkerboard image, the user can determine whether the parameters of the optical distortion correction are accurate.
  • When the terminal device detects the user's determining operation, it can conclude that the current optical distortion correction parameters are accurate.
  • The position of the physical marker may then be moved until the virtual marker is observed to be aligned with the physical marker, and an alignment determination operation is made on the terminal device.
  • the user can watch the virtual marker superimposed on the real scene where the physical marker is located.
  • The virtual marker and the physical marker may be misaligned in the virtual space: for example, as shown in FIG. 8, the physical marker 500 is not aligned with the virtual marker 600. The virtual marker and the physical marker may also be aligned, with the physical marker 500 aligned with the virtual marker 600.
  • Alignment means that the positions of the virtual marker and the physical marker in the virtual space are exactly the same; it can also be understood as the virtual marker and the physical marker overlapping in the visual perception of the user.
  • the virtual marker can be aligned with the physical marker by controlling the movement of the marker.
  • the physical marker is disposed on the controllable moving mechanism, and the controllable moving mechanism is connected to the terminal device.
  • the display method may further include: when a user's movement control operation is detected, sending a movement instruction to the controllable movement mechanism, and the movement instruction is used to instruct the controllable movement mechanism to move according to the movement control operation.
  • the user can make a movement control operation on the terminal device, and the movement control operation is used to control the movement of the controllable movement mechanism to drive the marker to move.
  • a movement instruction can be sent to the controllable movement mechanism, so that the controllable movement mechanism moves according to the movement control operation, and finally the purpose of aligning the physical marker with the virtual marker is achieved.
  • the above mobile control operation may be an operation made through a key of a terminal device or a touch screen, or an operation made through a controller connected to the terminal device, and is not specifically limited.
  • Step S220 When the user's alignment determination operation is detected, acquire the first coordinates of the physical marker in the first spatial coordinate system, where the alignment determination operation is used to characterize the alignment of the virtual marker with the physical marker, and the virtual marker corresponds to the physical marker.
  • the position of the physical marker may be moved until it is observed that the virtual marker is aligned with the physical marker, and an alignment determination operation is performed on the terminal device.
  • When the user observes that the virtual marker is aligned with the physical marker, an alignment determination operation can be performed on the terminal device.
  • The alignment determination operation is used to characterize the alignment of the virtual marker and the physical marker; at this point, the aligned display of the virtual marker with the physical marker is achieved.
  • the above-mentioned alignment determination operation may be an operation performed through a key of a terminal device or a touch screen, or an operation performed through a controller connected to the terminal device, and is not specifically limited.
  • The terminal device can detect the alignment determination operation made by the user and determine that the virtual marker is aligned with the physical marker at this time, so as to determine the transformation parameters between the first spatial coordinate system and the second spatial coordinate system from the coordinates of the current physical marker in the first spatial coordinate system of real space and the coordinates of the currently displayed virtual marker in the second spatial coordinate system of virtual space.
  • the first spatial coordinate system is a spatial coordinate system with a tracking camera as an origin in real space
  • the second spatial coordinate system is a spatial coordinate system with a virtual camera as an origin in virtual space
  • the tracking camera refers to the camera of the terminal device
  • The virtual camera is a camera used to simulate the perspective of the human eye in the 3D software system; according to the movement of the virtual camera (that is, the head movement), the movement of the virtual object in the virtual space is tracked and, through rendering, projected onto the optical lens to realize stereoscopic display.
  • the first coordinates of the physical marker in the first spatial coordinate system can be obtained.
  • The terminal device may identify an image including the physical marker to obtain a recognition result of the physical marker, so as to obtain the first coordinates of the physical marker in the first spatial coordinate system.
  • The first coordinates may include the spatial position of the physical marker relative to the terminal device and the identity information of the physical marker; the spatial position of the physical marker relative to the terminal device may include the position and attitude information of the physical marker relative to the terminal device, where the attitude information is the orientation and rotation angle of the physical marker relative to the terminal device.
  • The conversion parameters between the first spatial coordinate system and the second spatial coordinate system are obtained according to the first coordinates of the physical marker in the first spatial coordinate system and the second coordinates of the virtual marker in the second spatial coordinate system.
  • When an alignment determination operation used to characterize the alignment of multiple physical markers with multiple virtual markers is detected, the first coordinates of all physical markers in the first spatial coordinate system are obtained.
  • Before using the camera of the terminal device to collect an image containing a physical marker to determine the first coordinates of the physical marker in the first spatial coordinate system, the camera may also be calibrated to ensure that accurate coordinates of the physical marker in the first spatial coordinate system are acquired.
  • Step S230 Acquire a second coordinate of the virtual marker in the second spatial coordinate system.
  • The terminal device further needs to obtain the second coordinates of the virtual marker in the second spatial coordinate system, which can be obtained by tracking the virtual marker with the virtual camera; in this way, the second coordinates corresponding to a plurality of virtual markers in the second spatial coordinate system are acquired, the plurality of virtual markers corresponding one-to-one to the plurality of physical markers.
  • For a plurality of physical markers, the first coordinates of each physical marker and the second coordinates of the virtual marker corresponding to it may be stored as coordinate pairs for the subsequent calculation of the transformation parameters between the first spatial coordinate system and the second spatial coordinate system.
  • For example, if the physical marker A corresponds to the virtual marker a and the physical marker B corresponds to the virtual marker b, the first coordinate of the physical marker A and the second coordinate of the virtual marker a are stored as one coordinate pair, and the first coordinate of the physical marker B and the second coordinate of the virtual marker b are stored as another coordinate pair.
  • Step S240 Obtain a conversion parameter between the first spatial coordinate system and the second spatial coordinate system based on the first coordinates of the physical marker and the second coordinates of the virtual marker corresponding to the physical marker.
  • a conversion parameter between the first spatial coordinate system and the second spatial coordinate system can be calculated.
  • the conversion parameters between the first spatial coordinate system and the second spatial coordinate system may include rotation parameters and translation parameters.
  • step S240 may include:
  • Step S241 Establish a conversion formula between the first space coordinate system and the second space coordinate system according to the attitude transformation algorithm.
  • the conversion formula includes a rotation parameter and a translation parameter.
  • When calculating the conversion parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinates of the physical marker and the second coordinates of the virtual marker, a conversion formula between the first spatial coordinate system and the second spatial coordinate system is first established.
  • The attitude transformation algorithm may include a rigid body transformation estimation algorithm, a PnP algorithm, a DCM algorithm, or a POSIT algorithm; the specific attitude transformation algorithm is not limited.
  • the above conversion formula represents a conversion relationship between the coordinates in the first spatial coordinate system and the coordinates in the second spatial coordinate system, and the conversion formula includes a conversion parameter.
  • The above conversion formula may express the coordinates in the second spatial coordinate system in terms of the coordinates in the first spatial coordinate system and the transformation parameters, or express the coordinates in the first spatial coordinate system in terms of the coordinates in the second spatial coordinate system and the transformation parameters.
  • The above conversion formula may be: a matrix composed of the coordinates in the second spatial coordinate system equals the product of a matrix composed of the coordinates in the first spatial coordinate system and a matrix composed of the conversion parameters, where the matrix composed of the conversion parameters includes the rotation parameters and the translation parameters.
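  • In conventional notation, such a conversion formula can be written as below; this is a reconstruction consistent with the description above, since the formula itself is not printed in this text:

```latex
P_2 = R\,P_1 + T
```

  • Here P_1 is a coordinate in the first spatial coordinate system, P_2 is the corresponding coordinate in the second spatial coordinate system, R holds the rotation parameters, and T holds the translation parameters.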
  • Step S242 Obtain a number of coordinate pairs greater than a preset value, and substitute the obtained coordinate pairs into the conversion formula to obtain the rotation parameters and translation parameters between the first spatial coordinate system and the second spatial coordinate system.
  • The first coordinates of the physical markers and the second coordinates of the virtual markers corresponding to the physical markers can be used to solve for the conversion parameters in the above conversion formula.
  • Specifically, at least the preset number of stored coordinate pairs of first coordinates and corresponding second coordinates may be read and substituted into the conversion formula, and the conversion parameters in the conversion formula are solved to obtain the rotation parameters and translation parameters.
  • The preset value is determined according to the conversion formula established by the specific attitude transformation algorithm; for example, when the conversion formula is established according to a rigid body transformation estimation algorithm, the preset value may be set to 4.
  • In each coordinate pair, the first coordinate in the first spatial coordinate system corresponds to the second coordinate in the second spatial coordinate system.
  • When a coordinate pair is substituted into the above conversion formula, the first coordinate in the pair is substituted into the matrix composed of the coordinates in the first spatial coordinate system, and the second coordinate is substituted into the matrix composed of the coordinates in the second spatial coordinate system.
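  • As one concrete instance of the rigid body transformation estimation named above (an illustration, not the procedure prescribed by this application), the rotation and translation parameters can be solved from at least four stored coordinate pairs with the SVD-based Kabsch method:

```python
import numpy as np

# Hedged sketch: estimate rotation R and translation T such that
# second ~= R @ first + T, from stored coordinate pairs.

def estimate_conversion(first_coords, second_coords):
    P = np.asarray(first_coords, dtype=float)   # N x 3, first system
    Q = np.asarray(second_coords, dtype=float)  # N x 3, second system
    cp, cq = P.mean(axis=0), Q.mean(axis=0)     # centroids
    H = (P - cp).T @ (Q - cq)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # rotation parameters
    T = cq - R @ cp                             # translation parameters
    return R, T

# at least four coordinate pairs, matching the preset value suggested above
first = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
second = [(0.1, 0, 0), (1.1, 0, 0), (0.1, 1, 0), (0.1, 0, 1)]
R, T = estimate_conversion(first, second)   # R ~ identity, T ~ (0.1, 0, 0)
```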
  • The display method may further include: fine-tuning the first camera parameters of the tracking camera and/or the second camera parameters of the virtual camera.
  • When the conversion parameters are used to display virtual content, the virtual content may not be completely aligned with the real content; therefore, some fine adjustments can be made to the first camera parameters of the tracking camera and/or the second camera parameters of the virtual camera, so that the virtual content is completely aligned with the real content. Specifically, the tilt angle and depth of the tracking camera and/or the virtual camera can be adjusted.
  • Step S250 Acquire the target space coordinates of the target marker in the first space coordinate system.
  • the alignment display of the virtual content and the real content can be achieved according to the conversion parameter.
  • the target space coordinates of the target marker in the first spatial coordinate system can be obtained, that is, the coordinates of the target marker in the real-space space coordinate system with the tracking camera as the origin.
  • the target marker is used for displaying the virtual object, that is, the alignment display of the virtual object and the target marker.
  • the target marker is similar to the physical marker described above.
  • the terminal device can acquire the target space coordinate of the target marker in the first spatial coordinate system by collecting an image containing the target marker and then identifying the image containing the target marker.
  • Step S260 Use the conversion parameters to convert the target space coordinates into rendering coordinates in the second spatial coordinate system.
  • The obtained conversion parameters can be used to convert the target space coordinates of the target marker in the first spatial coordinate system into coordinates in the second spatial coordinate system, that is, coordinates in the virtual-space coordinate system with the virtual camera as the origin, so as to generate the display content of the virtual object according to these coordinates.
  • Specifically, the target space coordinates of the target marker in the first spatial coordinate system and the above conversion parameters can be substituted into the conversion formula between the first spatial coordinate system and the second spatial coordinate system to calculate the rendering coordinates in the second spatial coordinate system.
  • Step S270 Acquire the data of the virtual object to be displayed, and render the virtual object according to the data of the virtual object and the rendering coordinates to obtain the left-eye display content and the right-eye display content of the virtual object.
  • the data of the virtual object to be displayed can be obtained, and the virtual object is rendered according to the data of the virtual object and the rendering coordinates.
  • the data corresponding to the virtual object to be displayed may include model data of the virtual object, and the model data is data used to render the virtual object.
  • the model data may include color, model vertex coordinates, model outline data, and the like used to establish a model corresponding to the virtual object.
  • the left-eye display content and the right-eye display content corresponding to the virtual object with parallax can be obtained, so as to realize the stereoscopic display effect during display.
  • Step S280 According to the optical distortion model, the left-eye display content, and the right-eye display content, obtain a left-eye predistortion image corresponding to the left-eye display content and a right-eye predistortion image corresponding to the right-eye display content; the optical distortion model is used to fit the optical distortion caused by the optical lenses.
  • When the head-mounted display device displays the display content, the displayed image is distorted by the optical system of the head-mounted display device. If the left-eye display content and the right-eye display content are displayed directly, the user will see a virtual image of a distorted virtual object; for example, in FIG. 3, the real image 311 forms a distorted virtual image 312 after being displayed.
  • the left-eye display content and the right-eye display content may be pre-distorted and displayed, so that the user can see the virtual image of the distortion-free virtual object.
  • The left-eye display content may be subjected to reverse distortion processing according to the stored optical distortion model to obtain the left-eye predistortion image corresponding to the left-eye display content, and the right-eye display content may likewise be subjected to reverse distortion processing according to the optical distortion model to obtain the right-eye predistortion image corresponding to the right-eye display content.
  • the optical distortion model is used to fit the optical distortion of the optical lens of the head-mounted display device.
  • The optical distortion model can take the form X = A(I1 · I2), Y = B(I3 · I4), where X is the abscissa of the real image, Y is the ordinate of the real image, A is the first distortion parameter, B is the second distortion parameter, and I1 · I2 and I3 · I4 denote the products of the fitting matrices described below.
  • I1 is a matrix fitting the lateral radial distortion of the optical lens or the lateral barrel distortion of the optical lens; I2 is a matrix fitting the lateral tangential distortion of the optical lens; I3 is a matrix fitting the longitudinal radial distortion of the optical lens or the longitudinal barrel distortion of the optical lens; and I4 is a matrix fitting the longitudinal tangential distortion of the optical lens. I1 includes the abscissa of the virtual image, I2 includes the abscissa and ordinate of the virtual image, I3 includes the ordinate of the virtual image, and I4 includes the abscissa and ordinate of the virtual image.
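  • Since the exact contents of I1 through I4 are not reproduced in this text, the sketch below assumes simple four-element polynomial bases matching their described shapes; the basis choices, coefficient values, and function name are illustrative only:

```python
import numpy as np

# Hedged sketch of evaluating a distortion model of the form
# X = A(I1 * I2), Y = B(I3 * I4), with assumed four-element bases.

def distort(x, y, A, B):
    I1 = np.array([1.0, x, x**2, x**3])    # abscissa terms (assumed)
    I2 = np.array([1.0, x * y, y, y**2])   # abscissa/ordinate terms (assumed)
    I3 = np.array([1.0, y, y**2, y**3])    # ordinate terms (assumed)
    I4 = np.array([1.0, x * y, x, x**2])   # abscissa/ordinate terms (assumed)
    X = A @ (I1 * I2)   # real-image abscissa
    Y = B @ (I3 * I4)   # real-image ordinate
    return X, Y

A = np.array([0.0, 0.9, 0.0, 0.05])  # example first distortion parameters
B = np.array([0.0, 0.9, 0.0, 0.05])  # example second distortion parameters
print(distort(0.2, -0.1, A, B))
```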
  • The correspondence between optical distortion models and the optical parameters of optical lenses can also be stored, that is, the optical distortion models corresponding to different optical parameters are stored; when an image is to be displayed, the corresponding optical distortion model can be read according to the optical parameters of the optical lens in use.
  • The coordinate data of the left-eye display content and the right-eye display content is used as virtual-image coordinate data and substituted into the optical distortion model to calculate the corresponding screen coordinate data, from which the left-eye predistortion image and the right-eye predistortion image to be displayed are generated.
  • When non-integer-valued coordinates exist in the screen coordinate data obtained from the optical distortion model, they need to be converted into integer-valued coordinates in order to generate the predistortion image. Pixel interpolation can therefore be used to convert non-integer-valued coordinates in the screen data into integer-valued coordinates; specifically, the pixel coordinates closest to each non-integer-valued coordinate may be obtained and substituted for it.
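  • A minimal illustration of the nearest-pixel replacement described above:

```python
import numpy as np

# Replace non-integer screen coordinates with the nearest integer
# pixel coordinates, as described in the paragraph above.

def snap_to_pixels(coords):
    # coords: N x 2 array of possibly non-integer screen coordinates
    return np.rint(np.asarray(coords, dtype=float)).astype(int)

screen_coords = np.array([[12.3, 40.8], [99.5, 7.1]])
print(snap_to_pixels(screen_coords))  # [[ 12  41] [100   7]]
```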
  • Step S290 Display the left-eye predistortion image and the right-eye predistortion image; the left-eye predistortion image is projected to the first optical lens and reflected to the human eye through the first optical lens to form the distortion-free left-eye display content, and the right-eye predistortion image is projected to the second optical lens and reflected to the human eye through the second optical lens to form the distortion-free right-eye display content.
  • After the predistortion is performed and the left-eye predistortion image and the right-eye predistortion image are obtained, they can be displayed; the left-eye predistortion image is then projected onto the first optical lens and, after being reflected by the first optical lens, is incident on the user's left eye.
  • The optical distortion of the first optical lens applies forward distortion to the left-eye predistortion image obtained through the reverse distortion processing; after the reverse distortion and the forward distortion cancel each other, the above distortion-free left-eye display content is formed.
  • Similarly, when the right-eye predistortion image is projected onto the second optical lens, it is incident on the user's right eye after being reflected by the second optical lens, forming the above distortion-free right-eye display content. The user can therefore see the distortion-free left-eye display content and the distortion-free right-eye display content with parallax, which are fused by the user's brain into distortion-free stereoscopic display content, so that the virtual object is displayed aligned with the target marker, without distortion and in stereo. For example, referring to FIG. 3 again, the predistortion image 313 is displayed to obtain the distortion-free virtual image 314, and the distortion-free virtual image 314 is consistent with the real image 311.
  • The optical distortion model may be obtained before the left-eye display content and the right-eye display content are pre-distorted with it. The step of constructing the optical distortion model may include: reading the optical manufacturer data of the optical lens, where the optical manufacturer data includes the coordinate data of an experimental image and the coordinate data of the distorted virtual image corresponding to the experimental image; fitting a polynomial to the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model; and storing the optical distortion model.
  • The optical manufacturer data is optical data provided by the manufacturer of the optical lens, that is, data obtained by the manufacturer by testing the optical lens with the experimental image before the optical lens leaves the factory; the optical manufacturer data may include the experimental image and the distortion data measured after the experimental image is displayed.
  • The optical manufacturer data lists, for each sample point, the coordinates in the experimental image and the corresponding coordinates in the distorted virtual image.
  • The coordinate data of the distorted virtual image can also be adjusted according to display parameters, where the display parameters include at least one of the zoom ratio of the optical lens, the screen size, the pixel size, and the optical center position.
  • The zoom ratio, screen size, pixel size, and optical center position corresponding to the optical lens may be obtained, and the coordinate data of the distorted virtual image corresponding to the experimental image is then adjusted according to at least one of these parameters, so that each point of the experimental image corresponds to each point of the distorted image with high accuracy.
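  • One such adjustment can be sketched as converting the manufacturer's coordinates to screen pixels; the zoom ratio, pixel size, and optical-center values below are illustrative assumptions:

```python
# Hedged sketch: adjust distorted-virtual-image coordinates (given in
# millimetres) using display parameters, yielding screen pixel coordinates.

ZOOM = 0.5               # zoom ratio of the optical lens (assumed)
PIXEL_SIZE_MM = 0.06     # physical size of one screen pixel in mm (assumed)
CENTER = (640.0, 360.0)  # optical center position in pixels (assumed)

def mm_to_pixels(x_mm, y_mm):
    # scale the millimetre coordinates and re-center on the optical center
    return (x_mm * ZOOM / PIXEL_SIZE_MM + CENTER[0],
            y_mm * ZOOM / PIXEL_SIZE_MM + CENTER[1])

print(mm_to_pixels(3.0, -1.5))  # (665.0, 347.5)
```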
  • Fitting a polynomial to the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model may include: calculating the first distortion parameter and the second distortion parameter of the optical distortion model according to the coordinate data of the experimental image and the coordinate data of the distorted virtual image corresponding to the experimental image, and constructing the optical distortion model based on the first distortion parameter and the second distortion parameter.
  • The first distortion parameter is a coefficient fitting the distortion of the optical lens in the first direction, and the second distortion parameter is a coefficient fitting the distortion of the optical lens in the second direction.
  • The first direction may be horizontal and the second direction vertical, or the first direction may be vertical and the second direction horizontal.
  • The first polynomial is obtained by multiplying the matrix fitting the lateral radial distortion of the optical lens by the matrix fitting the lateral tangential distortion of the optical lens, or by multiplying the matrix fitting the lateral barrel distortion of the optical lens by the matrix fitting the lateral tangential distortion of the optical lens.
  • The matrix fitting the lateral radial distortion of the optical lens, or the lateral barrel distortion of the optical lens, can be a four-row, one-column matrix composed of the abscissa of the virtual image, and the matrix fitting the lateral tangential distortion is a four-row, one-column matrix composed of the abscissa and ordinate of the virtual image.
  • The second polynomial is obtained by multiplying the matrix fitting the longitudinal radial distortion of the optical lens by the matrix fitting the longitudinal tangential distortion of the optical lens, or by multiplying the matrix fitting the longitudinal barrel distortion of the optical lens by the matrix fitting the longitudinal tangential distortion of the optical lens.
  • The matrix fitting the longitudinal radial distortion of the optical lens, or the longitudinal barrel distortion of the optical lens, can be a four-row, one-column matrix composed of the ordinate of the virtual image, and the matrix fitting the longitudinal tangential distortion is a four-row, one-column matrix composed of the abscissa and ordinate of the virtual image.
  • The coordinate data of the experimental image and the coordinate data of the distorted virtual image adjusted according to the optical parameters may be substituted into the first expression and the second expression, and the first distortion parameter in the first expression and the second distortion parameter in the second expression are solved to obtain the first distortion parameter and the second distortion parameter.
  • The first distortion parameter can then be substituted into the first expression and the second distortion parameter into the second expression, thereby obtaining the optical distortion model, which includes the above first expression and second expression.
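  • The solving step can be sketched as an ordinary least-squares fit over the manufacturer coordinate pairs; the basis functions below reuse the assumed I1 through I4 from the earlier sketch and remain illustrative:

```python
import numpy as np

# Hedged sketch: fit the first (A) and second (B) distortion parameters
# from (virtual image point, distorted real image point) pairs.

def fit_distortion(virtual_pts, real_pts):
    virtual_pts = np.asarray(virtual_pts, dtype=float)
    real_pts = np.asarray(real_pts, dtype=float)
    rows_x, rows_y = [], []
    for x, y in virtual_pts:
        I1 = np.array([1.0, x, x**2, x**3])
        I2 = np.array([1.0, x * y, y, y**2])
        I3 = np.array([1.0, y, y**2, y**3])
        I4 = np.array([1.0, x * y, x, x**2])
        rows_x.append(I1 * I2)   # one design-matrix row for A
        rows_y.append(I3 * I4)   # one design-matrix row for B
    A, *_ = np.linalg.lstsq(np.array(rows_x), real_pts[:, 0], rcond=None)
    B, *_ = np.linalg.lstsq(np.array(rows_y), real_pts[:, 1], rcond=None)
    return A, B

virtual = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2)]
real = [(0, 0), (1.02, 0), (0, 0.99), (1.03, 1.01), (2.1, 1.0), (1.0, 2.05)]
A, B = fit_distortion(virtual, real)
```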
  • The obtained optical distortion model may also be verified to ensure its accuracy; therefore, the display method may further include: verifying the optical distortion model.
  • Verifying the optical distortion model may include: using the coordinate data of an original image for verifying the optical distortion model together with the optical distortion model to obtain a verification image to be displayed, and displaying the verification image; using an image acquisition device at the viewing position to capture the verification image displayed by the terminal device to obtain an image containing the verification image; determining whether the parameters of the image containing the verification image meet a preset condition; and, if the preset condition is satisfied, storing the optical distortion model.
  • The terminal device stores in advance an original image for verifying the optical distortion model.
  • The original image can be a checkerboard.
  • If the original image is displayed directly, the displayed virtual image is a distorted virtual image corresponding to the original image; if the original image is pre-distorted using the optical distortion model before display and the displayed virtual image is a virtual image without distortion, it means that the optical distortion model is accurate.
  • The optical distortion model obtained above may be used to perform the inverse operation on the coordinate data of the original image to obtain the verification image to be displayed corresponding to the original image.
  • The coordinate data of the original image is used as the coordinate data of the virtual image, the virtual image in this case being a virtual image without distortion; from this, the screen coordinate data of the verification image to be displayed can be obtained, and together with the pixel value of each pixel of the original image, the verification image to be displayed can be generated.
  • The verification image is an image pre-distorted by the optical distortion model.
  • The verification image may be displayed, and the displayed verification image may then be captured with an image acquisition device at the viewing position to obtain an image including the displayed verification image.
  • For example, an industrial camera can be set at the human-eye viewing position in the helmet to capture the displayed verification image.
  • It is then determined whether the aspect ratio of the verification image in the captured image matches a preset aspect ratio and whether its linearity matches a preset linearity.
  • If the aspect ratio is the preset aspect ratio and the linearity is the preset linearity, it can be determined that the obtained optical distortion model is correct, so the optical distortion model can be stored to achieve distortion correction during display.
  • The confirmation of the optical distortion model may also be performed after a model determination operation made by the user is detected once the verification image has been displayed; the model determination operation is used to characterize that the linearity and aspect ratio of the verification image are normal and that the left and right angles of view match, so it is determined that the optical distortion model is correct and the optical distortion model is stored.
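  • The two preset conditions can be checked as sketched below; corner detection, the threshold values, and the row-wise line-fit test are assumptions outside this text:

```python
import numpy as np

# Hedged sketch: check the captured verification image's aspect ratio
# against a preset ratio, and its linearity by fitting a line to each
# detected row of corners and bounding the residuals.

def verify(corner_rows, width, height, preset_ratio,
           ratio_tol=0.01, line_tol=1.0):
    # aspect-ratio condition
    if abs(width / height - preset_ratio) > ratio_tol:
        return False
    # linearity condition
    for row in corner_rows:                     # row: N x 2 corner points
        pts = np.asarray(row, dtype=float)
        k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
        if np.max(np.abs(pts[:, 1] - (k * pts[:, 0] + b))) > line_tol:
            return False
    return True

rows = [[(0, 0), (1, 0.01), (2, -0.01)], [(0, 1), (1, 1.0), (2, 1.02)]]
print(verify(rows, width=1280, height=720, preset_ratio=16 / 9))  # True
```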
  • The present application provides a data processing method applied to a terminal device, including: displaying a virtual marker; acquiring a first coordinate of a physical marker in a first spatial coordinate system when a user's alignment determination operation is detected, where the alignment determination operation is used to characterize the alignment of the virtual marker with the physical marker and the virtual marker corresponds to the physical marker; obtaining a second coordinate of the virtual marker in a second spatial coordinate system; and obtaining transformation parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinate of the physical marker and the second coordinate of the virtual marker corresponding to the physical marker.
  • the first spatial coordinate system is a spatial coordinate system with a tracking camera as an origin in real space
  • the second spatial coordinate system is a spatial coordinate system with a virtual camera as an origin in virtual space.
  • The above data processing method further includes: fine-tuning the first camera parameters of the tracking camera and/or the second camera parameters of the virtual camera.
  • The above data processing method further includes: storing the first coordinates of the physical marker and the second coordinates of the virtual marker corresponding to the physical marker as a coordinate pair.
  • The transformation parameters include rotation parameters and translation parameters, and obtaining the conversion parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinates of the physical marker and the second coordinates of the virtual marker corresponding to the physical marker includes: establishing a conversion formula between the first spatial coordinate system and the second spatial coordinate system according to the attitude transformation algorithm, the conversion formula including the rotation parameters and the translation parameters; and obtaining a number of coordinate pairs greater than a preset value and substituting the obtained coordinate pairs into the conversion formula to obtain the rotation parameters and the translation parameters between the first spatial coordinate system and the second spatial coordinate system.
  • The physical marker is disposed on a controllable moving mechanism, and the controllable moving mechanism is connected to the terminal device; before the first coordinates of the physical marker in the first spatial coordinate system are acquired upon detection of the user's alignment determination operation, the above data processing method further includes: when a user's movement control operation is detected, sending a movement instruction to the controllable moving mechanism, the movement instruction being used to instruct the controllable moving mechanism to move according to the movement control operation.
  • after obtaining the transformation parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinate of the physical marker and the second coordinate of the corresponding virtual marker, the above data processing method further includes: acquiring a third coordinate of a target marker in the first spatial coordinate system; converting the third coordinate into a fourth coordinate in the second spatial coordinate system using the transformation parameters; acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the fourth coordinate to obtain left-eye display content and right-eye display content of the virtual object; and displaying the left-eye display content and the right-eye display content, the left-eye display content being projected onto a first optical lens and reflected by it to the human eye, and the right-eye display content being projected onto a second optical lens and reflected by it to the human eye.
  • the present application further provides an optical distortion correction method, applied to a terminal device, including: obtaining coordinate data of an undistorted virtual image; obtaining a pre-distorted image to be displayed according to an optical distortion model and the coordinate data of the undistorted virtual image, the optical distortion model being used to fit the optical distortion produced by an optical lens; and displaying the pre-distorted image, which is projected onto the optical lens and reflected by it to the human eye to form the undistorted virtual image.
  • before obtaining the pre-distorted image, the above optical distortion correction method further includes: reading optical manufacturer data of the optical lens, the optical manufacturer data including coordinate data of an experimental image and coordinate data of the distorted virtual image corresponding to the experimental image; performing a polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model; and storing the optical distortion model.
  • performing a polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model includes: calculating a first distortion parameter and a second distortion parameter of the distortion model from the two sets of coordinate data, the first distortion parameter being the coefficient fitting the distortion of the optical lens in a first direction, and the second distortion parameter being the coefficient fitting the distortion of the optical lens in a second direction; and constructing the optical distortion model from the first and second distortion parameters.
  • after reading the optical manufacturer data, the optical distortion correction method further includes: adjusting the coordinate data of the distorted virtual image according to display parameters, the display parameters including at least one of a zoom ratio of the optical lens, a screen size, a pixel size, and an optical center position.
  • the method for correcting the optical distortion further includes: verifying the optical distortion model.
  • verifying the optical distortion model includes: obtaining a verification image to be displayed using the coordinate data of an original image reserved for verifying the optical distortion model together with the optical distortion model, and displaying the verification image; capturing the displayed verification image with an image acquisition device at the viewing position to obtain an image containing the verification image; determining whether the parameters of the image containing the verification image meet a preset condition; and storing the optical distortion model if the preset condition is met.
  • obtaining the pre-distorted image to be displayed according to the optical distortion model and the coordinate data of the undistorted virtual image includes: performing an inverse calculation on the coordinate data of the undistorted virtual image using the optical distortion model to obtain the corresponding screen coordinate data; and generating the pre-distorted image to be displayed from the screen coordinate data.
  • the terminal device 100 in this application may include one or more of the following components: a processor, a memory, a camera, and one or more application programs, where the application programs may be stored in the memory and configured to be executed by the one or more processors to perform the methods described in the foregoing method embodiments.
  • a processor may include one or more processing cores.
  • the processor uses various interfaces and lines to connect the parts of the entire terminal device, and performs the terminal device's functions and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory and by invoking data stored in the memory.
  • the processor may be implemented in at least one of the hardware forms of a digital signal processor (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA).
  • the processor may integrate one or a combination of a central processing unit (CPU), a graphics processor (GPU), and a modem.
  • the CPU mainly handles the operating system, user interface, and application programs;
  • the GPU is responsible for rendering and drawing the display content;
  • the modem handles wireless communication. It can be understood that the modem may not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
  • the memory may include a random access memory (RAM) or a read-only memory (ROM). Memory can be used to store instructions, programs, code, code sets, or instruction sets.
  • the memory may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), and instructions for implementing the method embodiments described below.
  • the storage data area can also store data created by the terminal during use.
  • the camera is used to capture images of markers and may be an infrared camera or a color camera; the specific type is not limited.
  • An embodiment of the present application further provides a computer-readable storage medium.
  • the computer-readable medium stores program code, and the program code can be called by a processor to execute the method described in the foregoing method embodiment.
  • the computer-readable storage medium may be an electronic memory such as a flash memory, EEPROM, EPROM, hard disk, or ROM.
  • the computer-readable storage medium includes a non-volatile computer-readable medium.
  • the computer-readable storage medium has storage space for program code that performs any of the method steps of the above methods; the program code can be read from or written into one or more computer program products, and may be compressed in a suitable form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present application discloses a three-dimensional display method, including: acquiring target space coordinates of a target marker in real space; converting the target space coordinates into rendering coordinates in virtual space; acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object; and displaying the left-eye display content and the right-eye display content, the left-eye display content being projected onto a first optical lens and the right-eye display content onto a second optical lens, the first optical lens and the second optical lens respectively reflecting the left-eye display content and the right-eye display content to the human eyes. The method enables a virtual object to be displayed in alignment with a target marker and displayed stereoscopically.

Description

Three-dimensional display method, terminal device, and storage medium — Technical field
This application relates to the field of display technology, and in particular to a three-dimensional display method, a terminal device, and a storage medium.
Background
In recent years, with advances in technology, augmented reality (AR) has gradually become a research focus at home and abroad. Augmented reality enhances a user's perception of the real world through information provided by a computer system: it superimposes computer-generated content such as virtual objects, scenes, or system prompts onto the real scene to enhance or modify the perception of the real-world environment or of data representing it. When a device displays virtual content, how to achieve a stereoscopic display in which the virtual content fits the real scene is a problem to be solved.
Summary
Embodiments of the present application provide a three-dimensional display method, a terminal device, and a storage medium capable of displaying stereoscopic virtual content in alignment with a real object.
In a first aspect, a three-dimensional display method according to an embodiment of the present application, applied to a terminal device, includes: acquiring target space coordinates of a target marker in real space; converting the target space coordinates into rendering coordinates in virtual space; acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object; and displaying the left-eye display content and the right-eye display content, the left-eye display content being projected onto a first optical lens and the right-eye display content onto a second optical lens, the first optical lens and the second optical lens respectively reflecting the left-eye and right-eye display content to the human eyes.
In a second aspect, a data processing method according to an embodiment of the present application, applied to a terminal device, includes: displaying a virtual marker; when a user's alignment determination operation is detected, acquiring a first coordinate of a physical marker in a first spatial coordinate system, the alignment determination operation indicating that the virtual marker is aligned with the physical marker, and the virtual marker corresponding to the physical marker; acquiring a second coordinate of the virtual marker in a second spatial coordinate system; and obtaining conversion parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinate of the physical marker and the second coordinate of the virtual marker corresponding to the physical marker.
In a third aspect, an optical distortion correction method according to an embodiment of the present application, applied to a terminal device, includes: acquiring coordinate data of an undistorted virtual image; obtaining a pre-distorted image to be displayed according to an optical distortion model and the coordinate data of the undistorted virtual image, the optical distortion model being used to fit the optical distortion produced by an optical lens; and displaying the pre-distorted image, the pre-distorted image being projected onto the optical lens and reflected by it to the human eye to form the undistorted virtual image.
In a fourth aspect, a terminal device according to an embodiment of the present application includes a memory and a processor, the memory being coupled to the processor; the memory stores a computer program that, when executed by the processor, causes the processor to perform the method described above.
In a fifth aspect, a computer-readable storage medium according to an embodiment of the present application stores program code that can be invoked by a processor to perform the method described above.
Details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the application will become apparent from the description, the drawings, and the claims.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an augmented reality system according to an embodiment of the present application.
FIG. 2 is a schematic diagram of a scene provided by an embodiment of the present application.
FIG. 3 is a schematic diagram of another scene provided by an embodiment of the present application.
FIG. 4 is a schematic diagram of yet another scene provided by an embodiment of the present application.
FIG. 5 is a flowchart of a three-dimensional display method according to an embodiment of the present application.
FIG. 6 is a schematic diagram of an effect provided by an embodiment of the present application.
FIG. 7 is a flowchart of a three-dimensional display method according to another embodiment of the present application.
FIG. 8 is a schematic diagram of a usage scene provided by an embodiment of the present application.
FIG. 9 is a schematic diagram of another usage scene provided by an embodiment of the present application.
FIG. 10 is a flowchart of step S240 of the display method according to an embodiment of the present application.
Detailed description
To help those skilled in the art better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings.
Referring to FIG. 1, an augmented reality system 10 according to an embodiment of the present application includes a terminal device 100 and a marker 200. The terminal device 100 is a head-mounted display device, for example an integrated head-mounted display device or a head-mounted display device connected to an external electronic device. The terminal device 100 may also be a smart terminal such as a mobile phone connected to an external or plug-in head-mounted display device, i.e. the terminal device 100 serves as the processing and storage device of the head-mounted display device, is plugged into or connected to the external head-mounted display device, and displays virtual objects in it. A camera is provided on the terminal device 100; when the marker 200 is within the camera's field of view, the terminal device 100 can capture an image containing the marker 200, recognize the marker 200 in the image, and obtain spatial position information such as the position and orientation of the marker 200 relative to the terminal device 100, as well as the identity information of the marker 200.
In some embodiments, the marker 200 has a pattern with a topological structure, the topological structure referring to the connectivity between sub-markers, feature points, and the like within the marker 200; the topological structure represents the identity information of the marker 200. The marker 200 may also be another pattern, which is not limited here, as long as it can be recognized and tracked by the terminal device 100.
The above head-mounted display device may include a first optical lens and a second optical lens, which respectively project the light emitted by an image source to the observation positions of the left eye and the right eye, thereby presenting the display content corresponding to each eye in the user's left and right eyes respectively to achieve stereoscopic display. The image source may be the display screen of the head-mounted display device, or the display screen of a smart terminal connected to it, and is used to display images.
Referring to FIG. 2, when a displayed virtual marker is aligned with a physical marker 306, the coordinates of the physical marker 306 recognized by a tracking camera 301 in a real-space coordinate system, together with the coordinates of the virtual marker in a virtual-space coordinate system, can be used to obtain the conversion parameters between the real-space coordinate system and the virtual-space coordinate system.
Because of the optical lenses, the displayed image becomes distorted when it forms a virtual image; the displayed image can therefore be pre-distorted before display to achieve distortion correction. As shown in FIG. 3, a normal undistorted real image 311 forms a distorted virtual image 312 after being displayed through the optical lens. An undistorted virtual image 314 can first be obtained and pre-distorted to produce a pre-distorted image 313 for display; after the pre-distorted image 313 is displayed and undergoes the optical distortion of the lens, it forms the undistorted virtual image 314.
Referring to FIG. 4, when performing a stereoscopic display in which virtual content is aligned with physical content, the tracking camera 301 can recognize a tracked target provided with a marker to obtain the target's coordinates in the real-space coordinate system with the tracking camera 301 as its origin. Coordinate conversion is then performed: using the conversion parameters between the real-space coordinate system and the virtual-space coordinate system described above, the tracked target's coordinates in the real-space coordinate system are converted into rendering coordinates in the virtual-space coordinate system with a virtual camera 304 as its origin. A left-eye display image and a right-eye display image are generated from the rendering coordinates; the left-eye display image is pre-distorted for the left eye to obtain a left-eye pre-distorted image, and the right-eye display image is pre-distorted for the right eye to obtain a right-eye pre-distorted image. After the left-eye and right-eye pre-distorted images are displayed on a display screen 303 and projected to the human eyes through optical lenses 302, they form an undistorted left-eye virtual image and an undistorted right-eye virtual image, which are fused by the user's brain into a stereoscopic image, achieving aligned, stereoscopic, and distortion-free display of virtual content with physical content.
Referring to FIG. 5, a three-dimensional display method according to an embodiment of the present application, applied to a terminal device, includes steps S110 to S140.
Step S110: acquire the target space coordinates of a target marker in real space.
To display a virtual object in alignment with a physical target marker, the target space coordinates of the target marker in real space can be acquired; these coordinates can represent the positional relationship between the target marker and the tracking camera on the head-mounted display device, or between the target marker and the terminal device.
After capturing an image containing the target marker, the terminal device can recognize that image to obtain a recognition result for the target marker, and from it the target space coordinates of the target marker in the first spatial coordinate system of real space whose origin is the terminal device's tracking camera. The tracking camera is the camera the terminal device uses to track physical objects. The recognition result may include the spatial position of the target marker relative to the terminal device and the identity information of the target marker; the spatial position may include the position and posture information of the target marker relative to the terminal device, the posture information being the orientation and rotation angle of the target marker relative to the terminal device.
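By way of illustration only (the patent does not prescribe a particular recognition algorithm), the following Python sketch shows one common way such target space coordinates could be recovered from the detected 2D corners of a marker using OpenCV's solvePnP; the marker corner layout, variable names, and all numeric values are assumptions, not part of the original disclosure.

import cv2
import numpy as np

# Hypothetical 3D corner positions of the marker in its own frame (metres).
MARKER_CORNERS = np.array([[-0.05, -0.05, 0.0],
                           [ 0.05, -0.05, 0.0],
                           [ 0.05,  0.05, 0.0],
                           [-0.05,  0.05, 0.0]], dtype=np.float64)

def target_space_coordinates(image_corners, camera_matrix, dist_coeffs):
    # Solve the marker pose from its detected 2D corners; tvec is the marker
    # origin expressed in the tracking-camera frame, i.e. the target space
    # coordinates in the first spatial coordinate system.
    ok, rvec, tvec = cv2.solvePnP(MARKER_CORNERS, image_corners,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("marker pose estimation failed")
    return rvec, tvec.reshape(3)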
Step S120: convert the target space coordinates into rendering coordinates in virtual space.
After the target space coordinates of the target marker in the first spatial coordinate system of real space have been acquired, they can be converted into rendering coordinates in virtual space so that the display content corresponding to the virtual object can be generated.
In one embodiment, converting the target space coordinates into rendering coordinates in virtual space may include: reading stored conversion parameters between the first spatial coordinate system and a second spatial coordinate system, the second spatial coordinate system being the spatial coordinate system in virtual space with the virtual camera as its origin; and converting the target space coordinates into rendering coordinates in virtual space according to the conversion parameters.
In one embodiment, the conversion parameters between the first and second spatial coordinate systems can be used to convert the target space coordinates and obtain the rendering coordinates. The conversion parameters align the first spatial coordinate system with the second and realize the conversion between them: they are the parameters of the conversion formula between the two coordinate systems, and substituting the target space coordinates and the conversion parameters into the conversion formula yields the rendering coordinates in the second spatial coordinate system of virtual space. The virtual camera is a camera in a 3D software system that simulates the perspective of the human eye: following changes in the virtual camera's motion (i.e. head motion), it tracks the motion of virtual objects in virtual space, which are rendered and projected onto the optical lenses to achieve stereoscopic display.
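As a minimal sketch of this substitution, assuming the conversion formula takes the common rigid-body form p_virtual = R · p_real + t (the patent leaves the exact formula to the chosen attitude transformation algorithm):

import numpy as np

def to_rendering_coordinates(p_real, R, t):
    # R (3x3 rotation) and t (length-3 translation) are the stored conversion
    # parameters; p_real is the target space coordinate in the first
    # (tracking-camera) coordinate system.
    return R @ np.asarray(p_real, dtype=np.float64) + t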
Step S130: acquire the data of the virtual object to be displayed, and render the virtual object according to the data of the virtual object and the rendering coordinates to obtain the left-eye display content and right-eye display content of the virtual object.
After the target marker's target space coordinates in the first spatial coordinate system of real space have been converted into rendering coordinates in the second spatial coordinate system of virtual space, the data of the virtual object to be displayed can be acquired and the virtual object rendered according to that data and the rendering coordinates. The data corresponding to the virtual object to be displayed may include the model data of the virtual object, i.e. the data used to render it, for example the colors, model vertex coordinates, and model contour data used to build the model corresponding to the virtual object.
In one embodiment, the virtual camera includes a left virtual camera and a right virtual camera, the left virtual camera simulating the human left eye and the right virtual camera the human right eye. Rendering the virtual object according to its data and the rendering coordinates to obtain the left-eye and right-eye display content includes: constructing and rendering the virtual object according to its data; and computing the pixel coordinates of the virtual object in the left virtual camera and the right virtual camera respectively according to the rendering coordinates, to obtain the left-eye display content and the right-eye display content.
From the data used to render the virtual object, the virtual object can be constructed and rendered. From the rendering coordinates and the constructed and rendered virtual object, the spatial coordinates of each point of the virtual object in the second spatial coordinate system of virtual space can be obtained. Substituting those spatial coordinates into the conversion formula between the pixel coordinate system of the left virtual camera and the second spatial coordinate system yields the pixel coordinates of each point of the virtual object in the left virtual camera; from the pixel values of the points and their pixel coordinates in the left virtual camera, the left-eye display content is obtained. Likewise, substituting the spatial coordinates into the conversion formula between the pixel coordinate system of the right virtual camera and the second spatial coordinate system yields the pixel coordinates of each point in the right virtual camera, and from the pixel values and those pixel coordinates the right-eye display content is obtained.
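A simplified sketch of this per-eye projection follows, assuming pinhole left/right virtual cameras that share an intrinsic matrix K and are offset by half an interpupillary distance along the x axis; the rig geometry, intrinsics, and the default half_ipd value are illustrative assumptions rather than details fixed by the text.

import numpy as np

def stereo_pixel_coordinates(p_virtual, K, half_ipd=0.032):
    def project(p_cam):
        uvw = K @ p_cam
        return uvw[:2] / uvw[2]                       # perspective divide
    p = np.asarray(p_virtual, dtype=np.float64)
    # Express the point in each eye camera's frame, then project it.
    left = project(p + np.array([half_ipd, 0.0, 0.0]))    # camera at -x offset
    right = project(p + np.array([-half_ipd, 0.0, 0.0]))  # camera at +x offset
    return left, right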
After the virtual object has been rendered, left-eye display content and right-eye display content with parallax are obtained for the virtual object, producing a stereoscopic effect when displayed.
Step S140: display the left-eye display content and the right-eye display content; the left-eye display content is projected onto the first optical lens and the right-eye display content onto the second optical lens, the first optical lens and the second optical lens respectively reflecting the left-eye and right-eye display content to the human eyes.
After the left-eye and right-eye display content of the virtual object has been obtained, it can be displayed. Specifically, the left-eye display content can be projected onto the first optical lens of the head-mounted display device and, after being reflected by the first optical lens, enters the user's left eye; the right-eye display content can be projected onto the second optical lens and, after being reflected by the second optical lens, enters the user's right eye.
After the left-eye and right-eye display content has been displayed, with the left-eye display content projected to the user's left eye and the right-eye display content to the user's right eye, the user sees left-eye and right-eye display content with parallax, which the brain fuses into stereoscopic display content, so that the virtual object is displayed in alignment with the target marker and displayed stereoscopically. For example, as shown in FIG. 6, after the left-eye and right-eye display content is displayed, a stereoscopic virtual object 900 can be seen displayed in alignment with a target marker 700.
Referring to FIG. 7, a three-dimensional display method according to a further embodiment of the present application, applied to a terminal device, includes steps S210 to S290.
Step S210: display a virtual marker.
In this embodiment, to display virtual content in alignment with physical content, the conversion parameters between the spatial coordinate systems must be obtained. When obtaining them, the virtual marker can be displayed; in addition, a physical marker can be placed in the real scene within the field of view of the terminal device's camera, so that the virtual marker can subsequently be displayed in alignment with the physical marker.
The virtual marker can be pre-stored in the terminal device and is the same as the physical marker, i.e. the virtual marker's pattern has the same shape and size as the physical marker.
When the virtual marker is displayed, its left-eye content is projected onto the left-eye optical lens and reflected to the user's left eye, and its right-eye content is projected onto the right-eye optical lens and reflected to the user's right eye, achieving stereoscopic display of the virtual marker; when the user looks at the displayed virtual marker, it appears superimposed on the real scene in which the physical marker lies.
In this embodiment, the terminal device is a head-mounted display device or is mounted on one. Before displaying the virtual marker, the parameters for the optical distortion correction of the head-mounted display device can be determined, to ensure that the virtual marker is displayed normally, i.e. without distortion.
When verifying the optical distortion correction parameters, a preset image, for example a checkerboard image, can be displayed for the user to confirm the parameters. When the user is sure the displayed preset image is undistorted, the user can perform a confirmation operation for the optical distortion correction parameters; upon detecting it, the terminal device can determine that the current optical distortion correction parameters are accurate. In this embodiment, after the virtual marker is displayed, if the user observes that the displayed virtual marker and the physical marker are not aligned, the user can move the physical marker until they are observed to be aligned, and then perform the alignment determination operation on the terminal device.
After the virtual marker is displayed, the user can see it superimposed on the real scene in which the physical marker lies. At this point the virtual marker and the physical marker may be misaligned in virtual space, as in FIG. 8, where physical marker 500 and virtual marker 600 are not aligned; or they may be aligned, as in FIG. 9, where physical marker 500 and virtual marker 600 are aligned. Alignment means that the positions of the virtual marker and the physical marker in virtual space are identical, which can also be understood as the virtual marker and the physical marker overlapping in the user's visual perception.
Further, the marker's movement can be controlled to align the virtual marker with the physical marker. In this embodiment, the physical marker is placed on a controllable moving mechanism connected to the terminal device.
In this embodiment, the display method may further include: when a user's movement control operation is detected, sending a movement instruction to the controllable moving mechanism, the movement instruction instructing the mechanism to move according to the movement control operation.
The user can perform a movement control operation on the terminal device; this operation controls the movement of the controllable moving mechanism and thereby moves the marker. When a user's movement control operation is detected, a movement instruction can be sent to the mechanism so that it moves according to the operation, eventually bringing the physical marker into alignment with the virtual marker. The movement control operation can be performed through the terminal device's buttons or touch screen, or through a controller connected to the terminal device; it is not specifically limited.
Step S220: when a user's alignment determination operation is detected, acquire the first coordinate of the physical marker in the first spatial coordinate system, the alignment determination operation indicating that the virtual marker is aligned with the physical marker, the virtual marker corresponding to the physical marker.
When the user observes that the virtual marker and the physical marker are not aligned, the user can move the physical marker until they are observed to be aligned and then perform the alignment determination operation on the terminal device.
When the user observes that the virtual marker and the physical marker are aligned, the user can perform the alignment determination operation on the terminal device; this operation indicates that the virtual marker is aligned with the physical marker, at which point the virtual marker is displayed in alignment with the physical marker.
The alignment determination operation can be performed through the terminal device's buttons or touch screen, or through a controller connected to the terminal device; it is not specifically limited.
The terminal device can detect the user's alignment determination operation and determine that the virtual marker and the physical marker are aligned at that moment, so as to determine the conversion parameters between the first and second spatial coordinate systems from the current coordinates of the physical marker in the first spatial coordinate system of real space and the current coordinates of the displayed virtual object in the second spatial coordinate system of virtual space.
In one embodiment, the first spatial coordinate system is the spatial coordinate system in real space with the tracking camera as its origin, and the second spatial coordinate system is the spatial coordinate system in virtual space with the virtual camera as its origin. The tracking camera is the terminal device's camera; the virtual camera is a camera in a 3D software system that simulates the human eye's perspective, following the virtual camera's motion (i.e. head motion) to track the motion of virtual objects in virtual space, which are rendered and projected onto the optical lenses for stereoscopic display.
In one embodiment, when the user's alignment determination operation is detected, the first coordinate of the physical marker in the first spatial coordinate system can be acquired.
After capturing an image containing the physical marker, the terminal device can recognize it to obtain a recognition result for the physical marker and thus the physical marker's first coordinate in the first spatial coordinate system. The recognition result may include the spatial position of the physical marker relative to the terminal device and the identity information of the physical marker; the spatial position may include the position and posture information of the physical marker relative to the terminal device, the posture information being the orientation and rotation angle of the physical marker relative to the terminal device.
In one embodiment, when obtaining the conversion relationship between the first and second spatial coordinate systems from the physical marker's first coordinate in the first spatial coordinate system and the virtual marker's second coordinate in the second spatial coordinate system, the conversion relationship must be computed from the first coordinates of multiple physical markers and the second coordinates of multiple virtual markers, the multiple physical markers corresponding one-to-one with the multiple virtual markers, i.e. each of the physical markers is aligned with one of the virtual markers.
Therefore, acquiring the first coordinate of the physical marker in the first spatial coordinate system when the user's alignment determination operation is detected may mean acquiring the first coordinates of all physical markers in the first spatial coordinate system when an alignment determination operation indicating that the multiple physical markers are aligned with the multiple virtual markers is detected.
In one embodiment, before using the terminal device's camera to capture an image containing the physical marker in order to determine its first coordinate in the first spatial coordinate system, the camera can also be calibrated, to ensure that the physical marker's coordinates in the first spatial coordinate system are acquired accurately.
Step S230: acquire the second coordinate of the virtual marker in the second spatial coordinate system.
In one embodiment, the terminal device also needs to acquire the virtual marker's second coordinate in the second spatial coordinate system; it can be obtained by the virtual camera tracking the virtual marker, yielding the second coordinates in the second spatial coordinate system of the multiple virtual markers, which correspond one-to-one with the multiple markers above.
In one embodiment, after obtaining the first coordinates of the multiple physical markers in the first spatial coordinate system and the second coordinates of the multiple virtual markers in the second spatial coordinate system, the first coordinate of each physical marker and the second coordinate of its corresponding virtual marker can be stored as a coordinate pair, according to the one-to-one correspondence, for the subsequent computation of the conversion parameters between the two coordinate systems. For example, if physical marker A corresponds to virtual marker a and physical marker B to virtual marker b, the first coordinate of A and the second coordinate of a are stored as one coordinate pair, and the first coordinate of B and the second coordinate of b as another coordinate pair.
Step S240: obtain the conversion parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinate of the physical marker and the second coordinate of the virtual marker corresponding to the physical marker.
After the first coordinate of the physical marker and the second coordinate of the corresponding virtual marker have been obtained, the conversion parameters between the first and second spatial coordinate systems can be computed; the conversion parameters may include rotation parameters and translation parameters.
In some embodiments, referring to FIG. 10, step S240 may include:
Step S241: establish a conversion formula between the first and second spatial coordinate systems according to an attitude transformation algorithm, the conversion formula including a rotation parameter and a translation parameter.
In one embodiment, when computing the conversion parameters from the physical marker's first coordinate and the virtual marker's second coordinate, the conversion formula between the first and second spatial coordinate systems can be obtained.
Specifically, the conversion formula can be established according to an attitude transformation algorithm, which may be a rigid transform estimation algorithm, a PnP algorithm, a DCM algorithm, or a POSIT algorithm; the specific attitude transformation algorithm is not limited.
The conversion formula expresses the conversion relationship between coordinates in the first spatial coordinate system and coordinates in the second, and contains the conversion parameters. It may express the coordinates in the second spatial coordinate system in terms of the coordinates in the first spatial coordinate system and the conversion parameters, or the coordinates in the first spatial coordinate system in terms of the coordinates in the second spatial coordinate system and the conversion parameters.
Further, the conversion formula may express the matrix formed by the coordinates in the second spatial coordinate system as the product of the matrix formed by the coordinates in the first spatial coordinate system and the matrix formed by the conversion parameters, the latter containing the rotation parameter and the translation parameter.
Step S242: obtain a number of coordinate pairs greater than a preset value, and substitute them into the conversion formula to obtain the rotation parameter and translation parameter between the first and second spatial coordinate systems.
In one embodiment, after obtaining the conversion formula between the first and second spatial coordinate systems, the conversion parameters in it can be solved using the first coordinates of the physical markers and the second coordinates of the corresponding virtual markers.
Specifically, the preset number of stored coordinate pairs of first coordinates and corresponding second coordinates can be read and substituted into the conversion formula to solve for the conversion parameters, yielding the rotation and translation parameters. The preset value depends on the conversion formula established by the particular attitude transformation algorithm; for example, when the formula is established by a rigid transform estimation algorithm, the preset value can be set to 4.
In each coordinate pair, a first coordinate in the first spatial coordinate system corresponds to a second coordinate in the second spatial coordinate system. Substituting a pair into the formula means substituting its first coordinate into the matrix formed by the first-system coordinates and its second coordinate into the matrix formed by the second-system coordinates. After the preset number of coordinate pairs have been substituted into the conversion formula, the matrix formed by the conversion parameters can be solved, giving the rotation and translation parameters in the matrix, i.e. the rotation and translation parameters between the first and second spatial coordinate systems.
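For concreteness, the following sketch solves the rotation and translation parameters from N >= 4 stored coordinate pairs with a least-squares rigid-transform (Kabsch/SVD) estimate, one common realization of the rigid transform estimation mentioned above; it is an illustrative sketch, not the patent's specific solver.

import numpy as np

def solve_conversion_parameters(first_coords, second_coords):
    # first_coords / second_coords: N x 3 arrays of paired coordinates in the
    # first (tracking-camera) and second (virtual-camera) systems.
    P = np.asarray(first_coords, dtype=np.float64)
    Q = np.asarray(second_coords, dtype=np.float64)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # rotation parameter
    t = cq - R @ cp                            # translation parameter
    return R, t                                # so that q ≈ R @ p + t per pair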
In some embodiments, after the conversion parameters between the first and second spatial coordinate systems have been obtained, the display method may further include: fine-tuning the first camera parameters of the tracking camera and/or the second camera parameters of the virtual camera.
Because of specular refraction at the optical lenses and errors in the attitude transformation algorithm, using the conversion parameters to display virtual content superimposed on content in the real scene may not achieve perfect alignment between virtual and real content. The tracking camera's first camera parameters and/or the virtual camera's second camera parameters can therefore be fine-tuned so that the virtual content aligns fully with the real content when the conversion parameters are used for display; specifically, the tilt angle, depth, and so on of the tracking camera and/or the virtual camera can be adjusted.
Step S250: acquire the target space coordinates of the target marker in the first spatial coordinate system.
After the conversion parameters between the first spatial coordinate system of real space and the second spatial coordinate system of virtual space have been obtained, they can be used to display virtual content in alignment with real content.
In one embodiment, the target space coordinates of the target marker in the first spatial coordinate system can be acquired, i.e. the target marker's coordinates in the real-space coordinate system with the tracking camera as its origin. The target marker is used for displaying the virtual object, i.e. displaying the virtual object in alignment with the target marker. The target marker is similar to the physical marker above: the terminal device can capture an image containing the target marker and recognize it to obtain the target marker's target space coordinates in the first spatial coordinate system.
Step S260: convert the target space coordinates into rendering coordinates in the second spatial coordinate system using the conversion parameters.
After the target marker's target space coordinates in the first spatial coordinate system have been acquired, the obtained conversion parameters can be used to convert them into coordinates in the second spatial coordinate system, i.e. the virtual-space coordinate system with the virtual camera as its origin, so that the display content of the virtual object can be generated from the target space coordinates.
Specifically, the target marker's target space coordinates in the first spatial coordinate system and the conversion parameters can be substituted into the conversion formula between the first and second spatial coordinate systems to compute the rendering coordinates in the second spatial coordinate system.
Step S270: acquire the data of the virtual object to be displayed, and render the virtual object according to the data of the virtual object and the rendering coordinates to obtain the left-eye display content and right-eye display content of the virtual object.
After the target marker's target space coordinates in the first spatial coordinate system have been converted into rendering coordinates in the second spatial coordinate system, the data of the virtual object to be displayed can be acquired and the virtual object rendered according to that data and the rendering coordinates. The data corresponding to the virtual object to be displayed may include the model data of the virtual object, i.e. the data used to render it; for example, the model data may include the colors, model vertex coordinates, and model contour data used to build the model corresponding to the virtual object.
After the virtual object has been rendered, left-eye and right-eye display content with parallax is obtained for the virtual object, achieving a stereoscopic effect when displayed.
Step S280: obtain, from an optical distortion model and the left-eye and right-eye display content, a left-eye pre-distorted image corresponding to the left-eye display content and a right-eye pre-distorted image corresponding to the right-eye display content, the optical distortion model being used to fit the optical distortion produced by the optical lenses.
When the head-mounted display device displays the content, its optical system distorts the displayed image. If the left-eye and right-eye display content were displayed directly, the user would see a distorted virtual image of the virtual object; for example, in FIG. 3 the real image 311 forms the distorted virtual image 312 after being displayed.
Therefore, when displaying the left-eye and right-eye display content, it can first be pre-distorted so that the user sees an undistorted virtual image of the virtual object.
In one embodiment, inverse distortion processing can be applied to the left-eye display content according to a stored optical distortion model to obtain the corresponding left-eye pre-distorted image, and to the right-eye display content to obtain the corresponding right-eye pre-distorted image. The optical distortion model fits the optical distortion of the optical lenses of the head-mounted display device, and may be:
X = A*I1*I2,  Y = B*I3*I4    (1)
where X is the abscissa of the real image and Y its ordinate, A is the first distortion parameter and B the second distortion parameter; I1 is a matrix fitting the lateral radial distortion or the lateral barrel distortion of the optical lens, I2 a matrix fitting its lateral tangential distortion, I3 a matrix fitting its longitudinal radial distortion or longitudinal barrel distortion, and I4 a matrix fitting its longitudinal tangential distortion; I1 contains the abscissa of the virtual image, I2 the abscissa and ordinate of the virtual image, I3 the ordinate of the virtual image, and I4 the abscissa and ordinate of the virtual image.
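Since the exact monomials inside I1 through I4 are not spelled out beyond the description above, the sketch below evaluates an illustrative bivariate polynomial in the same spirit: A and B are the per-direction coefficient vectors and (x, y) is a virtual-image coordinate. The monomial basis is an assumption made for illustration, not the patent's exact model.

import numpy as np

def monomials(x, y):
    # Illustrative radial- and tangential-style terms built from the
    # virtual-image coordinates (the patent fixes their roles, not this basis).
    return np.array([1.0, x, y, x * y, x**2, y**2,
                     x**2 * y, x * y**2, x**3, y**3])

def distortion_model(A, B, x, y):
    # Map a virtual-image coordinate to the real-image (screen) coordinate,
    # with X driven by the first parameter vector and Y by the second.
    m = monomials(x, y)
    return float(A @ m), float(B @ m)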
In one embodiment, the correspondence between optical distortion models and the optical parameters of optical lenses can also be stored, i.e. the optical distortion models corresponding to different optical parameters are stored; when reading an optical distortion model to pre-distort an image to be displayed, the model corresponding to the lens's optical parameters can be read.
When pre-distorting the virtual object's left-eye and right-eye display content, the stored optical distortion model can be read. The coordinate data of the left-eye display content is taken as the coordinate data of the virtual image and substituted into the optical distortion model to compute the screen coordinate data corresponding to the left-eye display content; from the screen coordinate data and the pixels of the left-eye display content, the left-eye pre-distorted image to be displayed is generated, corresponding to the left-eye display content.
Likewise, the coordinate data of the right-eye display content is taken as the coordinate data of the virtual image and substituted into the optical distortion model to compute the corresponding screen coordinate data, from which, together with the pixels of the right-eye display content, the right-eye pre-distorted image to be displayed is generated, corresponding to the right-eye display content.
In one embodiment, when the screen coordinate data obtained from the optical distortion model contains non-integer coordinates, they must be converted to integer coordinates so that the pre-distorted image can be generated. Pixel interpolation can therefore be used to convert the non-integer coordinates in the screen data into integer coordinates; specifically, the pixel coordinate closest to the non-integer coordinate can be obtained and the non-integer coordinate replaced with it.
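Putting the two steps together, here is a minimal sketch of generating a pre-distorted image, reusing the distortion_model function from the earlier sketch and snapping non-integer screen coordinates to the nearest pixel as described above; the per-pixel loop, bounds handling, and array layout are illustrative simplifications.

import numpy as np

def predistort(display_content, A, B):
    # For each pixel of the desired undistorted virtual image, compute the
    # screen coordinate the optical model assigns to it and copy the pixel
    # there; non-integer screen coordinates are rounded to the nearest pixel.
    h, w = display_content.shape[:2]
    out = np.zeros_like(display_content)
    for y in range(h):
        for x in range(w):
            X, Y = distortion_model(A, B, x, y)
            Xi, Yi = int(round(X)), int(round(Y))   # nearest-pixel replacement
            if 0 <= Xi < w and 0 <= Yi < h:
                out[Yi, Xi] = display_content[y, x]
    return out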
Step S290: display the left-eye pre-distorted image and the right-eye pre-distorted image; the left-eye pre-distorted image is projected onto the first optical lens and reflected by it to the human eye to form the undistorted left-eye display content, and the right-eye pre-distorted image is projected onto the second optical lens and reflected by it to the human eye to form the undistorted right-eye display content.
After the pre-distorted left-eye and right-eye images have been obtained, they can be displayed. Once displayed, the left-eye pre-distorted image is projected onto the first optical lens, reflected by it, and enters the user's left eye; the optical distortion introduced by the first optical lens itself applies forward distortion to the left-eye pre-distorted image that was obtained by inverse distortion processing, and once the inverse and forward distortions cancel, the undistorted left-eye display content is formed. Likewise, the right-eye pre-distorted image is projected onto the second optical lens, reflected by it, and enters the user's right eye, forming the undistorted right-eye display content. The user thus sees undistorted left-eye and right-eye display content with parallax, which the brain fuses into undistorted stereoscopic display content, achieving display of the virtual object aligned with the target marker, without distortion, and in stereo. For example, referring again to FIG. 3, displaying the pre-distorted image 313 yields the undistorted virtual image 314, ensuring that the undistorted virtual image 314 matches the real image 311.
In one embodiment, the optical distortion model can be obtained before it is used to pre-distort the left-eye and right-eye display content. The step of constructing the optical distortion model may therefore include: reading the optical manufacturer data of the optical lens, the data including the coordinate data of an experimental image and the coordinate data of the distorted virtual image corresponding to the experimental image; performing a polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model; and storing the optical distortion model.
The optical manufacturer data is the optical data provided by the lens manufacturer, i.e. data obtained by testing the lens with an experimental image before the lens leaves the factory; it may include the coordinate data of the experimental image and the coordinate data of the distorted virtual image formed after the experimental image is displayed.
For example, the optical manufacturer data is a table pairing each coordinate of the experimental image with the coordinate of its distorted virtual image. [The sample table is reproduced in the source only as images and is not shown here.]
In one embodiment, after the lens's optical manufacturer data has been acquired, the coordinate data of the distorted virtual image can also be adjusted according to display parameters, the display parameters including at least one of the lens's zoom ratio, the screen size, the pixel size, and the optical center position.
It will be understood that the zoom ratio, screen size, pixel size, and optical center position corresponding to the optical lens can be acquired, and the coordinate data of the distorted virtual image corresponding to the experimental image adjusted according to at least one of these parameters, so that each point of the experimental image corresponds to a point of the distorted image with high accuracy.
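One plausible form of this adjustment, assuming the manufacturer supplies virtual-image coordinates in millimetres and the goal is pixel coordinates on the screen (all parameter names and the exact order of operations are illustrative assumptions):

def adjust_virtual_coordinate(x_mm, y_mm, zoom_ratio, pixel_size_mm, optical_center_px):
    # Apply the lens zoom ratio, convert millimetres to pixels via the
    # pixel size, and re-origin the result at the optical centre.
    cx, cy = optical_center_px
    return (cx + x_mm * zoom_ratio / pixel_size_mm,
            cy + y_mm * zoom_ratio / pixel_size_mm)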
In this embodiment, fitting a polynomial to the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model may include: calculating the model's first and second distortion parameters from the coordinate data of the experimental image and the coordinate data of the corresponding distorted virtual image, the first distortion parameter being the coefficient fitting the lens's distortion in the first direction and the second distortion parameter the coefficient fitting its distortion in the second direction; and constructing the optical distortion model from the first and second distortion parameters.
Specifically, following expression (1), the distortion can be fitted with a lateral polynomial and a longitudinal polynomial, giving a first expression in which the abscissa of the real image is the product of the first distortion parameter and the first polynomial, X = A*I1*I2, and a second expression in which the ordinate of the real image is the product of the second distortion parameter and the second polynomial, Y = B*I3*I4, where X is the abscissa of the real image, Y its ordinate, A the first distortion parameter, and B the second distortion parameter; I1 is the matrix fitting the lens's lateral radial distortion or lateral barrel distortion, I2 the matrix fitting its lateral tangential distortion, I3 the matrix fitting its longitudinal radial distortion or longitudinal barrel distortion, and I4 the matrix fitting its longitudinal tangential distortion; I1 contains the virtual image's abscissa, I2 its abscissa and ordinate, I3 its ordinate, and I4 its abscissa and ordinate.
The first distortion parameter is the coefficient fitting the lens's distortion in the first direction, and the second distortion parameter the coefficient fitting its distortion in the second direction. The first direction may be lateral and the second longitudinal; of course, the first direction may also be longitudinal and the second lateral.
The first polynomial is obtained by multiplying the matrix fitting the lens's lateral radial distortion by the matrix fitting its lateral tangential distortion, or by multiplying the matrix fitting its lateral barrel distortion by the matrix fitting its lateral tangential distortion. The matrix fitting the lateral radial distortion, and the matrix fitting the lateral barrel distortion, can be four-row, one-column matrices formed from the virtual image's abscissa; the matrix fitting the lateral tangential distortion is a four-row, one-column matrix formed from the virtual image's abscissa and ordinate.
The second polynomial is obtained by multiplying the matrix fitting the lens's longitudinal radial distortion by the matrix fitting its longitudinal tangential distortion, or by multiplying the matrix fitting its longitudinal barrel distortion by the matrix fitting its longitudinal tangential distortion. The matrix fitting the longitudinal radial distortion, and the matrix fitting the longitudinal barrel distortion, can be four-row, one-column matrices formed from the virtual image's ordinate; the matrix fitting the longitudinal tangential distortion is a four-row, one-column matrix formed from the virtual image's abscissa and ordinate.
Once the first and second expressions have been obtained, the coordinate data of the experimental image and the coordinate data of the distorted virtual image adjusted according to the optical parameters can be substituted into them to solve for the first distortion parameter in the first expression and the second distortion parameter in the second expression, yielding the first and second distortion parameters.
With the first and second distortion parameters obtained, the first distortion parameter can be substituted into the first expression and the second into the second expression, giving the optical distortion model, which comprises the first expression and the second expression.
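With the illustrative monomial basis from the earlier sketch, solving for the two sets of distortion coefficients reduces to two linear least-squares problems over the manufacturer's coordinate pairs; this is a sketch of that idea, not the patent's exact fitting procedure.

import numpy as np

def fit_distortion_parameters(virtual_coords, real_coords):
    # virtual_coords: list of (x, y) distorted-virtual-image coordinates;
    # real_coords: matching (X, Y) experimental-image coordinates.
    M = np.array([monomials(x, y) for x, y in virtual_coords])
    X = np.array([p[0] for p in real_coords])
    Y = np.array([p[1] for p in real_coords])
    A, *_ = np.linalg.lstsq(M, X, rcond=None)   # first-direction coefficients
    B, *_ = np.linalg.lstsq(M, Y, rcond=None)   # second-direction coefficients
    return A, B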
In this embodiment, after the optical distortion model has been obtained, it can also be assessed to ensure its accuracy; the display method may therefore further include verifying the optical distortion model.
Further, verifying the optical distortion model may include: obtaining a verification image to be displayed using the coordinate data of an original image reserved for verifying the model together with the optical distortion model, and displaying the verification image; capturing the displayed verification image with an image acquisition device at the viewing position to obtain an image containing the verification image; determining whether the parameters of the image containing the verification image meet a preset condition; and, if the preset condition is met, storing the optical distortion model.
It will be understood that an original image for verifying the optical distortion model is pre-stored in the terminal device; for example, the original image may be a checkerboard. If the original image is displayed without being pre-distorted by the optical distortion model, the displayed virtual image is a distorted virtual image of the original. If the original image is pre-distorted with the optical distortion model before display and the displayed virtual image shows no distortion, the optical distortion model is accurate.
In one embodiment, the obtained optical distortion model can be used to perform an inverse calculation on the original image's coordinate data, yielding the verification image to be displayed that corresponds to the original image.
Specifically, the original image's coordinate data is taken as the coordinate data of the virtual image, in this case an undistorted virtual image, and substituted into the optical distortion model to obtain the screen coordinate data of the verification image to be displayed; from the screen coordinate data and the pixel values of the original image's pixels, the verification image to be displayed, i.e. the image pre-distorted by the optical distortion model, is generated.
Once the verification image to be displayed is obtained, it can be displayed and then captured by an image acquisition device at the viewing position, yielding an image containing the displayed verification image; for example, an industrial camera can be placed at the eye position in the helmet to capture the displayed verification image.
With the image containing the displayed verification image obtained, it can be determined whether the verification image's aspect ratio in that image matches the preset aspect ratio and whether its linearity matches the preset linearity. When the aspect ratio matches the preset aspect ratio and the linearity matches the preset linearity, the obtained optical distortion model is determined to be correct and can be stored, enabling distortion correction during display.
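A rough sketch of such a check for a checkerboard verification image follows, assuming OpenCV corner detection; the pattern size, expected aspect ratio, and tolerances are illustrative stand-ins for the preset conditions, which the patent does not quantify.

import cv2
import numpy as np

def meets_preset_condition(captured, pattern_size=(9, 6),
                           preset_ratio=16 / 9, ratio_tol=0.02, line_tol=1.5):
    ok, corners = cv2.findChessboardCorners(captured, pattern_size)
    if not ok:
        return False
    pts = corners.reshape(-1, 2)
    # Aspect ratio of the detected grid's bounding box vs. the preset ratio.
    w = pts[:, 0].max() - pts[:, 0].min()
    h = pts[:, 1].max() - pts[:, 1].min()
    if abs(w / h - preset_ratio) > ratio_tol * preset_ratio:
        return False
    # Linearity: corners of each grid row should lie close to a straight line.
    cols, rows = pattern_size
    for r in range(rows):
        row = pts[r * cols:(r + 1) * cols].astype(np.float32)
        vx, vy, x0, y0 = cv2.fitLine(row, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        # Perpendicular distance of each corner from the fitted (unit) line.
        dist = np.abs((row[:, 0] - x0) * vy - (row[:, 1] - y0) * vx)
        if dist.max() > line_tol:
            return False
    return True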
In one embodiment, the optical distortion model can also be verified by displaying the verification image to be displayed and then detecting a model determination operation made by the user; the model determination operation indicates that the verification image's linearity and aspect ratio are normal and that the left and right views match at their boundary, from which the optical distortion model is determined to be correct and is stored.
In one embodiment, the present application provides a data processing method, applied to a terminal device, including: displaying a virtual marker; when a user's alignment determination operation is detected, acquiring the first coordinate of a physical marker in the first spatial coordinate system, the alignment determination operation indicating that the virtual marker is aligned with the physical marker, and the virtual marker corresponding to the physical marker; acquiring the second coordinate of the virtual marker in the second spatial coordinate system; and obtaining the conversion parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinate of the physical marker and the second coordinate of the corresponding virtual marker.
In one embodiment, the first spatial coordinate system is the spatial coordinate system in real space with the tracking camera as its origin, and the second spatial coordinate system is the spatial coordinate system in virtual space with the virtual camera as its origin.
In one embodiment, after obtaining the conversion parameters between the first and second spatial coordinate systems, the data processing method further includes: fine-tuning the first camera parameters of the tracking camera and/or the second camera parameters of the virtual camera.
In one embodiment, after acquiring the virtual marker's second coordinate in the second spatial coordinate system, the data processing method further includes: storing the physical marker's first coordinate and the corresponding virtual marker's second coordinate as a coordinate pair.
In one embodiment, the conversion parameters include rotation parameters and translation parameters, and obtaining the conversion parameters based on the physical marker's first coordinate and the corresponding virtual marker's second coordinate includes: establishing a conversion formula between the first and second spatial coordinate systems according to an attitude transformation algorithm, the conversion formula including a rotation parameter and a translation parameter; and obtaining a number of coordinate pairs greater than a preset value and substituting the obtained coordinate pairs into the conversion formula to obtain the rotation parameter and translation parameter between the two coordinate systems.
In one embodiment, the physical marker is disposed on a controllable moving mechanism connected to the terminal device, and before acquiring the physical marker's first coordinate in the first spatial coordinate system when the user's alignment determination operation is detected, the data processing method further includes: when a user's movement control operation is detected, sending a movement instruction to the controllable moving mechanism, the movement instruction instructing the mechanism to move according to the movement control operation.
In one embodiment, after obtaining the conversion parameters between the first and second spatial coordinate systems based on the physical marker's first coordinate and the corresponding virtual marker's second coordinate, the data processing method further includes: acquiring a third coordinate of a target marker in the first spatial coordinate system; converting the third coordinate into a fourth coordinate in the second spatial coordinate system using the conversion parameters; acquiring the data of a virtual object to be displayed, and rendering the virtual object according to its data and the fourth coordinate to obtain the virtual object's left-eye and right-eye display content; and displaying the left-eye and right-eye display content, the left-eye display content being projected onto the first optical lens and reflected by it to the human eye, and the right-eye display content being projected onto the second optical lens and reflected by it to the human eye.
In one embodiment, the present application further provides an optical distortion correction method, applied to a terminal device, including: acquiring the coordinate data of an undistorted virtual image; obtaining a pre-distorted image to be displayed according to an optical distortion model and the coordinate data of the undistorted virtual image, the optical distortion model being used to fit the optical distortion produced by an optical lens; and displaying the pre-distorted image, which is projected onto the optical lens and reflected by it to the human eye to form the undistorted virtual image.
In one embodiment, before obtaining the pre-distorted image to be displayed according to the optical distortion model and the undistorted virtual image's coordinate data, the optical distortion correction method further includes: reading the lens's optical manufacturer data, which includes the coordinate data of an experimental image and the coordinate data of the corresponding distorted virtual image; performing a polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model; and storing the optical distortion model.
In one embodiment, performing the polynomial fit to obtain the optical distortion model includes: calculating the model's first and second distortion parameters from the coordinate data of the experimental image and the coordinate data of the corresponding distorted virtual image, the first distortion parameter being the coefficient fitting the lens's distortion in the first direction and the second distortion parameter the coefficient fitting its distortion in the second direction; and constructing the optical distortion model from the first and second distortion parameters.
In one embodiment, after reading the lens's optical manufacturer data, the optical distortion correction method further includes: adjusting the distorted virtual image's coordinate data according to display parameters, the display parameters including at least one of the lens's zoom ratio, screen size, pixel size, and optical center position.
In one embodiment, before storing the optical distortion model, the optical distortion correction method further includes: verifying the optical distortion model.
In one embodiment, verifying the optical distortion model includes: obtaining a verification image to be displayed using the coordinate data of an original image for verifying the model together with the optical distortion model, and displaying the verification image; capturing the verification image displayed by the terminal device with an image acquisition device at the viewing position to obtain an image containing the verification image; determining whether the parameters of the image containing the verification image meet a preset condition; and, if the preset condition is met, storing the optical distortion model.
In one embodiment, obtaining the pre-distorted image to be displayed according to the optical distortion model and the undistorted virtual image's coordinate data includes: performing an inverse calculation on the undistorted virtual image's coordinate data with the optical distortion model to obtain the screen coordinate data corresponding to the undistorted virtual image's coordinate data; and generating the pre-distorted image to be displayed from the screen coordinate data.
The terminal device 100 in the present application may include one or more of the following components: a processor, a memory, a camera, and one or more application programs, where the one or more application programs may be stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
The processor may include one or more processing cores. Using various interfaces and lines, the processor connects the parts of the entire terminal device and performs the terminal device's functions and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory and by invoking the data stored in the memory. Optionally, the processor can be implemented in at least one of the hardware forms of a digital signal processor (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA). The processor may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like; the CPU mainly handles the operating system, the user interface, applications, and so on; the GPU is responsible for rendering and drawing the display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory may include random access memory (RAM) or read-only memory (ROM) and can be used to store instructions, programs, code, code sets, or instruction sets. The memory may include a program storage area and a data storage area; the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the method embodiments described below, and the like. The data storage area can also store data created by the terminal during use, and the like.
The camera is used to capture images of markers and may be an infrared camera or a color camera; its specific type is not limited.
An embodiment of the present application further provides a computer-readable storage medium storing program code that can be invoked by a processor to perform the methods described in the foregoing method embodiments.
The computer-readable storage medium may be an electronic memory such as a flash memory, an EEPROM, an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium includes a non-volatile computer-readable medium. The computer-readable storage medium has storage space for program code that performs any of the method steps of the methods above; the program code can be read from or written into one or more computer program products, and may be compressed in a suitable form.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or replace some of their technical features with equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (26)

  1. A three-dimensional display method, applied to a terminal device, the method comprising:
    acquiring target space coordinates of a target marker in real space;
    converting the target space coordinates into rendering coordinates in virtual space;
    acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object; and
    displaying the left-eye display content and the right-eye display content, the left-eye display content being projected onto a first optical lens and the right-eye display content onto a second optical lens, the first optical lens and the second optical lens respectively reflecting the left-eye display content and the right-eye display content to the human eyes.
  2. The method according to claim 1, wherein converting the target space coordinates into rendering coordinates in virtual space comprises:
    reading stored conversion parameters between a first spatial coordinate system and a second spatial coordinate system, the first spatial coordinate system being a spatial coordinate system in real space with a tracking camera as its origin, and the second spatial coordinate system being a spatial coordinate system in virtual space with a virtual camera as its origin; and
    converting the target space coordinates into the rendering coordinates in virtual space according to the conversion parameters.
  3. The method according to claim 2, wherein the virtual camera comprises a left virtual camera and a right virtual camera, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain the left-eye display content and the right-eye display content of the virtual object comprises:
    constructing and rendering the virtual object according to the data of the virtual object; and
    computing, according to the rendering coordinates, the pixel coordinates of the virtual object in the left virtual camera and in the right virtual camera respectively, to obtain the left-eye display content and the right-eye display content.
  4. The method according to claim 1, wherein displaying the left-eye display content and the right-eye display content comprises:
    processing the left-eye display content and the right-eye display content respectively according to an optical distortion model to obtain a left-eye pre-distorted image corresponding to the left-eye display content and a right-eye pre-distorted image corresponding to the right-eye display content, the optical distortion model being used to fit the optical distortion produced by the optical lenses; and
    displaying the left-eye pre-distorted image and the right-eye pre-distorted image, the left-eye pre-distorted image being projected onto the first optical lens and reflected by it to the human eye, and the right-eye pre-distorted image being projected onto the second optical lens and reflected by it to the human eye, to form an undistorted virtual image of the three-dimensional display content.
  5. The method according to claim 4, wherein constructing the optical distortion model comprises:
    reading optical manufacturer data of the optical lens, the optical manufacturer data including coordinate data of an experimental image and coordinate data of the distorted virtual image corresponding to the experimental image; and
    performing a polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model.
  6. The method according to claim 5, wherein performing the polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model comprises:
    calculating a first distortion parameter and a second distortion parameter of the optical distortion model from the coordinate data of the experimental image and the coordinate data of the distorted virtual image corresponding to the experimental image, the first distortion parameter being the coefficient fitting the distortion of the optical lens in a first direction, and the second distortion parameter being the coefficient fitting the distortion of the optical lens in a second direction; and
    constructing the optical distortion model from the first distortion parameter and the second distortion parameter.
  7. The method according to claim 5, wherein after reading the optical manufacturer data of the optical lens, the method further comprises:
    adjusting the coordinate data of the distorted virtual image according to display parameters, the display parameters including at least one of a zoom ratio of the optical lens, a screen size, a pixel size, and an optical center position.
  8. A terminal device, comprising a memory and a processor, the memory being coupled to the processor; the memory stores a computer program that, when executed by the processor, causes the processor to perform the following steps:
    acquiring target space coordinates of a target marker in real space;
    converting the target space coordinates into rendering coordinates in virtual space;
    acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object; and
    displaying the left-eye display content and the right-eye display content, the left-eye display content being projected onto a first optical lens and the right-eye display content onto a second optical lens, the first optical lens and the second optical lens respectively reflecting the left-eye display content and the right-eye display content to the human eyes.
  9. The terminal device according to claim 8, wherein converting the target space coordinates into rendering coordinates in virtual space comprises:
    reading stored conversion parameters between a first spatial coordinate system and a second spatial coordinate system, the first spatial coordinate system being a spatial coordinate system in real space with a tracking camera as its origin, and the second spatial coordinate system being a spatial coordinate system in virtual space with a virtual camera as its origin; and
    converting the target space coordinates into the rendering coordinates in virtual space according to the conversion parameters.
  10. The terminal device according to claim 9, wherein the virtual camera comprises a left virtual camera and a right virtual camera, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain the left-eye display content and the right-eye display content of the virtual object comprises:
    constructing and rendering the virtual object according to the data of the virtual object; and
    computing, according to the rendering coordinates, the pixel coordinates of the virtual object in the left virtual camera and in the right virtual camera respectively, to obtain the left-eye display content and the right-eye display content.
  11. The terminal device according to claim 8, wherein displaying the left-eye display content and the right-eye display content comprises:
    processing the left-eye display content and the right-eye display content respectively according to an optical distortion model to obtain a left-eye pre-distorted image corresponding to the left-eye display content and a right-eye pre-distorted image corresponding to the right-eye display content, the optical distortion model being used to fit the optical distortion produced by the optical lenses; and
    displaying the left-eye pre-distorted image and the right-eye pre-distorted image, the left-eye pre-distorted image being projected onto the first optical lens and reflected by it to the human eye, and the right-eye pre-distorted image being projected onto the second optical lens and reflected by it to the human eye, to form an undistorted virtual image of the three-dimensional display content.
  12. The terminal device according to claim 11, wherein the processor further performs the step of constructing the optical distortion model, comprising:
    reading optical manufacturer data of the optical lens, the optical manufacturer data including coordinate data of an experimental image and coordinate data of the distorted virtual image corresponding to the experimental image; and
    performing a polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model.
  13. The terminal device according to claim 12, wherein performing the polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model comprises:
    calculating a first distortion parameter and a second distortion parameter of the optical distortion model from the coordinate data of the experimental image and the coordinate data of the distorted virtual image corresponding to the experimental image, the first distortion parameter being the coefficient fitting the distortion of the optical lens in a first direction, and the second distortion parameter being the coefficient fitting the distortion of the optical lens in a second direction; and
    constructing the optical distortion model from the first distortion parameter and the second distortion parameter.
  14. The terminal device according to claim 12, wherein after performing the step of reading the optical manufacturer data of the optical lens, the processor further performs the following step:
    adjusting the coordinate data of the distorted virtual image according to display parameters, the display parameters including at least one of a zoom ratio of the optical lens, a screen size, a pixel size, and an optical center position.
  15. A computer-readable storage medium, wherein the computer-readable storage medium stores program code that can be invoked by a processor to perform the following steps:
    acquiring target space coordinates of a target marker in real space;
    converting the target space coordinates into rendering coordinates in virtual space;
    acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object; and
    displaying the left-eye display content and the right-eye display content, the left-eye display content being projected onto a first optical lens and the right-eye display content onto a second optical lens, the first optical lens and the second optical lens respectively reflecting the left-eye display content and the right-eye display content to the human eyes.
  16. The computer-readable storage medium according to claim 15, wherein converting the target space coordinates into rendering coordinates in virtual space comprises:
    reading stored conversion parameters between a first spatial coordinate system and a second spatial coordinate system, the first spatial coordinate system being a spatial coordinate system in real space with a tracking camera as its origin, and the second spatial coordinate system being a spatial coordinate system in virtual space with a virtual camera as its origin; and
    converting the target space coordinates into the rendering coordinates in virtual space according to the conversion parameters.
  17. The computer-readable storage medium according to claim 16, wherein the virtual camera comprises a left virtual camera and a right virtual camera, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain the left-eye display content and the right-eye display content of the virtual object comprises:
    constructing and rendering the virtual object according to the data of the virtual object; and
    computing, according to the rendering coordinates, the pixel coordinates of the virtual object in the left virtual camera and in the right virtual camera respectively, to obtain the left-eye display content and the right-eye display content.
  18. The computer-readable storage medium according to claim 15, wherein displaying the left-eye display content and the right-eye display content comprises:
    processing the left-eye display content and the right-eye display content respectively according to an optical distortion model to obtain a left-eye pre-distorted image corresponding to the left-eye display content and a right-eye pre-distorted image corresponding to the right-eye display content, the optical distortion model being used to fit the optical distortion produced by the optical lenses; and
    displaying the left-eye pre-distorted image and the right-eye pre-distorted image, the left-eye pre-distorted image being projected onto the first optical lens and reflected by it to the human eye, and the right-eye pre-distorted image being projected onto the second optical lens and reflected by it to the human eye, to form an undistorted virtual image of the three-dimensional display content.
  19. The computer-readable storage medium according to claim 18, wherein the program code is further invoked by the processor to perform the step of constructing the optical distortion model, comprising:
    reading optical manufacturer data of the optical lens, the optical manufacturer data including coordinate data of an experimental image and coordinate data of the distorted virtual image corresponding to the experimental image; and
    performing a polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model.
  20. The computer-readable storage medium according to claim 19, wherein performing the polynomial fit on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain the optical distortion model comprises:
    calculating a first distortion parameter and a second distortion parameter of the optical distortion model from the coordinate data of the experimental image and the coordinate data of the distorted virtual image corresponding to the experimental image, the first distortion parameter being the coefficient fitting the distortion of the optical lens in a first direction, and the second distortion parameter being the coefficient fitting the distortion of the optical lens in a second direction; and
    constructing the optical distortion model from the first distortion parameter and the second distortion parameter.
  21. A data processing method, applied to a terminal device, the method comprising:
    displaying a virtual marker;
    when a user's alignment determination operation is detected, acquiring a first coordinate of a physical marker in a first spatial coordinate system, wherein the alignment determination operation indicates that the virtual marker is aligned with the physical marker, and the virtual marker corresponds to the physical marker;
    acquiring a second coordinate of the virtual marker in a second spatial coordinate system; and
    obtaining conversion parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinate of the physical marker and the second coordinate of the virtual marker corresponding to the physical marker.
  22. A terminal device, comprising a memory and a processor, the memory being coupled to the processor; the memory stores a computer program that, when executed by the processor, causes the processor to perform the following steps:
    displaying a virtual marker;
    when a user's alignment determination operation is detected, acquiring a first coordinate of a physical marker in a first spatial coordinate system, wherein the alignment determination operation indicates that the virtual marker is aligned with the physical marker, and the virtual marker corresponds to the physical marker;
    acquiring a second coordinate of the virtual marker in a second spatial coordinate system; and
    obtaining conversion parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinate of the physical marker and the second coordinate of the virtual marker corresponding to the physical marker.
  23. A computer-readable storage medium, wherein the computer-readable storage medium stores program code that can be invoked by a processor to perform the following steps:
    displaying a virtual marker;
    when a user's alignment determination operation is detected, acquiring a first coordinate of a physical marker in a first spatial coordinate system, wherein the alignment determination operation indicates that the virtual marker is aligned with the physical marker, and the virtual marker corresponds to the physical marker;
    acquiring a second coordinate of the virtual marker in a second spatial coordinate system; and
    obtaining conversion parameters between the first spatial coordinate system and the second spatial coordinate system based on the first coordinate of the physical marker and the second coordinate of the virtual marker corresponding to the physical marker.
  24. An optical distortion correction method, applied to a terminal device, the method comprising:
    acquiring coordinate data of an undistorted virtual image;
    obtaining a pre-distorted image to be displayed according to an optical distortion model and the coordinate data of the undistorted virtual image, the optical distortion model being used to fit the optical distortion produced by an optical lens; and
    displaying the pre-distorted image, the pre-distorted image being projected onto the optical lens and reflected by the optical lens to the human eye to form the undistorted virtual image.
  25. A terminal device, comprising a memory and a processor, the memory being coupled to the processor; the memory stores a computer program that, when executed by the processor, causes the processor to perform the following steps:
    acquiring coordinate data of an undistorted virtual image;
    obtaining a pre-distorted image to be displayed according to an optical distortion model and the coordinate data of the undistorted virtual image, the optical distortion model being used to fit the optical distortion produced by an optical lens; and
    displaying the pre-distorted image, the pre-distorted image being projected onto the optical lens and reflected by the optical lens to the human eye to form the undistorted virtual image.
  26. A computer-readable storage medium, wherein the computer-readable storage medium stores program code that can be invoked by a processor to perform the following steps:
    acquiring coordinate data of an undistorted virtual image;
    obtaining a pre-distorted image to be displayed according to an optical distortion model and the coordinate data of the undistorted virtual image, the optical distortion model being used to fit the optical distortion produced by an optical lens; and
    displaying the pre-distorted image, the pre-distorted image being projected onto the optical lens and reflected by the optical lens to the human eye to form the undistorted virtual image.
PCT/CN2019/104240 2018-09-03 2019-09-03 Three-dimensional display method, terminal device, and storage medium WO2020048461A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/731,094 US11380063B2 (en) 2018-09-03 2019-12-31 Three-dimensional distortion display method, terminal device, and storage medium

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201811020965.1A 2018-09-03 2018-09-03 Optical distortion correction method and apparatus, terminal device, and storage medium
CN201811023501.6A 2018-09-03 2018-09-03 Display method and apparatus, terminal device, and storage medium
CN201811023521.3 2018-09-03 2018-09-03
CN201811020965.1 2018-09-03 2018-09-03
CN201811023501.6 2018-09-03 2018-09-03
CN201811023521.3A 2018-09-03 2018-09-03 Data processing method and apparatus, terminal device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/731,094 Continuation US11380063B2 (en) 2018-09-03 2019-12-31 Three-dimensional distortion display method, terminal device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020048461A1 true WO2020048461A1 (zh) 2020-03-12

Family

ID=69721481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104240 2018-09-03 2019-09-03 Three-dimensional display method, terminal device, and storage medium WO2020048461A1 (zh)

Country Status (2)

Country Link
US (1) US11380063B2 (zh)
WO (1) WO2020048461A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11926064B2 (en) * 2020-12-10 2024-03-12 Mitsubishi Electric Corporation Remote control manipulator system and remote control assistance system
CN115249214A (zh) * 2021-04-28 2022-10-28 华为技术有限公司 Display system, display method, and vehicle-mounted system for binocular distortion correction
CN117058749B (zh) * 2023-08-17 2024-06-07 深圳市华弘智谷科技有限公司 Multi-camera see-through method and apparatus, smart glasses, and storage medium


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5814532B2 (ja) * 2010-09-24 2015-11-17 任天堂株式会社 Display control program, display control apparatus, display control system, and display control method
CN103258338A (zh) * 2012-02-16 2013-08-21 克利特股份有限公司 Method and system for driving a simulated virtual environment with real data
US9229228B2 (en) * 2013-12-11 2016-01-05 Honeywell International Inc. Conformal capable head-up display
WO2015139005A1 (en) * 2014-03-14 2015-09-17 Sony Computer Entertainment Inc. Methods and systems tracking head mounted display (hmd) and calibrations for hmd headband adjustments
KR20160034037A (ko) * 2014-09-19 2016-03-29 삼성전자주식会社 Method for screen capture and electronic device therefor
CN108171759A (zh) * 2018-01-26 2018-06-15 上海小蚁科技有限公司 Calibration method and apparatus for a dual-fisheye-lens panoramic camera, storage medium, and terminal
CN108830894B (zh) * 2018-06-19 2020-01-17 亮风台(上海)信息科技有限公司 Augmented-reality-based remote guidance method and apparatus, terminal, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101262830A (zh) * 2005-07-20 2008-09-10 布拉科成像S.P.A.公司 Method and system for mapping a virtual model of an object onto the object
US20150193980A1 (en) * 2014-01-06 2015-07-09 Qualcomm Incorporated Calibration of augmented reality (ar) optical see-through display using shape-based alignment
CN103792674A (zh) * 2014-01-21 2014-05-14 浙江大学 Apparatus and method for measuring and correcting distortion of a virtual reality display
CN106444023A (zh) * 2016-08-29 2017-02-22 北京知境科技有限公司 Transmissive augmented reality system with ultra-wide field of view for binocular stereoscopic display

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112710608A (zh) * 2020-12-16 2021-04-27 深圳晶泰科技有限公司 Experimental observation method and system
CN112710608B (zh) * 2020-12-16 2023-06-23 深圳晶泰科技有限公司 Experimental observation method and system

Also Published As

Publication number Publication date
US20200134927A1 (en) 2020-04-30
US11380063B2 (en) 2022-07-05

Similar Documents

Publication Publication Date Title
WO2020048461A1 (zh) Three-dimensional display method, terminal device, and storage medium
EP3614340B1 (en) Methods and devices for acquiring 3d face, and computer readable storage media
CN110809786B (zh) Calibration device, calibration chart, chart pattern generation device, and calibration method
US10269177B2 (en) Headset removal in virtual, augmented, and mixed reality using an eye gaze database
KR101761751B1 (ko) HMD calibration with direct geometric modeling
KR102170182B1 (ko) Distortion correction and alignment system using pattern projection, and method using the same
US9438878B2 (en) Method of converting 2D video to 3D video using 3D object models
CN110874135B (zh) Optical distortion correction method and apparatus, terminal device, and storage medium
JP2022528659A (ja) Projector keystone correction method, apparatus, and system, and readable storage medium
WO2019049331A1 (ja) Calibration device, calibration system, and calibration method
CN103839227B (zh) Fisheye image correction method and apparatus
KR20160116075A (ko) Image processing apparatus with automatic correction of images acquired from a camera, and method therefor
JPWO2012147363A1 (ja) Image generation device
JP2019083402A (ja) Image processing apparatus, image processing system, image processing method, and program
CN110874868A (zh) Data processing method and apparatus, terminal device, and storage medium
WO2017187694A1 (ja) Region-of-interest image generation device
KR101148508B1 (ko) Display apparatus and method for a mobile device display, and mobile device using the same
KR20190027079A (ko) Electronic device, control method therefor, and computer-readable recording medium
JP6552266B2 (ja) Image processing apparatus, image processing method, and program
CN110874867A (zh) Display method and apparatus, terminal device, and storage medium
CN110784693A (zh) Projector correction method and projection system using the same
JP2020191624A (ja) Electronic device and control method thereof
TW202029056A (zh) Disparity estimation from wide-angle images
GB2585197A (en) Method and system for obtaining depth data
CN114092668A (zh) Virtual-real fusion method, apparatus, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19858598

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30.07.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19858598

Country of ref document: EP

Kind code of ref document: A1