CN110874867A - Display method, display device, terminal equipment and storage medium


Info

Publication number
CN110874867A
CN110874867A (application CN201811023501.6A)
Authority
CN
China
Prior art keywords
virtual
display content
eye display
distortion
image
Prior art date
Legal status
Pending
Application number
CN201811023501.6A
Other languages
Chinese (zh)
Inventor
黄嗣彬
戴景文
贺杰
Current Assignee
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201811023501.6A
Priority to PCT/CN2019/104240 (published as WO2020048461A1)
Priority to US16/731,094 (published as US11380063B2)
Publication of CN110874867A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application disclose a display method, a display apparatus, a terminal device, and a storage medium. The display method includes: acquiring target space coordinates of a target marker in real space; converting the target space coordinates into rendering coordinates in a virtual space; acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object; and displaying the left-eye display content and the right-eye display content, where the left-eye display content is projected onto a first optical lens, the right-eye display content is projected onto a second optical lens, and the first and second optical lenses reflect the left-eye and right-eye display content into the user's eyes, respectively. The method achieves aligned, three-dimensional display of the virtual object with the target marker.

Description

Display method, display device, terminal equipment and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a display method, an apparatus, a terminal device, and a storage medium.
Background
In recent years, with the progress of science and technology, Augmented Reality (AR) and related technologies have become research hotspots at home and abroad. Augmented reality enhances a user's perception of the real world through information provided by a computer system: computer-generated virtual objects, scenes, or content such as system prompts are superimposed on the real scene to augment or modify the perception of the real-world environment, or of data representing that environment. When a device displays virtual content, achieving three-dimensional display of the virtual content that matches the real scene is an urgent problem to be solved.
Disclosure of Invention
Embodiments of the present application provide a display method, a display apparatus, a terminal device, and a storage medium, which enable aligned display of three-dimensional virtual content with a real object.
In a first aspect, an embodiment of the present application provides a display method applied to a terminal device. The method includes: acquiring target space coordinates of a target marker in real space; converting the target space coordinates into rendering coordinates in a virtual space; acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object; and displaying the left-eye display content and the right-eye display content, where the left-eye display content is projected onto a first optical lens, the right-eye display content is projected onto a second optical lens, and the first and second optical lenses reflect the left-eye and right-eye display content to the user's eyes, respectively.
In a second aspect, an embodiment of the present application provides a display apparatus applied to a terminal device. The apparatus includes a space coordinate acquisition module, a space coordinate conversion module, a virtual object rendering module, and an object display module. The space coordinate acquisition module acquires target space coordinates of a target marker in real space; the space coordinate conversion module converts the target space coordinates into rendering coordinates in a virtual space; the virtual object rendering module acquires data of a virtual object to be displayed and renders the virtual object according to the data and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object; the object display module displays the left-eye display content and the right-eye display content, where the left-eye display content is projected onto a first optical lens, the right-eye display content is projected onto a second optical lens, and the first and second optical lenses reflect the left-eye and right-eye display content to the user's eyes, respectively.
In a third aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors to perform the display method provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be called by a processor to execute the display method provided in the first aspect.
According to the above scheme, the target space coordinates of the target marker in real space are acquired and converted into rendering coordinates in the virtual space; the data of the virtual object to be displayed is acquired, and the virtual object is rendered according to the data and the rendering coordinates to obtain the left-eye display content and the right-eye display content of the virtual object; finally, the left-eye display content and the right-eye display content are displayed, with the left-eye display content projected onto the first optical lens and the right-eye display content projected onto the second optical lens, the two lenses reflecting the display content to the user's eyes respectively. Aligned, three-dimensional display of the virtual object with the target marker is thereby achieved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application scenario suitable for an embodiment of the present application.
Fig. 2 shows a schematic diagram of a scene provided in an embodiment of the present application.
Fig. 3 shows a schematic diagram of another scene provided in an embodiment of the present application.
Fig. 4 shows a schematic diagram of yet another scene provided in an embodiment of the present application.
Fig. 5 shows a flowchart of a display method according to one embodiment of the present application.
Fig. 6 shows a schematic diagram of a display effect provided according to an embodiment of the present application.
Fig. 7 shows a flowchart of a display method according to another embodiment of the present application.
Fig. 8 shows a schematic diagram of a usage scenario provided in accordance with an embodiment of the present application.
Fig. 9 shows a schematic diagram of another usage scenario provided in accordance with an embodiment of the present application.
Fig. 10 shows a flowchart of step S240 in the display method according to an embodiment of the present application.
Fig. 11 shows a block diagram of a display apparatus according to one embodiment of the present application.
Fig. 12 shows a block diagram of a terminal device for executing a display method according to an embodiment of the present application.
Fig. 13 shows a storage unit for storing or carrying program code implementing a display method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
An application scenario of the display method provided in the embodiment of the present application is described below.
Referring to fig. 1, a schematic diagram of an application scenario of the display method provided in the embodiment of the present application is shown, where the application scenario includes a display system 10. The display system 10 includes: a terminal device 100 and a tag 200.
In the embodiment of the present application, the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated (standalone) head-mounted display device. The terminal device 100 may also be an intelligent terminal, such as a mobile phone, connected to an external head-mounted display device; that is, the terminal device 100 may be inserted into or connected to the external head-mounted display device, acting as its processing and storage unit and driving the display of virtual content in the head-mounted display device.
In the embodiment of the present application, when the marker 200 is located within the field of view of the terminal device 100, the terminal device 100 may capture an image containing the marker 200 and recognize it to obtain spatial position information, such as the position and orientation of the marker 200, as well as a recognition result such as the marker's identity information. It should be understood that the specific form of the marker 200 is not limited in this embodiment; it only needs to be identifiable and trackable by the terminal device.
In an embodiment of the present application, the head-mounted display device may include a first optical lens and a second optical lens. The first optical lens directs light emitted by the image source to the left-eye observation position, so that the display content corresponding to the left eye enters the user's left eye; the second optical lens directs light emitted by the image source to the right-eye observation position, so that the display content corresponding to the right eye enters the user's right eye, thereby achieving three-dimensional display. The image source, which displays the images, may be the display screen of the head-mounted display device or the display screen of an intelligent terminal connected to it.
In the embodiment of the present application, referring to fig. 2, when the displayed virtual marker is aligned with the physical marker 306, the conversion parameters between the real-space coordinate system and the virtual-space coordinate system are obtained from the coordinates of the physical marker 306 in the real-space coordinate system, as recognized by the tracking camera 301, and the coordinates of the virtual marker in the virtual-space coordinate system.
Because of the optical lens, a displayed image is distorted when it forms a virtual image; the displayed image can therefore be pre-distorted before display, achieving the effect of distortion correction. For example, as shown in fig. 3, a normal, undistorted real image 311 would form a distorted virtual image 312 after being displayed through an optical lens. Instead, the desired undistorted virtual image 314 is obtained first and pre-distorted to produce a pre-distorted image 313; the pre-distorted image 313 is then displayed, and after undergoing the optical distortion of the optical lens, it forms the undistorted virtual image 314.
Referring to fig. 4, when performing aligned stereoscopic display of virtual content and physical content, a tracking target carrying a marker may be identified by the tracking camera 301, and its coordinates in the real-space coordinate system with the tracking camera 301 as the origin are obtained. Coordinate conversion is then performed: according to the conversion parameters between the real-space coordinate system and the virtual-space coordinate system, the tracking target's coordinates are converted into rendering coordinates in the coordinate system with the virtual camera 304 as the origin in the virtual space. A left-eye display image and a right-eye display image are generated according to the rendering coordinates; left-eye pre-distortion is applied to the left-eye display image to obtain a left-eye pre-distorted image, and right-eye pre-distortion is applied to the right-eye display image to obtain a right-eye pre-distorted image. After being displayed on the display screen 303, the left-eye and right-eye pre-distorted images are projected to the user's eyes through the optical lens 302, forming an undistorted left-eye virtual image and an undistorted right-eye virtual image, which the user's brain fuses into a three-dimensional image. Aligned, stereoscopic, and distortion-free display of virtual content with physical content is thereby achieved.
The following describes the embodiments of the present application in detail.
Referring to fig. 5, an embodiment of the present application provides a display method, which is applicable to a terminal device, and the display method may include:
step S110: and acquiring the target space coordinates of the target marker in the real space.
In this embodiment of the application, to achieve aligned display of a virtual object with a physical target marker, the target space coordinates of the target marker in real space may be acquired. The target space coordinates may represent the positional relationship between the target marker and the tracking camera on the head-mounted display device, or the positional relationship between the target marker and the terminal device.
The target marker may include at least one sub-marker, and a sub-marker may be a pattern having a certain shape. In one embodiment, each sub-marker may have one or more feature points; the shape of a feature point is not limited and may be a dot, a ring, a triangle, or another shape. In addition, the distribution rules of the sub-markers differ between target markers, so each target marker can carry different identity information. The terminal device may acquire the identity information corresponding to the target marker by recognizing the sub-markers included in it; the identity information may be, but is not limited to, a code that uniquely identifies the target marker.
In one embodiment, the outline of the target marker may be a rectangle, although other shapes are possible; the rectangular region and the plurality of sub-markers within it constitute one target marker. The target marker may also be a light-emitting object composed of light spots, where the light spots emit light of different wavelength bands or colors, and the terminal device acquires the identity information corresponding to the target marker by identifying the wavelength band or color of the emitted light. The specific form of the target marker is not limited in this embodiment; it only needs to be recognizable by the terminal device.
After capturing an image containing the target marker, the terminal device may recognize the image to obtain a recognition result of the target marker. The recognition result may include the spatial position of the target marker relative to the terminal device, the identity information of the target marker, and the like. The spatial position may include the position and attitude information of the target marker relative to the terminal device, where the attitude information is the orientation and rotation angle of the target marker relative to the terminal device. From this, the target space coordinates of the target marker can be obtained in a first spatial coordinate system in real space whose origin is the tracking camera of the terminal device, the tracking camera being the image acquisition device with which the terminal device tracks real objects.
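The patent does not prescribe how the pose is recovered from the captured image; one common way to obtain a marker's position and attitude relative to the tracking camera is a perspective-n-point solve over the marker's feature points. The sketch below illustrates this with OpenCV; the function and variable names are illustrative assumptions, not details from the patent.

```python
# Sketch: estimating the target marker's coordinates in the first spatial
# coordinate system (origin at the tracking camera) via a PnP solve.
# Assumes the marker's feature-point layout is known in marker-local units.
import cv2
import numpy as np

def marker_pose_in_camera(marker_points_3d, image_points_2d,
                          camera_matrix, dist_coeffs):
    """Return the marker's rotation and translation relative to the camera."""
    ok, rvec, tvec = cv2.solvePnP(marker_points_3d, image_points_2d,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("marker pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 attitude (orientation/rotation)
    return rotation, tvec              # tvec: the target space coordinates
```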
Step S120: the target space coordinates are converted to rendering coordinates in virtual space.
After the target space coordinates of the target marker in the first space coordinate system in the real space are acquired, the target space coordinates may be converted into rendering coordinates in the virtual space, so as to generate and display the display content corresponding to the virtual object.
In this embodiment of the present application, converting the target space coordinates into rendering coordinates in a virtual space may include:
reading the stored conversion parameters of a first space coordinate system and a second space coordinate system, wherein the second space coordinate system is a space coordinate system which takes a virtual camera as an origin in a virtual space; and converting the target space coordinates into rendering coordinates in the virtual space according to the conversion parameters.
In the embodiment of the present application, the target space coordinates may be converted using the conversion parameters between the first and second spatial coordinate systems to obtain the rendering coordinates. The conversion parameters align the first spatial coordinate system with the second and enable conversion between them; they are the parameters of a conversion formula between the two coordinate systems, and substituting the target space coordinates and the conversion parameters into this formula yields the rendering coordinates in the second spatial coordinate system of the virtual space. The virtual camera is a camera used in a 3D software system to simulate the perspective of the human eye. Following the motion of the virtual camera (i.e., head motion), the motion of a virtual object in the virtual space is tracked, and the virtual object is rendered and projected onto the optical lens to achieve three-dimensional display.
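As a concrete illustration, if the stored conversion parameters are taken to be a rotation matrix R and a translation vector t (the patent describes them only as rotation and translation parameters), the conversion of step S120 reduces to a single affine transform. A minimal sketch under that assumption:

```python
import numpy as np

def to_rendering_coords(target_coords, R, t):
    """Convert target space coordinates (first spatial coordinate system,
    tracking-camera origin) into rendering coordinates (second spatial
    coordinate system, virtual-camera origin): p2 = R @ p1 + t."""
    return R @ np.asarray(target_coords, dtype=float) + np.asarray(t, dtype=float)
```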
Step S130: and acquiring data of the virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object.
After converting the target space coordinates of the target marker in the first spatial coordinate system of real space into rendering coordinates in the second spatial coordinate system of the virtual space, the data of the virtual object to be displayed may be acquired, and the virtual object rendered according to that data and the rendering coordinates. The data corresponding to the virtual object may include model data of the virtual object, i.e., the data used for rendering it. For example, the model data may include the colors, model vertex coordinates, and model contour data used to build the model corresponding to the virtual object.
In this embodiment, the virtual camera includes a left virtual camera and a right virtual camera. The left virtual camera simulates the human left eye, and the right virtual camera simulates the human right eye. Rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain the left-eye display content and the right-eye display content includes:
constructing and rendering a virtual object according to the data of the virtual object; and respectively calculating the corresponding pixel coordinates of the virtual object in the left virtual camera and the right virtual camera according to the rendering coordinates to obtain left-eye display content and right-eye display content.
It can be understood that the virtual object can be constructed and rendered from the data described above. From the rendering coordinates and the constructed virtual object, the spatial coordinates of each point of the virtual object in the second spatial coordinate system of the virtual space can be obtained. Substituting these spatial coordinates into the conversion formula between the pixel coordinate system of the left virtual camera and the second spatial coordinate system yields the pixel coordinates of each point of the virtual object in the left virtual camera; the left-eye display content is then obtained from the pixel value of each point of the virtual object and its pixel coordinates in the left virtual camera. Similarly, substituting the spatial coordinates into the conversion formula between the pixel coordinate system of the right virtual camera and the second spatial coordinate system yields the pixel coordinates of each point in the right virtual camera, from which the right-eye display content is obtained.
After the virtual object is rendered, left-eye display content and right-eye display content with parallax corresponding to the virtual object can be obtained, so that a stereoscopic display effect during display is achieved.
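As an illustration of how the left-eye and right-eye pixel coordinates might be computed, the sketch below uses a pinhole projection with the two virtual cameras offset horizontally by half a virtual interpupillary distance; the offset is what produces the parallax mentioned above. The pinhole model and all names here are assumptions for illustration, not details fixed by the patent.

```python
import numpy as np

def project_to_eye(point_vcs, K, eye_offset_x):
    """Project a point given in the second spatial coordinate system into one
    virtual camera's pixel coordinates (simple pinhole model, intrinsics K)."""
    p = np.asarray(point_vcs, dtype=float)
    p = p - np.array([eye_offset_x, 0.0, 0.0])   # shift into that eye's frame
    u = K[0, 0] * p[0] / p[2] + K[0, 2]
    v = K[1, 1] * p[1] / p[2] + K[1, 2]
    return u, v

# Left and right display content come from the same rendering coordinates
# viewed by two horizontally offset virtual cameras:
#   left_uv  = project_to_eye(p, K, -ipd / 2.0)
#   right_uv = project_to_eye(p, K, +ipd / 2.0)
```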
Step S140: the display method comprises the steps of displaying left eye display content and right eye display content, wherein the left eye display content is used for projecting to a first optical lens, the right eye display content is used for projecting to a second optical lens, and the first optical lens and the second optical lens are respectively used for reflecting the left eye display content and the right eye display content to human eyes.
After the left-eye display content and the right-eye display content of the virtual object are obtained, they can be displayed. Specifically, the left-eye display content may be projected onto the first optical lens of the head-mounted display device and, after being reflected by the first optical lens, enters the user's left eye. The right-eye display content is projected onto the second optical lens of the head-mounted display device and, after being reflected by the second optical lens, enters the user's right eye.
After the left-eye display content and the right-eye display content are displayed, with the left-eye display content projected to the user's left eye and the right-eye display content projected to the user's right eye, the user sees left-eye and right-eye display content with parallax, which the brain fuses into three-dimensional display content. Aligned display of the virtual object with the target marker and three-dimensional display of the virtual object are thereby achieved. For example, as shown in fig. 6, after the left-eye and right-eye display content is displayed, the stereoscopic virtual object 900 can be seen displayed in alignment with the target marker 700.
According to the display method provided by this embodiment, the target space coordinates of the target marker in real space are acquired and converted into rendering coordinates in the virtual space; the virtual object is rendered according to its data and the rendering coordinates to obtain the left-eye and right-eye display content; and finally the left-eye and right-eye display content is displayed, so that the left-eye display content enters the user's left eye and the right-eye display content enters the user's right eye, achieving aligned, three-dimensional display of the virtual object with the target marker.
Referring to fig. 7, another embodiment of the present application provides a display method, which can be applied to a terminal device, and the method can include:
step S210: displaying the virtual marker.
In the embodiment of the present application, achieving aligned display of virtual content with physical content requires conversion parameters between the spatial coordinate systems. To acquire these conversion parameters, a virtual marker can be displayed while a physical marker is placed in the real scene within the field of view of the terminal device, so that aligned display of the virtual marker with the physical marker can subsequently be achieved. The field of view of the terminal device refers to the field of view of its image acquisition device, which may be determined by the size of the field of view.
The virtual marker may be stored in the terminal device in advance and is identical to the physical marker; that is, the pattern of the virtual marker matches the shape and size of the physical marker.
When the virtual marker is displayed, the left-eye content corresponding to the virtual marker is projected onto the left-eye optical lens and reflected into the user's left eye, and the right-eye content corresponding to the virtual marker is projected onto the right-eye optical lens and reflected into the user's right eye, achieving three-dimensional display of the virtual marker. When viewing the displayed virtual marker, the user sees it superimposed on the real scene in which the physical marker is located.
In the embodiment of the application, the terminal device is a head-mounted display device, or is arranged on a head-mounted display device. Before the virtual marker is displayed, the optical distortion correction parameters of the head-mounted display device may be determined to ensure that the virtual marker is displayed properly, i.e., without distortion.
The optical distortion correction parameters may be verified by displaying a preset image, such as a checkerboard image, to the user. When the user confirms that the displayed preset image is undistorted, the user performs a confirmation operation for the optical distortion correction parameters; when the terminal device detects this confirmation, it determines that the current optical distortion correction parameters are accurate. In this embodiment, after the virtual marker is displayed, if the user observes that it is not aligned with the physical marker, the physical marker can be moved until alignment is observed, at which point an alignment determination operation is performed on the terminal device.
After the virtual marker is displayed, the user observes it superimposed on the real scene in which the physical marker is located. At this point the virtual marker and the physical marker may be misaligned, as shown for example in fig. 8, where the physical marker 500 and the virtual marker 600 are misaligned; or they may be aligned, as shown in fig. 9, where the physical marker 500 is aligned with the virtual marker 600. Here, alignment means that the positions of the virtual marker and the physical marker coincide in the virtual space, which can also be understood as the virtual marker and the physical marker overlapping in the user's visual perception.
Further, the virtual marker may be aligned with the physical marker by controlling the movement of the physical marker. In the embodiment of the application, the physical marker is arranged on a controllable moving mechanism, which is connected to the terminal device.
In an embodiment of the present application, the display method may further include:
and when the movement control operation of the user is detected, sending a movement instruction to the controllable moving mechanism, wherein the movement instruction is used for instructing the controllable moving mechanism to move according to the movement control operation.
It can be understood that the user can perform a movement control operation on the terminal device to control the movement of the controllable moving mechanism and thereby move the marker. When the movement control operation is detected, a movement instruction can be sent to the controllable moving mechanism so that it moves accordingly, ultimately bringing the physical marker into alignment with the virtual marker. The movement control operation may be performed via a key or touch screen of the terminal device, or via a controller connected to the terminal device; the specific manner of operation is not limited in this embodiment.
Step S220: when an alignment determination operation by the user is detected, acquiring first coordinates of the physical marker in the first spatial coordinate system, where the alignment determination operation indicates that the virtual marker is aligned with the physical marker, the virtual marker corresponding to the physical marker.
When the user observes that the virtual marker is not aligned with the physical marker, the physical marker can be moved until alignment is observed, and an alignment determination operation is then performed on the terminal device.
When the user observes that the virtual marker is aligned with the physical marker, an alignment determination operation can be performed on the terminal device; this operation indicates that the virtual marker is aligned with the physical marker, realizing their aligned display.
In this embodiment of the application, the alignment determination operation may be performed via a key or touch screen of the terminal device, or via a controller connected to the terminal device; the specific manner of operation is not limited in this embodiment.
On detecting the alignment determination operation made by the user, the terminal device determines that the virtual marker is aligned with the physical marker at that moment, and determines the conversion parameters between the first and second spatial coordinate systems from the current coordinates of the physical marker in the first spatial coordinate system in real space and the coordinates of the currently displayed virtual marker in the second spatial coordinate system in the virtual space.
In the embodiment of the present application, the first spatial coordinate system is the coordinate system in real space with the tracking camera as the origin, and the second spatial coordinate system is the coordinate system in the virtual space with the virtual camera as the origin. The tracking camera is the image acquisition device of the terminal device, and the virtual camera is a camera used in the 3D software system to simulate the perspective of the human eye. Following the motion of the virtual camera (i.e., head motion), the motion of a virtual object in the virtual space is tracked, and the virtual object is rendered and projected onto the optical lens to achieve three-dimensional display.
In an embodiment of the present application, a first coordinate of a physical marker in a first spatial coordinate system may be obtained when an alignment determination operation by a user is detected.
The physical marker may include at least one sub-marker, and a sub-marker may be a pattern having a certain shape. In one embodiment, each sub-marker may have one or more feature points; the shape of a feature point is not limited and may be a dot, a ring, a triangle, or another shape. In addition, the distribution rules of the sub-markers differ between physical markers, so each physical marker can carry different identity information. The terminal device may acquire the identity information corresponding to the physical marker by recognizing the sub-markers included in it; the identity information may be, but is not limited to, a code that uniquely identifies the physical marker.
In one embodiment, the outline of the physical marker may be a rectangle, although other shapes are possible and not limited here; the rectangular region and the plurality of sub-markers within it constitute one physical marker. The physical marker may also be a light-emitting object composed of light spots, where the light spots emit light of different wavelength bands or colors, and the terminal device acquires the identity information corresponding to the physical marker by identifying the wavelength band or color of the emitted light. The specific form of the physical marker is not limited in this embodiment; it only needs to be recognizable by the terminal device.
After capturing an image containing the physical marker, the terminal device may recognize the image to obtain a recognition result of the physical marker. The recognition result may include the spatial position of the physical marker relative to the terminal device, the identity information of the physical marker, and the like. The spatial position may include the position and attitude information of the physical marker relative to the terminal device, where the attitude information is the orientation and rotation angle of the physical marker relative to the terminal device. From this, the first coordinates of the physical marker in the first spatial coordinate system can be obtained.
In the embodiment of the present application, when obtaining the conversion relationship between the first and second spatial coordinate systems from the first coordinates of the physical markers and the second coordinates of the virtual markers, the relationship needs to be calculated from the first coordinates of multiple physical markers in the first spatial coordinate system and the second coordinates of multiple virtual markers in the second spatial coordinate system, where the physical markers and the virtual markers are in one-to-one correspondence; that is, each physical marker is aligned with one virtual marker.
Therefore, acquiring the first coordinates of the physical markers in the first spatial coordinate system when the alignment determination operation is detected may mean acquiring the first coordinates of all physical markers when an alignment determination operation indicating that the multiple physical markers are aligned with the multiple virtual markers is detected.
In this embodiment of the application, before the image acquisition device of the terminal device captures the image containing the physical marker to determine its first coordinates in the first spatial coordinate system, the image acquisition device may be calibrated to ensure that accurate coordinates of the physical marker are obtained.
Step S230: second coordinates of the virtual marker in a second spatial coordinate system are acquired.
In this embodiment, the terminal device also needs to acquire the second coordinates of the virtual markers in the second spatial coordinate system, which can be obtained by tracking the virtual markers with the virtual camera. The second coordinates corresponding to the virtual markers are thus obtained, with the virtual markers corresponding one-to-one to the physical markers.
In this embodiment of the application, after obtaining the first coordinates of the physical markers in the first spatial coordinate system and the second coordinates of the virtual markers in the second spatial coordinate system, the first coordinate of each physical marker and the second coordinate of its corresponding virtual marker may be stored as a coordinate pair, following the one-to-one correspondence between physical and virtual markers, for the subsequent calculation of the conversion parameters. For example, if physical marker A corresponds to virtual marker a and physical marker B corresponds to virtual marker b, the first coordinate of A and the second coordinate of a are stored as one coordinate pair, and the first coordinate of B and the second coordinate of b are stored as another coordinate pair.
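For example, the stored pairs might simply be kept as a list aligned by the marker correspondence, ready to be fed into the parameter solve of step S240. The container, names, and coordinate values below are illustrative placeholders only.

```python
# Sketch: storing first/second coordinates as coordinate pairs following the
# one-to-one marker correspondence (physical A <-> virtual a, B <-> b).
first_coords = {"A": (0.10, 0.02, 0.50), "B": (0.25, 0.02, 0.55)}   # real space
second_coords = {"a": (0.09, 0.03, 0.51), "b": (0.24, 0.03, 0.56)}  # virtual space
correspondence = {"A": "a", "B": "b"}

coordinate_pairs = [(first_coords[p], second_coords[v])
                    for p, v in correspondence.items()]
```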
Step S240: and acquiring a conversion parameter between the first space coordinate system and the second space coordinate system based on the first coordinate of the solid marker and the second coordinate of the virtual marker corresponding to the solid marker.
After the first coordinates of the physical markers and the second coordinates of the corresponding virtual markers are obtained, the conversion parameters between the first and second spatial coordinate systems can be calculated. The conversion parameters may include a rotation parameter and a translation parameter.
In the embodiment of the present application, please refer to fig. 10, step S240 may include:
step S241: and establishing a conversion formula between the first space coordinate system and the second space coordinate system according to the attitude transformation algorithm, wherein the conversion formula comprises a rotation parameter and a translation parameter.
In the embodiment of the present application, to calculate the conversion parameters from the first coordinates of the physical markers and the second coordinates of the virtual markers, a conversion formula between the first and second spatial coordinate systems may first be established.
Specifically, the conversion formula may be established according to an attitude transformation algorithm, which may be a rigid-body transformation estimation algorithm, a PnP algorithm, a DCM algorithm, or a POSIT algorithm; the specific attitude transformation algorithm is not limited in the embodiments of the present application.
The conversion formula expresses the relationship between coordinates in the first spatial coordinate system and coordinates in the second spatial coordinate system and contains the conversion parameters. It may express coordinates in the second spatial coordinate system in terms of coordinates in the first spatial coordinate system and the conversion parameters, or vice versa.
Further, the conversion formula may express the matrix formed by coordinates in the second spatial coordinate system as the product of the matrix formed by coordinates in the first spatial coordinate system and a matrix formed by the conversion parameters, where the parameter matrix contains the rotation parameter and the translation parameter.
Step S242: and acquiring coordinate pairs with the number larger than a preset value, and substituting the acquired coordinate pairs into a conversion formula to obtain rotation parameters and translation parameters between the first space coordinate system and the second space coordinate system.
In the embodiment of the present application, after obtaining the transformation formula between the first spatial coordinate system and the second spatial coordinate system, the transformation parameter in the transformation formula may be solved by using the first coordinate of the physical marker and the second coordinate of the virtual marker corresponding to the physical marker.
Specifically, a preset number of stored coordinate pairs of first coordinates and corresponding second coordinates may be read and substituted into the conversion formula, and the conversion parameters solved to obtain the rotation parameter and the translation parameter. The preset number depends on the conversion formula established by the particular attitude transformation algorithm; for example, when the formula is established according to a rigid-body transformation estimation algorithm, the preset number may be 4. The specific preset number is not limited in the embodiment of the present application.
It can be understood that in each coordinate pair, a first coordinate in the first spatial coordinate system corresponds to a second coordinate in the second spatial coordinate system. Substituting a coordinate pair into the conversion formula means substituting its first coordinate into the matrix formed by coordinates in the first spatial coordinate system and its second coordinate into the matrix formed by coordinates in the second spatial coordinate system. After the preset number of coordinate pairs have been substituted, the matrix formed by the conversion parameters can be solved, yielding the rotation parameter and the translation parameter between the first and second spatial coordinate systems.
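The patent leaves the choice of attitude transformation algorithm open; one standard instantiation of the rigid-body transformation estimation it mentions is the SVD-based (Kabsch) solve sketched below, which recovers the rotation and translation parameters from the stored coordinate pairs. A sketch under that assumption:

```python
import numpy as np

def estimate_rigid_transform(coordinate_pairs):
    """Solve R, t with  second ~= R @ first + t  from >= 4 coordinate pairs
    (first coordinate in the first spatial coordinate system, second
    coordinate in the second spatial coordinate system)."""
    A = np.array([p[0] for p in coordinate_pairs], dtype=float)  # N x 3
    B = np.array([p[1] for p in coordinate_pairs], dtype=float)  # N x 3
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # rotation parameter
    t = cb - R @ ca                          # translation parameter
    return R, t
```

With R and t in hand, the earlier `to_rendering_coords` sketch performs the conversion used in steps S120 and S260.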
In an embodiment of the present application, after obtaining the conversion parameter between the first spatial coordinate system and the second spatial coordinate system, the display method may further include:
and finely adjusting the first camera parameter of the tracking camera and/or the second camera parameter of the virtual camera.
It can be understood that, owing to refraction at the optical lens and errors in the attitude transformation algorithm, virtual content displayed using the above conversion parameters may not align completely with the real content. Therefore, the first camera parameters of the tracking camera (the image acquisition device) and/or the second camera parameters of the virtual camera can be fine-tuned so that the virtual content aligns completely with the real content when displayed using the conversion parameters. Specifically, the tilt angle, depth, and similar parameters of the tracking camera and/or the virtual camera may be adjusted.
Step S250: target spatial coordinates of the target marker in a first spatial coordinate system are acquired.
After the conversion parameter between the first space coordinate system in the real space and the second space coordinate system in the virtual space is acquired, the aligned display of the virtual content and the real content can be realized according to the conversion parameter.
In the embodiment of the present application, the target space coordinates of the target marker in the first spatial coordinate system, i.e., the coordinates of the target marker in the real-space coordinate system with the tracking camera as the origin, may be obtained. The target marker is used for displaying the virtual object, i.e., the virtual object is displayed in alignment with it. The target marker is similar to the physical marker used for calibration: the terminal device may capture an image containing the target marker and recognize it to obtain the target space coordinates of the target marker in the first spatial coordinate system.
Step S260: the target spatial coordinates are converted to rendering coordinates in a second spatial coordinate system using the conversion parameters.
After the target space coordinates of the target marker in the first spatial coordinate system are acquired, the stored conversion parameters may be used to convert them into coordinates in the second spatial coordinate system, i.e., the virtual-space coordinate system with the virtual camera as the origin, so that the display content of the virtual object can be generated accordingly.
Specifically, the target spatial coordinates of the target marker in the first spatial coordinate system and the conversion parameters may be substituted into a conversion formula between the first spatial coordinate system and the second spatial coordinate system, and the rendering coordinates in the second spatial coordinate system may be calculated.
Step S270: and acquiring data of the virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object.
After the target space coordinates of the target marker in the first space coordinate system are converted into rendering coordinates in the second space coordinate system, data of the virtual object to be displayed can be acquired, and the virtual object is rendered according to the data of the virtual object and the rendering coordinates. The data corresponding to the virtual object to be displayed may include model data of the virtual object, where the model data is data used for rendering the virtual object. For example, the model data may include colors, model vertex coordinates, model contour data, etc. used to build a model corresponding to the virtual object.
For a specific method for obtaining left-eye display content and right-eye display content, reference may be made to the above embodiments, and details are not repeated here. After the virtual object is rendered, left-eye display content and right-eye display content with parallax corresponding to the virtual object can be obtained, so that a stereoscopic display effect during display is achieved.
Step S280: and obtaining a left eye pre-distortion image corresponding to the left eye display content and a right eye pre-distortion image corresponding to the right eye display content according to the optical distortion model, the left eye display content and the right eye display content, wherein the optical distortion model is used for fitting optical distortion generated by the optical lens.
When the head-mounted display device displays content, the displayed image is distorted by the device's optical system. If the left-eye and right-eye display content were displayed directly, the user would see a distorted virtual image of the virtual object. For example, referring again to fig. 3, the real image 311 forms a distorted virtual image 312 after being displayed.
Therefore, when the left-eye display content and the right-eye display content are displayed, the left-eye display content and the right-eye display content may be pre-distorted and displayed so that a user can see a virtual image of a virtual object without distortion.
In this embodiment of the application, the left-eye display content may be subjected to inverse distortion processing according to a stored optical distortion model to obtain the corresponding left-eye pre-distorted image, and the right-eye display content may likewise be processed to obtain the corresponding right-eye pre-distorted image. The optical distortion model is used to fit the optical distortion of the optical lens of the head-mounted display device, and may take the following form:

[Optical distortion model equation, rendered as image BDA0001787654600000121 in the original publication; it relates the real-image coordinates (X, Y) to the virtual-image coordinates through the distortion parameters and matrices defined below.]

where X is the abscissa of the real image, Y is the ordinate of the real image, A is a first distortion parameter, B is a second distortion parameter, I1 is a matrix fitting the transverse radial distortion or transverse barrel distortion of the optical lens, I2 is a matrix fitting the transverse tangential distortion of the optical lens, I3 is a matrix fitting the longitudinal radial distortion or longitudinal barrel distortion of the optical lens, and I4 is a matrix fitting the longitudinal tangential distortion of the optical lens; I1 contains the abscissa of the virtual image, I2 contains the abscissa and ordinate of the virtual image, I3 contains the ordinate of the virtual image, and I4 contains the abscissa and ordinate of the virtual image.
In the embodiment of the present application, the correspondence between the optical distortion model and the optical parameters of the optical lens may also be stored, that is, the optical distortion models corresponding to different optical parameters are stored, and when the optical distortion model is read to perform pre-distortion on an image to be displayed, the corresponding optical distortion model may be read according to the optical parameters of the optical lens.
When the left-eye and right-eye display content of the virtual object is pre-distorted, the stored optical distortion model may be read. The coordinate data of the left-eye display content is treated as the coordinate data of the virtual image and substituted into the optical distortion model to calculate the corresponding screen coordinate data; the left-eye pre-distorted image to be displayed can then be generated from the screen coordinate data and the pixels of the left-eye display content, the left-eye pre-distorted image corresponding to the left-eye display content.
Similarly, the coordinate data of the right-eye display content is treated as the coordinate data of the virtual image and substituted into the optical distortion model to calculate the corresponding screen coordinate data; the right-eye pre-distorted image to be displayed can then be generated from the screen coordinate data and the pixels of the right-eye display content, the right-eye pre-distorted image corresponding to the right-eye display content.
In addition, in this embodiment of the application, when the screen coordinate data obtained from the optical distortion model contains non-integer coordinates, these must be converted to integer coordinates before the pre-distorted image can be generated. A pixel interpolation method may therefore be used to convert the non-integer coordinates in the screen data into integer coordinates. Specifically, the integer pixel coordinate closest to each non-integer coordinate may be acquired, and the non-integer coordinate replaced with that pixel coordinate (i.e., nearest-neighbor interpolation).
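The forward mapping described above can be sketched as follows, reusing the hypothetical optical_distortion_model from the previous sketch: each pixel of the display content is treated as a virtual-image point, mapped to a screen coordinate, rounded to the nearest integer pixel, and written to the pre-distorted image.

```python
import numpy as np

def predistort(content, A, B):
    """Generate a pre-distorted image from display content (H x W x 3 array).

    Simple forward mapping with nearest-neighbor rounding; pixels whose
    mapped coordinates fall outside the screen are dropped, and any holes
    are left black in this sketch.
    """
    h, w = content.shape[:2]
    out = np.zeros_like(content)
    for y in range(h):
        for x in range(w):
            X, Y = optical_distortion_model(x, y, A, B)  # screen coordinate
            xi, yi = int(round(X)), int(round(Y))        # nearest integer pixel
            if 0 <= xi < w and 0 <= yi < h:
                out[yi, xi] = content[y, x]
    return out

# Hypothetical usage, assuming left/right display content and fitted A, B exist:
# left_predistorted = predistort(left_eye_content, A, B)
# right_predistorted = predistort(right_eye_content, A, B)
```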
Step S290: displaying the left-eye pre-distortion image and the right-eye pre-distortion image, where the left-eye pre-distortion image is projected to a first optical lens and reflected by the first optical lens to human eyes to form undistorted left-eye display content, and the right-eye pre-distortion image is projected to a second optical lens and reflected by the second optical lens to the human eyes to form undistorted right-eye display content.
After the left-eye pre-distortion image and the right-eye pre-distortion image are obtained, they can be displayed. Once displayed, the left-eye pre-distortion image is projected to the first optical lens and reflected by the first optical lens into the user's left eye, forming the undistorted left-eye display content. Similarly, the right-eye pre-distortion image is projected to the second optical lens, reflected by the second optical lens into the user's right eye, and forms the undistorted right-eye display content. The user therefore sees undistorted left-eye and right-eye display content with parallax, which the brain fuses into undistorted three-dimensional display content aligned with the target marker, achieving both undistorted display and stereoscopic display of the virtual object. For example, referring to fig. 3 again, displaying the pre-distorted image 313 yields an undistorted virtual image 314 that is consistent with the real image 311.
In the embodiment of the present application, the optical distortion model may be obtained before the left-eye display content and the right-eye display content are pre-distorted by the optical distortion model. Therefore, the step of constructing the optical distortion model may include:
reading optical manufacturer data of the optical lens, wherein the optical manufacturer data comprises coordinate data of an experimental image and coordinate data of a distorted virtual image corresponding to the experimental image; performing polynomial fitting on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain an optical distortion model; the optical distortion model is stored.
The optical manufacturer data may include coordinate data of the experimental image and coordinate data of a distorted virtual image after the experimental image is displayed.
For example, the optical manufacturer data may be given as a table listing, for each point of the experimental image, its coordinates together with the coordinates of the corresponding point of the distorted virtual image.
in this embodiment of the application, after the optical manufacturer data of the optical lens is acquired, the coordinate data of the distorted virtual image may be further adjusted according to a display parameter, where the display parameter includes at least one of a zoom ratio, a screen size, a pixel size, and an optical center position of the optical lens.
It can be understood that the scaling, screen size, pixel size and optical center position corresponding to the optical lens can be obtained, and the coordinate data of the distorted virtual image corresponding to the experimental image adjusted according to at least one of these parameters, so that each point of the experimental image corresponds accurately to a point of the distorted image.
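As an illustration of this adjustment, the sketch below converts a manufacturer-supplied virtual-image coordinate into screen pixel coordinates from the zoom ratio, pixel size, and optical center position. The publication only names the parameters involved, so the arithmetic here is an assumption.

```python
def adjust_virtual_image_point(x_mm, y_mm, scale, pixel_size_mm, optical_center_px):
    """Convert a distorted-virtual-image point given in millimeters into
    screen pixel coordinates (illustrative assumption).

    scale             -- zoom ratio of the optical lens
    pixel_size_mm     -- physical size of one screen pixel in millimeters
    optical_center_px -- (cx, cy), optical center position on the screen
    """
    cx, cy = optical_center_px
    x_px = cx + (x_mm * scale) / pixel_size_mm
    y_px = cy + (y_mm * scale) / pixel_size_mm
    return x_px, y_px
```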
In this embodiment of the application, performing polynomial fitting on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain an optical distortion model, which may include:
calculating a first distortion parameter and a second distortion parameter of the optical distortion model according to the coordinate data of the experimental image and the coordinate data of the distorted virtual image corresponding to the experimental image, wherein the first distortion parameter is a coefficient fitting the distortion of the optical lens in a first direction, and the second distortion parameter is a coefficient fitting the distortion of the optical lens in a second direction; and constructing the optical distortion model according to the first distortion parameter and the second distortion parameter.
Specifically, according to formula (1), the distortion can be fitted with a transverse polynomial and a longitudinal polynomial to obtain a first expression in which the abscissa of the real image equals the first distortion parameter multiplied by the first polynomial, X = A · I1 · I2, and a second expression in which the ordinate of the real image equals the second distortion parameter multiplied by the second polynomial, Y = B · I3 · I4, where X is the abscissa of the real image, Y is the ordinate of the real image, A is the first distortion parameter, B is the second distortion parameter, I1 is a matrix fitting the transverse radial distortion or the transverse barrel distortion of the optical lens, I2 is a matrix fitting the transverse tangential distortion of the optical lens, I3 is a matrix fitting the longitudinal radial distortion or the longitudinal barrel distortion of the optical lens, and I4 is a matrix fitting the longitudinal tangential distortion of the optical lens; I1 includes the abscissa of the virtual image, I2 includes the abscissa and ordinate of the virtual image, I3 includes the ordinate of the virtual image, and I4 includes the abscissa and ordinate of the virtual image.
The first distortion parameter is a coefficient of distortion of the fitting optical lens in a first direction, and the second distortion parameter is a coefficient of distortion of the fitting optical lens in a second direction. The first direction may be a lateral direction and the second direction may be a longitudinal direction, or the first direction may be a longitudinal direction and the second direction may be a lateral direction.
The first polynomial is obtained by multiplying the matrix fitting the transverse radial distortion of the optical lens by the matrix fitting the transverse tangential distortion, or by multiplying the matrix fitting the transverse barrel distortion by the matrix fitting the transverse tangential distortion. The matrix fitting the transverse radial distortion and the matrix fitting the transverse barrel distortion may each be a four-row, one-column matrix formed from the abscissa of the virtual image, and the matrix fitting the transverse tangential distortion is a four-row, one-column matrix formed from the abscissa and ordinate of the virtual image.
The second polynomial is obtained by multiplying the matrix fitting the longitudinal radial distortion of the optical lens by the matrix fitting the longitudinal tangential distortion, or by multiplying the matrix fitting the longitudinal barrel distortion by the matrix fitting the longitudinal tangential distortion. The matrix fitting the longitudinal radial distortion and the matrix fitting the longitudinal barrel distortion may each be a four-row, one-column matrix formed from the ordinate of the virtual image, and the matrix fitting the longitudinal tangential distortion is a four-row, one-column matrix formed from the abscissa and ordinate of the virtual image.
After the first expression and the second expression are obtained, the coordinate data of the experimental image and the coordinate data of the distorted virtual image (adjusted according to the optical parameters) can be substituted into them, and the first distortion parameter and the second distortion parameter solved for.
After obtaining the first distortion parameter and the second distortion parameter, the first distortion parameter may be substituted into the first expression, and the second distortion parameter may be substituted into the second expression, so as to obtain an optical distortion model, where the optical distortion model includes the first expression and the second expression.
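Since each expression contains a single unknown coefficient once the polynomial bases are fixed, A and B can be solved by ordinary least squares over the manufacturer's coordinate pairs. A sketch, reusing the assumed bases from the earlier optical_distortion_model:

```python
import numpy as np

def fit_distortion_parameters(virtual_pts, real_pts):
    """Fit A and B from matched coordinate pairs.

    virtual_pts, real_pts: numpy arrays of shape (N, 2) holding virtual-image
    and real-image coordinates. With X = A * g1(x, y) and Y = B * g2(x, y),
    the least-squares solutions are A = (g1 . X) / (g1 . g1) and similarly B.
    """
    g1 = np.array([np.dot([1, x, x**2, x**3], [1, y, x * y, y**2])
                   for x, y in virtual_pts])  # I1 * I2 evaluated per point
    g2 = np.array([np.dot([1, y, y**2, y**3], [1, x, x * y, x**2])
                   for x, y in virtual_pts])  # I3 * I4 evaluated per point
    X, Y = real_pts[:, 0], real_pts[:, 1]
    A = np.dot(g1, X) / np.dot(g1, g1)
    B = np.dot(g2, Y) / np.dot(g2, g2)
    return A, B
```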
In the embodiment of the present application, after the optical distortion model is obtained, it may be verified to ensure its accuracy. Therefore, the display method may further include: verifying the optical distortion model.
Further, verifying the optical distortion model may include:
obtaining a verification image to be displayed using the coordinate data of an original image for verifying the optical distortion model together with the optical distortion model, and displaying the verification image; capturing the verification image displayed by the terminal device with an image acquisition device placed at the viewing position to obtain an image containing the verification image; judging whether the parameters of the image containing the verification image meet preset conditions; and if the preset conditions are met, storing the optical distortion model.
It is understood that the terminal device stores in advance an original image for verifying the optical distortion model. For example, the original image may be a checkerboard. When the original image is displayed without pre-distorting the original image by using the optical distortion model, the displayed virtual image is a distorted virtual image corresponding to the original image. If the original image is displayed after being subjected to pre-distortion by the optical distortion model, and the displayed virtual image is a virtual image without distortion, the optical distortion model is accurate.
In this embodiment of the application, the obtained optical distortion model may be used to perform inverse operation on the coordinate data of the original image, so as to obtain a to-be-displayed verification image corresponding to the original image.
Specifically, the coordinate data of the original image is used as the coordinate data of the virtual image, the virtual image in this case being a distortion-free virtual image, and substituted into the optical distortion model to obtain the screen coordinate data of the verification image to be displayed; the verification image can then be generated from the screen coordinate data and the pixel values of the pixel points of the original image. The verification image is thus the image pre-distorted by the optical distortion model.
After the verification image to be displayed is obtained, the verification image can be displayed, and then image acquisition can be performed on the displayed verification image by using an image acquisition device at the viewing position, so that an image containing the displayed verification image is obtained. For example, an industrial camera may be positioned in a human eye viewing position in a helmet to capture a displayed verification image.
After the image including the displayed verification image is obtained, it may be determined whether the aspect ratio of the verification image in the captured image equals the preset aspect ratio and whether its linearity equals the preset linearity. When both conditions hold, the obtained optical distortion model can be determined to be correct, and may be stored to perform distortion correction during display.
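A sketch of this automated check, assuming the verification image is a checkerboard whose corners have been detected in the captured image (for example with OpenCV's corner detector); the aspect ratio is taken from the outer corners and the linearity from the worst deviation of each corner row from its fitted straight line:

```python
import numpy as np

def check_verification_image(corners, rows, cols, target_ratio,
                             ratio_tol=0.02, linearity_tol_px=1.0):
    """corners: (rows*cols, 2) array of detected corners, row-major order.

    Returns True when the measured aspect ratio matches the preset ratio
    and all corner rows are straight within the tolerance. Rows are assumed
    roughly horizontal so that a line y = k*x + b can be fitted.
    """
    pts = corners.reshape(rows, cols, 2)
    width = np.linalg.norm(pts[0, -1] - pts[0, 0])
    height = np.linalg.norm(pts[-1, 0] - pts[0, 0])
    ratio_ok = abs(width / height - target_ratio) < ratio_tol

    worst = 0.0
    for r in range(rows):                  # fit a straight line to each corner row
        x, y = pts[r, :, 0], pts[r, :, 1]
        k, b = np.polyfit(x, y, 1)
        worst = max(worst, float(np.max(np.abs(k * x + b - y))))
    linear_ok = worst < linearity_tol_px   # straight rows imply no residual distortion
    return ratio_ok and linear_ok
```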
Of course, in the embodiment of the present application, the check may also be manual: when a model confirmation operation by the user is detected after the verification image is displayed, the operation indicating that the linearity and aspect ratio of the verification image are normal and that the boundaries of the left and right viewing angles match, the optical distortion model is determined to be correct and stored.
In the display method provided by the embodiment of the application, the conversion parameters between the first spatial coordinate system in real space and the second spatial coordinate system in virtual space are obtained when the virtual marker is aligned with the physical marker; the target space coordinates of the target marker in the first spatial coordinate system are then converted into rendering coordinates in the second spatial coordinate system according to the conversion parameters, the virtual object is rendered according to the rendering coordinates to generate the left-eye display content and the right-eye display content, and the left-eye and right-eye display content are pre-distorted and displayed. Aligned display of the virtual object with the target marker, undistorted display, and stereoscopic display of the virtual object are thereby achieved.
Referring to fig. 11, a block diagram of a display device 400 according to an embodiment of the present disclosure is shown, where the display device 400 is applied to a terminal device. The display device 400 includes: a spatial coordinate acquisition module 410, a spatial coordinate conversion module 420, a virtual object rendering module 430, and an object display module 440. The spatial coordinate acquisition module 410 is configured to acquire target spatial coordinates of the target marker in real space; the spatial coordinate conversion module 420 is configured to convert the target spatial coordinates into rendering coordinates in a virtual space; the virtual object rendering module 430 is configured to obtain data of a virtual object to be displayed, and render the virtual object according to the data of the virtual object and the rendering coordinates, so as to obtain left-eye display content and right-eye display content of the virtual object; the object display module 440 is configured to display the left-eye display content and the right-eye display content, where the left-eye display content is projected to a first optical lens, the right-eye display content is projected to a second optical lens, and the first optical lens and the second optical lens are respectively configured to reflect the left-eye display content and the right-eye display content to human eyes.
In this embodiment, the spatial coordinate conversion module 420 may be specifically configured to: read the stored conversion parameters between a first spatial coordinate system and a second spatial coordinate system, where the first spatial coordinate system is a spatial coordinate system with the tracking camera as its origin in real space, and the second spatial coordinate system is a spatial coordinate system with the virtual camera as its origin in virtual space; and convert the target space coordinates into rendering coordinates in the virtual space according to the conversion parameters.
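The conversion parameters can be represented, for example, as a rotation and a translation between the two coordinate systems. A minimal sketch under that assumption:

```python
import numpy as np

def to_rendering_coordinates(target_xyz, R, t):
    """Convert target space coordinates (tracking-camera frame) into
    rendering coordinates (virtual-camera frame).

    R (3x3 rotation) and t (3-vector translation) stand in for the stored
    conversion parameters, assumed here to form a rigid transform.
    """
    return R @ np.asarray(target_xyz, dtype=float) + np.asarray(t, dtype=float)
```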
In this embodiment, the virtual cameras may include a left virtual camera and a right virtual camera. The virtual object rendering module 430 may be specifically configured to: constructing and rendering the virtual object according to the data of the virtual object;
and respectively calculating the corresponding pixel coordinates of the virtual object in the left virtual camera and the right virtual camera according to the rendering coordinates to obtain left-eye display content and right-eye display content.
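A sketch of this per-eye calculation, assuming pinhole left and right virtual cameras separated along the x axis by the interpupillary distance; the intrinsics (fx, fy, cx, cy) and the offset model are illustrative assumptions:

```python
import numpy as np

def project_stereo(point_xyz, fx, fy, cx, cy, ipd):
    """Project a rendering-space point into the left and right virtual cameras.

    Each camera is displaced by half the interpupillary distance (ipd), so
    the point's coordinates in each camera frame are shifted by the opposite
    amount before a simple pinhole projection.
    """
    def pinhole(p):
        x, y, z = p
        return fx * x / z + cx, fy * y / z + cy

    p = np.asarray(point_xyz, dtype=float)
    left = pinhole(p + np.array([ipd / 2, 0.0, 0.0]))    # left camera at -ipd/2
    right = pinhole(p + np.array([-ipd / 2, 0.0, 0.0]))  # right camera at +ipd/2
    return left, right
```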
In an embodiment of the present application, the object display module 440 may include: a predistortion unit and a display execution unit. The predistortion unit is used for obtaining a left eye predistortion image corresponding to left eye display content and a right eye predistortion image corresponding to right eye display content according to the optical distortion model, the left eye display content and the right eye display content, and the optical distortion model is used for fitting optical distortion generated by an optical lens; the display execution unit is used for displaying a left-eye pre-distortion image and a right-eye pre-distortion image, the left-eye pre-distortion image is used for being projected to the first optical lens and reflected to human eyes through the first optical lens to form distortion-free left-eye display content, and the right-eye pre-distortion image is used for being projected to the second optical lens and reflected to the human eyes through the second optical lens to form distortion-free right-eye display content.
In the embodiment of the present application, the display device 400 may further include: the device comprises a data reading module, a model obtaining module and a model storing module. The data reading module can be used for reading optical manufacturer data of the optical lens, wherein the optical manufacturer data comprises coordinate data of an experimental image and coordinate data of a distorted virtual image corresponding to the experimental image; the model acquisition module can be used for carrying out polynomial fitting on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain an optical distortion model; the model storage module may be configured to store the optical distortion model.
The model acquisition module may be specifically configured to: calculate a first distortion parameter and a second distortion parameter of the optical distortion model according to the coordinate data of the experimental image and the coordinate data of the distorted virtual image corresponding to the experimental image, where the first distortion parameter is a coefficient fitting the distortion of the optical lens in a first direction, and the second distortion parameter is a coefficient fitting the distortion of the optical lens in a second direction; and construct the optical distortion model according to the first distortion parameter and the second distortion parameter.
The display device may further include a data adjusting module. The data adjusting module may be configured to adjust the coordinate data of the distorted virtual image according to a display parameter after the optical manufacturer data of the optical lens is read, where the display parameter includes at least one of a zoom ratio, a screen size, a pixel size, and an optical center position of the optical lens.
The display device may further include a model verification module. The model verification module is configured to verify the optical distortion model after the optical distortion model is obtained, for example by the verification procedure described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 12, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may be a terminal device capable of running an application, such as a smart phone, a tablet computer, an electronic book, or the like. The terminal device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, an image acquisition apparatus 130, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the entire terminal device 100 using various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 but instead be implemented by a separate communication chip.
The Memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the terminal device 100 in use, and the like.
In the embodiment of the present application, the image capturing device 130 is used to capture an image of a marker. The image capturing device 130 may be an infrared camera or a color camera, and the specific type of the camera is not limited in the embodiment of the present application.
Referring to fig. 13, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer readable medium 800 has stored therein a program code that can be called by a processor to execute the method described in the above method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer readable storage medium 800 has storage space for program code 810 to perform any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 810 may be compressed, for example, in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A display method is applied to a terminal device, and comprises the following steps:
acquiring a target space coordinate of a target marker in a real space;
converting the target space coordinates to rendering coordinates in a virtual space;
acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object;
and displaying the left eye display content and the right eye display content, wherein the left eye display content is used for being projected to a first optical lens, the right eye display content is used for being projected to a second optical lens, and the first optical lens and the second optical lens are respectively used for reflecting the left eye display content and the right eye display content to human eyes.
2. The method of claim 1, wherein the converting the target space coordinates to rendering coordinates in virtual space comprises:
reading stored conversion parameters between a first spatial coordinate system and a second spatial coordinate system, wherein the first spatial coordinate system is a spatial coordinate system with a tracking camera as its origin in real space, and the second spatial coordinate system is a spatial coordinate system with a virtual camera as its origin in virtual space;
and converting the target space coordinate into a rendering coordinate in a virtual space according to the conversion parameter.
3. The method of claim 2, wherein the virtual camera comprises a left virtual camera and a right virtual camera, and the rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left-eye display content and right-eye display content of the virtual object comprises:
constructing and rendering the virtual object according to the data of the virtual object;
and respectively calculating the corresponding pixel coordinates of the virtual object in the left virtual camera and the right virtual camera according to the rendering coordinates to obtain left-eye display content and right-eye display content.
4. The method of claim 1, wherein the displaying the left-eye display content and the right-eye display content comprises:
processing the left-eye display content and the right-eye display content according to an optical distortion model respectively to obtain a left-eye pre-distortion image corresponding to the left-eye display content and a right-eye pre-distortion image corresponding to the right-eye display content, wherein the optical distortion model is used for fitting optical distortion generated by an optical lens;
and displaying the left eye pre-distortion image and the right eye pre-distortion image, wherein the left eye pre-distortion image is used for being projected to a first optical lens and being reflected to human eyes through the first optical lens, and the right eye pre-distortion image is used for being projected to a second optical lens and being reflected to the human eyes through the second optical lens so as to form a virtual image of undistorted three-dimensional display content.
5. The method of claim 4, wherein the step of constructing the optical distortion model comprises:
reading optical manufacturer data of an optical lens, wherein the optical manufacturer data comprises coordinate data of an experimental image and coordinate data of a distorted virtual image corresponding to the experimental image;
and performing polynomial fitting on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain an optical distortion model.
6. The method of claim 5, wherein the performing polynomial fitting on the coordinate data of the experimental image and the coordinate data of the distorted virtual image to obtain an optical distortion model comprises:
calculating a first distortion parameter and a second distortion parameter of an optical distortion model according to the coordinate data of the experimental image and the coordinate data of the distorted virtual image corresponding to the experimental image, wherein the first distortion parameter is a coefficient fitting the distortion of the optical lens in a first direction, and the second distortion parameter is a coefficient fitting the distortion of the optical lens in a second direction;
and constructing the optical distortion model according to the first distortion parameter and the second distortion parameter.
7. The method according to claim 5, wherein after said reading optical manufacturer data of an optical lens, the method further comprises:
adjusting coordinate data of the distorted virtual image according to display parameters, wherein the display parameters comprise at least one of a scaling, a screen size, a pixel size, and an optical center position of the optical lens.
8. A display device is applied to a terminal device, and the device comprises: a space coordinate obtaining module, a space coordinate converting module, a virtual object rendering module and an object display module, wherein,
the space coordinate acquisition module is used for acquiring a target space coordinate of the target marker in a real space;
the space coordinate conversion module is used for converting the target space coordinate into a rendering coordinate in a virtual space;
the virtual object rendering module is used for acquiring data of a virtual object to be displayed, and rendering the virtual object according to the data of the virtual object and the rendering coordinates to obtain left eye display content and right eye display content of the virtual object;
the object display module is used for displaying the left eye display content and the right eye display content, the left eye display content is used for projecting to a first optical lens, the right eye display content is used for projecting to a second optical lens, and the first optical lens and the second optical lens are respectively used for reflecting the left eye display content and the right eye display content to human eyes.
9. A terminal device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 7.
CN201811023501.6A 2018-09-03 2018-09-03 Display method, display device, terminal equipment and storage medium Pending CN110874867A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201811023501.6A CN110874867A (en) 2018-09-03 2018-09-03 Display method, display device, terminal equipment and storage medium
PCT/CN2019/104240 WO2020048461A1 (en) 2018-09-03 2019-09-03 Three-dimensional stereoscopic display method, terminal device and storage medium
US16/731,094 US11380063B2 (en) 2018-09-03 2019-12-31 Three-dimensional distortion display method, terminal device, and storage medium


Publications (1)

Publication Number Publication Date
CN110874867A (en) 2020-03-10

Family

ID=69716020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811023501.6A Pending CN110874867A (en) 2018-09-03 2018-09-03 Display method, display device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110874867A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4758059A (en) * 1984-12-28 1988-07-19 Ricoh Company, Ltd. Post-objective type optical deflector
CN102221331A (en) * 2011-04-11 2011-10-19 浙江大学 Measuring method based on asymmetric binocular stereovision technology
CN103792674A (en) * 2014-01-21 2014-05-14 浙江大学 Device and method for measuring and correcting distortion of virtual reality displayer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
7DGAME: "Microsoft HoloLens + Vuforia = automobile introduction demo", 《URL:HTTPS://WWW.BILIBILI.COM/VIDEO/AV9506077/》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112755523A (en) * 2021-01-12 2021-05-07 网易(杭州)网络有限公司 Target virtual model construction method and device, electronic equipment and storage medium
CN112755523B (en) * 2021-01-12 2024-03-15 网易(杭州)网络有限公司 Target virtual model construction method and device, electronic equipment and storage medium
CN113426117A (en) * 2021-06-23 2021-09-24 网易(杭州)网络有限公司 Virtual camera shooting parameter acquisition method and device, electronic equipment and storage medium
CN113426117B (en) * 2021-06-23 2024-03-01 网易(杭州)网络有限公司 Shooting parameter acquisition method and device for virtual camera, electronic equipment and storage medium
CN116184686A (en) * 2022-05-10 2023-05-30 华为技术有限公司 Stereoscopic display device and vehicle

Similar Documents

Publication Publication Date Title
EP3614340B1 (en) Methods and devices for acquiring 3d face, and computer readable storage media
EP3018903B1 (en) Method and system for projector calibration
US11380063B2 (en) Three-dimensional distortion display method, terminal device, and storage medium
CN110809786B (en) Calibration device, calibration chart, chart pattern generation device, and calibration method
CN110874868A (en) Data processing method and device, terminal equipment and storage medium
CN110782499B (en) Calibration method and calibration device for augmented reality equipment and terminal equipment
JP6852355B2 (en) Program, head-mounted display device
CN110874135B (en) Optical distortion correction method and device, terminal equipment and storage medium
US20180005424A1 (en) Display control method and device
CA2984785A1 (en) Virtual reality editor
CN110874867A (en) Display method, display device, terminal equipment and storage medium
CN110362193A (en) With hand or the method for tracking target and system of eyes tracking auxiliary
JP6552266B2 (en) Image processing apparatus, image processing method, and program
Hu et al. Alignment-free offline calibration of commercial optical see-through head-mounted displays with simplified procedures
CN102004623A (en) Three-dimensional image display device and method
CN110737326A (en) Virtual object display method and device, terminal equipment and storage medium
JP2017028510A (en) Multi-viewpoint video generating device, program therefor, and multi-viewpoint video generating system
JP2017102696A (en) Head mounted display device and computer program
GB2585197A (en) Method and system for obtaining depth data
JP6509101B2 (en) Image display apparatus, program and method for displaying an object on a spectacle-like optical see-through type binocular display
CN114092668A (en) Virtual-real fusion method, device, equipment and storage medium
US20220239876A1 (en) Information processing device, information processing method, program, projection device, and information processing system
CN116524022B (en) Offset data calculation method, image fusion device and electronic equipment
CN111818326B (en) Image processing method, device, system, terminal device and storage medium
EP3764641B1 (en) Method and device for processing 360-degree image

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200310)