CN111127621B - Picture rendering method, device and readable storage medium

Picture rendering method, device and readable storage medium

Info

Publication number
CN111127621B
CN111127621B
Authority
CN
China
Prior art keywords
canvas
display
dimensional
picture
dynamic
Prior art date
Legal status
Active
Application number
CN201911425529.7A
Other languages
Chinese (zh)
Other versions
CN111127621A (en)
Inventor
邱涛
张向军
刘影疏
王铁存
吕廷昌
刘文杰
陈晨
姜滨
迟小羽
Current Assignee
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN201911425529.7A priority Critical patent/CN111127621B/en
Publication of CN111127621A publication Critical patent/CN111127621A/en
Application granted granted Critical
Publication of CN111127621B publication Critical patent/CN111127621B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a picture rendering method, a device and a readable storage medium. The picture rendering method is used for a head-mounted display device and comprises the following steps: acquiring an observation viewing angle; creating a dynamic canvas; setting a display canvas on the dynamic canvas according to the observation viewing angle; and performing picture rendering on the display canvas. With the technical solution of the invention, pictures from other viewing angles in the virtual scene can be watched, improving the flexibility of the user's immersive experience.

Description

Picture rendering method, device and readable storage medium
Technical Field
The present invention relates to the field of man-machine interaction technologies, and in particular, to a method and apparatus for rendering a picture, and a readable storage medium.
Background
A head-mounted display device is a wearable virtual display product. The technologies underlying current head-mounted display devices are roughly divided into virtual reality (VR), augmented reality (AR), mixed reality (MR) and extended reality (XR).
As head-mounted display devices are applied more and more widely, their application scenarios keep increasing. However, a user wearing a head-mounted display device currently cannot observe other users' viewing angles or other viewing angles in the virtual scene, so the flexibility of the immersive experience is poor.
The foregoing is merely provided to facilitate an understanding of the principles of the present application and is not admitted to be prior art.
Disclosure of Invention
Based on this, in view of the problem that a user wearing a head-mounted display device currently cannot observe other users' viewing angles or other viewing angles in the virtual scene, which makes the immersive experience inflexible, it is necessary to provide a picture rendering method, a picture rendering device and a readable storage medium that allow pictures from other viewing angles in the virtual scene to be watched, thereby improving the flexibility of the user's immersive experience.
In order to achieve the above object, the present invention provides a picture rendering method for a head-mounted display device, the picture rendering method comprising:
acquiring an observation visual angle;
creating a dynamic canvas;
setting a display canvas on the dynamic canvas according to the observation visual angle;
and performing picture rendering on the display canvas.
Optionally, the display canvas comprises at least one two-dimensional canvas;
the step of setting a display canvas on the dynamic canvas according to the viewing angle comprises the following steps:
setting at least one two-dimensional canvas on the dynamic canvas according to the observation view angle;
the step of performing picture rendering on the display canvas comprises the following steps:
and establishing rendering windows corresponding to the two-dimensional canvases according to the number of the two-dimensional canvases, and binding the content rendered by the picture with the two-dimensional canvases.
Optionally, the viewing angle includes a first viewing angle and a second viewing angle;
the step of setting a display canvas on the dynamic canvas according to the viewing angle further comprises:
setting a two-dimensional canvas on the dynamic canvas according to the first viewing angle; and/or,
setting two-dimensional canvases on the dynamic canvas according to the second viewing angle.
Optionally, before the step of obtaining the viewing angle, the method includes:
a virtual scene is constructed, and an observation view angle is selected in the virtual scene.
Optionally, the step of creating a dynamic canvas includes:
and acquiring an operation instruction for creating a dynamic canvas, and creating the dynamic canvas according to the operation instruction in the virtual scene.
Optionally, before the step of obtaining an operation instruction for creating a dynamic canvas and creating the dynamic canvas in the virtual scene according to the operation instruction, the method includes:
and receiving a user gesture and/or an eyeball fixation point, and generating an operation instruction.
Optionally, the step of performing picture rendering on the display canvas includes:
if the transparency of the display canvas is equal to 100%, deleting at least part of the content of the picture below the outermost layer of the display canvas;
and if the transparency of the display canvas is less than 100%, displaying the outermost layer picture of the display canvas.
In addition, in order to achieve the above object, the present invention also provides a picture rendering apparatus for a head-mounted display device, comprising:
the acquisition module is used for acquiring an observation visual angle;
the creation module is used for creating a dynamic canvas;
the setting module is used for setting a display canvas on the dynamic canvas according to the observation visual angle;
and the rendering module is used for performing picture rendering on the display canvas.
Optionally, the display canvas comprises at least one two-dimensional canvas,
the setting module is also used for setting at least one two-dimensional canvas on the dynamic canvas according to the observation visual angle;
the rendering module is used for establishing rendering windows corresponding to the two-dimensional canvas according to the number of the two-dimensional canvas and binding the content rendered by the picture with the two-dimensional canvas.
In addition, in order to achieve the above object, the present invention also provides a readable storage medium having stored thereon a picture rendering program which, when executed by a processor, implements the steps of the picture rendering method as described above.
According to the technical solution provided by the invention, the observation viewing angle is obtained, a dynamic canvas is created, and a corresponding display canvas is arranged on the dynamic canvas according to the observation viewing angle, so that picture rendering is performed on the display canvas; display pictures from other angles can therefore be watched through the observation viewing angle, and the flexibility of the user's immersive experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a first embodiment of a frame rendering method according to the present invention;
FIG. 2 is a flowchart of a second embodiment of a frame rendering method according to the present invention;
FIG. 3 is a flowchart of a third embodiment of a frame rendering method according to the present invention;
FIG. 4 is a flowchart of a fourth embodiment of a frame rendering method according to the present invention;
FIG. 5 is a flowchart of a fifth embodiment of a frame rendering method according to the present invention;
FIG. 6 is a flowchart of a sixth embodiment of a frame rendering method according to the present invention;
FIG. 7 is a flowchart of a seventh embodiment of a frame rendering method according to the present invention;
Fig. 8 is a schematic diagram of a connection structure of a frame rendering device according to the present invention.
Reference numerals illustrate:
Reference numeral  Name                Reference numeral  Name
10                 Acquisition module  40                 Rendering module
20                 Creation module     50                 Construction module
30                 Setting module      60                 Generating module
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear … …) in the embodiments of the present invention are merely used to explain the relative positional relationship, movement, etc. between the components in a particular posture (as shown in the drawings), and if the particular posture is changed, the directional indicator is changed accordingly.
Furthermore, descriptions such as "first" and "second" are provided for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
In the present invention, unless specifically stated and limited otherwise, the terms "connected," "affixed," and the like are to be construed broadly, and for example, "affixed" may be a fixed connection, a removable connection, or an integral body; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In addition, the technical solutions of the embodiments of the present invention may be combined with each other, but it is necessary to be based on the fact that those skilled in the art can implement the technical solutions, and when the technical solutions are contradictory or cannot be implemented, the combination of the technical solutions should be considered as not existing, and not falling within the scope of protection claimed by the present invention.
Referring to fig. 1, a first embodiment of the present invention provides a picture rendering method, which is used for a head-mounted display device, and the picture rendering method includes:
step S10, obtaining an observation visual angle; the viewing angle includes expansion or reduction of the viewing angle range, and also includes switching of the viewing angle. For example, enlarging the viewing angle range, the user can observe a wider range. Or the view angle range is narrowed, and the details of the scenery can be observed. In the virtual environment, one of the positions is selected as the viewing angle. For example, the original user's viewing angle is a landscape standing on the bridge to look at the shore, and by selectively viewing the viewing angle, the user can also be equivalent to a landscape that the user stands on the bridge to look at the shore by switching the viewing angle to the shore. Or a scene within two viewing angles is observed simultaneously in the display interface. The viewing angle may be a pre-bound viewing angle, or a coordinate point arbitrarily selected by the user in the virtual scene may be a point of the viewing angle.
Step S20, creating a dynamic canvas. The dynamic canvas can be understood as the bottom layer of the picture. In the head-mounted display device, the display of a picture is a dynamic, continuous process; by setting the dynamic canvas, any canvas set above it ensures that the picture the user watches remains dynamic. For example, a user may watch a live race while wearing the head-mounted display device, and the progress of the race is continuous. The display for the newly created viewing angle is likewise built on the dynamic canvas.
Step S30, setting a display canvas on the dynamic canvas according to the observation viewing angle. On the basis of the selected viewing angle, new display content is constructed; the display canvas is arranged on the dynamic canvas, and by rendering pictures on the display canvas the user can watch a new picture at the newly selected viewing angle. For example, when a user watches a match in a virtual environment and selects a viewing angle near the goalkeeper within the live picture, the user can watch the pictures of the two viewing angles at the same time, or switch the picture between the two viewing angles.
Step S40, performing picture rendering on the display canvas. Picture rendering simply means rendering pictures onto the surface of the display canvas at the frame rate. The head-mounted display device is provided with a processor, which controls and executes the picture rendering of the display canvas; the rendered display canvas is then presented on the display screen of the head-mounted display device.
In the technical scheme of the embodiment, an observation view angle is obtained, a dynamic canvas is created, a corresponding display canvas is arranged on the dynamic canvas according to the observation view angle, and picture rendering is carried out on the display canvas, so that display pictures of other angles can be watched through the observation view angle, and the flexibility of a user in immersive experience is improved.
Referring to FIG. 2, in addition to the first embodiment of the present invention, a second embodiment of the present invention is presented, the display canvas comprising at least one two-dimensional canvas;
step S30 of setting a display canvas on the dynamic canvas according to the viewing angle, comprising:
step S31, at least one two-dimensional canvas is set on the dynamic canvas according to the observation angle.
When a user needs to watch two-dimensional pictures of a plane, a two-dimensional canvas is required to be arranged on the dynamic canvas, if a plurality of two-dimensional pictures need to be watched, a plurality of independent two-dimensional canvases can be arranged on the dynamic canvas, wherein each two-dimensional canvas corresponds to one observation visual angle, and the observation visual angles can be the same or different.
A plurality of two-dimensional canvases can also be arranged and combined to generate a three-dimensional canvas; the user can switch from the current viewing angle to the newly established viewing angle through viewing-angle switching, and in the three-dimensional canvas the two-dimensional canvases are overlapped and combined to form a three-dimensional picture. Likewise, a plurality of three-dimensional canvases can be provided, and the user can switch among them at will, so as to obtain an immersive experience at any viewing angle.
Step S40 of rendering a screen on the display canvas includes:
and S41, establishing rendering windows corresponding to the two-dimensional canvases according to the number of the two-dimensional canvases, and binding the content rendered by the picture with the two-dimensional canvases.
For example, when one two-dimensional canvas is created, a rendering window corresponding to that two-dimensional canvas and to the user's eyes is established, the rendered picture content is bound to the two-dimensional canvas, and the user can watch the picture at the corresponding viewing angle through the rendering window. Similarly, when corresponding rendering windows are established for a plurality of two-dimensional canvases, a plurality of display windows can be shown in the virtual interface.
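As a rough sketch of establishing one rendering window per two-dimensional canvas and binding the rendered content to it, the following Python example uses hypothetical TwoDCanvas and RenderWindow structures; the patent does not prescribe any particular data representation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TwoDCanvas:
    canvas_id: int
    viewing_angle: str   # e.g. "goalkeeper view"; the representation is illustrative only

@dataclass
class RenderWindow:
    canvas: TwoDCanvas
    bound_content: str = ""

def build_render_windows(canvases: List[TwoDCanvas], frame_content: str) -> List[RenderWindow]:
    """Create one rendering window per two-dimensional canvas and bind the rendered content to it."""
    windows = []
    for canvas in canvases:   # one window per canvas, matching the number of canvases
        windows.append(RenderWindow(canvas=canvas, bound_content=frame_content))
    return windows

if __name__ == "__main__":
    canvases = [TwoDCanvas(1, "bridge view"), TwoDCanvas(2, "shore view")]
    for w in build_render_windows(canvases, "current frame"):
        print(f"window for canvas {w.canvas.canvas_id} ({w.canvas.viewing_angle}) bound to '{w.bound_content}'")
```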
Referring to fig. 3, a third embodiment of the present invention is presented on the basis of the second embodiment of the present invention, and the viewing angle includes a first viewing angle and a second viewing angle;
step S30 of setting a display canvas on the dynamic canvas according to the viewing angle, further includes:
step S310, setting a two-dimensional canvas on the dynamic canvas according to the first visual angle; and/or the number of the groups of groups,
step S320, two-dimensional canvases are set on the dynamic canvas according to the second visual angle.
It will also be appreciated that step S30 covers three situations. In the first situation, a two-dimensional canvas is set on the dynamic canvas according to the first viewing angle. In the second situation, two-dimensional canvases are set on the dynamic canvas according to the second viewing angle. In the third situation, a two-dimensional canvas is set on the dynamic canvas according to the first viewing angle and two-dimensional canvases are set on the dynamic canvas according to the second viewing angle, so that a two-dimensional picture can be formed by arranging a single two-dimensional canvas, and a three-dimensional picture can be formed by rendering pictures on the combined two-dimensional canvases. The first viewing angle and the second viewing angle may be the same viewing angle or different viewing angles.
In addition, a rendering window can be created on the three-dimensional picture formed by the two-dimensional canvases, and the user can select any rendering window to switch the picture's viewing angle. That is, a plurality of small windows are provided in the virtual display interface; through these small windows the user can observe display pictures from other viewing angles, the size of the small windows can be adjusted according to the user's needs, and the picture is switched by selecting the corresponding small window.
Furthermore, according to the user's needs, a plurality of two-dimensional canvases and a plurality of three-dimensional canvases can be created at the same time, so as to obtain a plurality of two-dimensional display windows and a plurality of three-dimensional display windows; that is, the two-dimensional display windows and the three-dimensional display windows can be displayed simultaneously or separately.
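The three situations of step S30 can be summarized, under illustrative assumptions, by a small helper that maps the first viewing angle to a single two-dimensional canvas and the second viewing angle to a pair of two-dimensional canvases forming a three-dimensional picture; the function name and the two-canvas convention are assumptions for this sketch only.

```python
from typing import List, Optional, Tuple

def set_display_canvases(first_angle: Optional[str] = None,
                         second_angle: Optional[str] = None) -> List[Tuple[str, int]]:
    """Return (viewing angle, number of 2D canvases) pairs for the three situations.

    Situation 1: only a first viewing angle  -> one 2D canvas (a flat picture).
    Situation 2: only a second viewing angle -> paired 2D canvases combined into a 3D picture.
    Situation 3: both viewing angles         -> one canvas for the first, a pair for the second.
    """
    layout = []
    if first_angle is not None:
        layout.append((first_angle, 1))   # single 2D canvas -> two-dimensional picture
    if second_angle is not None:
        layout.append((second_angle, 2))  # paired 2D canvases -> three-dimensional picture
    return layout

if __name__ == "__main__":
    print(set_display_canvases(first_angle="pitch-side", second_angle="goalkeeper"))
```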
Referring to fig. 4, on the basis of the first embodiment of the present invention, a fourth embodiment of the present invention is provided, comprising, before step S10 of obtaining an observation angle:
step S50, constructing a virtual scene, and selecting an observation view angle in the virtual scene.
The observation view angle is selected in the virtual scene, so that a user can select the observation view angle according to the display content of the virtual scene, and the flexibility of selection is higher.
In addition, the selection of the viewing angle may be preset, and the user may make a selection among the preset viewing angles in the virtual scene.
Referring to fig. 5, on the basis of the fourth embodiment of the present invention, a fifth embodiment of the present invention is provided, and a step S20 of creating a dynamic canvas includes:
step S21, an operation instruction for creating the dynamic canvas is obtained, and in the virtual scene, the dynamic canvas is created according to the operation instruction.
Specifically, the operation instruction may be generated according to an input instruction, where the input instruction source may be a voice command or a gesture command. The instruction input may be performed through the eye gaze point of the user.
Referring to fig. 6, a sixth embodiment of the present invention is provided on the basis of the fifth embodiment of the present invention. Before step S21 of obtaining an operation instruction for creating a dynamic canvas and creating the dynamic canvas in the virtual scene according to the operation instruction, the method includes:
step S60, receiving a user gesture and/or an eyeball fixation point, and generating an operation instruction.
In the virtual scene, the corresponding operation content can be determined from a change in the user's gesture, and an operation instruction is generated accordingly. The corresponding operation content can likewise be determined from a change in the user's eye gaze point, or from the gesture and the eye gaze point combined, to generate the operation instruction. Making selections through gestures or eye gaze points makes it easier to respond to the user's choices within the virtual scene and better matches the way operations are performed in a virtual environment.
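A minimal sketch of generating an operation instruction from a user gesture and/or an eye gaze point might look as follows; the gesture name "pinch", the gaze target "canvas_menu" and the OperationInstruction structure are hypothetical, since the patent only states that gesture and/or gaze input is received and an instruction is generated.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OperationInstruction:
    action: str          # e.g. "create_dynamic_canvas"
    source: str          # which input produced the instruction

def generate_instruction(gesture: Optional[str] = None,
                         gaze_target: Optional[str] = None) -> Optional[OperationInstruction]:
    """Map a user gesture and/or eye gaze point to an operation instruction (illustrative mapping)."""
    if gesture == "pinch" and gaze_target == "canvas_menu":
        return OperationInstruction("create_dynamic_canvas", "gesture+gaze")
    if gesture == "pinch":
        return OperationInstruction("create_dynamic_canvas", "gesture")
    if gaze_target == "canvas_menu":
        return OperationInstruction("create_dynamic_canvas", "gaze")
    return None  # no recognized input -> no instruction generated

if __name__ == "__main__":
    print(generate_instruction(gesture="pinch"))
    print(generate_instruction(gaze_target="canvas_menu"))
```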
Referring to fig. 7, on the basis of any one of the first to sixth embodiments of the present invention, a seventh embodiment of the present invention is provided, where step S40 of performing screen rendering on a display canvas includes:
in step S401, if the transparency of the display canvas is equal to 100%, deleting at least part of the content of the screen under the outermost layer of the display canvas.
In general, if the transparency of the display canvas is equal to 100%, the user views the picture in a transparent state: the outlines of the objects and scenery in the picture become jumbled together, and the display effect of the picture is difficult to distinguish.
And step S402, if the transparency of the display canvas is less than 100%, displaying the outermost layer picture of the display canvas.
The outermost picture is the picture that lies uppermost in the direction of the user's eyes when the user views the display. A transparency of the display canvas not equal to 100% means the display canvas is opaque, so the uppermost layer of the display canvas is picture-rendered.
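The transparency rule of steps S401 and S402 can be sketched as below, assuming transparency is expressed as a percentage and canvas layers are simple lists of content items; both assumptions are made for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CanvasLayer:
    name: str
    content: List[str]

def render_by_transparency(transparency_percent: float,
                           outermost: CanvasLayer,
                           lower_layers: List[CanvasLayer]) -> List[CanvasLayer]:
    """Apply the transparency rule of steps S401/S402 (data structures are illustrative).

    - transparency == 100%: the canvas is fully transparent, so at least part of the
      picture content below the outermost layer is deleted to avoid jumbled outlines.
    - transparency <  100%: only the outermost picture of the display canvas is shown.
    """
    if transparency_percent == 100.0:
        for layer in lower_layers:
            layer.content = layer.content[:1]   # delete part of the lower-layer content
        return [outermost] + lower_layers
    # not fully transparent: display the outermost picture of the display canvas
    return [outermost]

if __name__ == "__main__":
    top = CanvasLayer("outermost", ["goal replay"])
    below = [CanvasLayer("background", ["crowd", "stadium", "sky"])]
    print(render_by_transparency(100.0, top, below))
    print(render_by_transparency(40.0, top, below))
```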
Referring to fig. 8, the present invention also provides a picture rendering apparatus for a head-mounted display device, the picture rendering apparatus comprising: the device comprises an acquisition module 10, a creation module 20, a setting module 30 and a rendering module 40.
An acquisition module 10, for acquiring an observation viewing angle. The viewing angle includes enlargement or reduction of the viewing range, and also includes switching of the viewing angle. For example, by enlarging the viewing range the user can observe a wider area, and by narrowing it the user can observe the details of a scene. In the virtual environment, one of the positions is selected as the viewing angle. For example, if the original viewing angle is that of the user standing on a bridge looking at the scenery on the shore, then by switching the viewing angle to the shore the user can equivalently watch, from the shore, the scenery of himself standing on the bridge. Alternatively, the scenes within two viewing angles can be observed simultaneously in the display interface. The viewing angle may be a pre-bound viewing angle, or a coordinate point arbitrarily selected by the user in the virtual scene may serve as the viewing angle.
A creation module 20, for creating a dynamic canvas. The dynamic canvas can be understood as the bottom layer of the picture. In the head-mounted display device, the display of a picture is a dynamic, continuous process; by setting the dynamic canvas, any canvas set above it ensures that the picture the user watches remains dynamic. For example, a user may watch a live race while wearing the head-mounted display device, and the progress of the race is continuous. The display for the newly created viewing angle is likewise built on the dynamic canvas.
A setting module 30, for setting a display canvas on the dynamic canvas according to the observation viewing angle. On the basis of the selected viewing angle, new display content is constructed; the display canvas is arranged on the dynamic canvas, and by rendering pictures on the display canvas the user can watch a new picture at the newly selected viewing angle. For example, when a user watches a match in a virtual environment and selects a viewing angle near the goalkeeper within the live picture, the user can watch the pictures of the two viewing angles at the same time, or switch the picture between the two viewing angles.
A rendering module 40, configured to perform picture rendering on the display canvas. Picture rendering simply means rendering pictures onto the surface of the display canvas at the frame rate. The head-mounted display device is provided with a processor, which controls and executes the picture rendering of the display canvas; the rendered display canvas is then presented on the display screen of the head-mounted display device.
In the technical scheme of the embodiment, an observation view angle is obtained, a dynamic canvas is created, a corresponding display canvas is arranged on the dynamic canvas according to the observation view angle, and picture rendering is carried out on the display canvas, so that display pictures of other angles can be watched through the observation view angle, and the flexibility of a user in immersive experience is improved.
Further, the display canvas includes at least one two-dimensional canvas.
The setting module 30 is further configured to set at least one two-dimensional canvas on the dynamic canvas according to the viewing angle; when a user needs to watch two-dimensional pictures of a plane, a two-dimensional canvas is required to be arranged on the dynamic canvas, if a plurality of two-dimensional pictures need to be watched, a plurality of independent two-dimensional canvases can be arranged on the dynamic canvas, wherein each two-dimensional canvas corresponds to one observation visual angle, and the observation visual angles can be the same or different.
A plurality of two-dimensional canvases can also be arranged and combined to generate a three-dimensional canvas; the user can switch from the current viewing angle to the newly established viewing angle through viewing-angle switching, and in the three-dimensional canvas the two-dimensional canvases are overlapped and combined to form a three-dimensional picture. Likewise, a plurality of three-dimensional canvases can be provided, and the user can switch among them at will, so as to obtain an immersive experience at any viewing angle.
The rendering module 40 is further configured to establish a rendering window corresponding to the two-dimensional canvas according to the number of two-dimensional canvases, and bind the content rendered by the screen with the two-dimensional canvas.
For example, when one two-dimensional canvas is created, a rendering window corresponding to that two-dimensional canvas and to the user's eyes is established, the rendered picture content is bound to the two-dimensional canvas, and the user can watch the picture at the corresponding viewing angle through the rendering window. Similarly, when corresponding rendering windows are established for a plurality of two-dimensional canvases, a plurality of display windows can be shown in the virtual interface.
Further, the setting module 30 is further configured to set a two-dimensional canvas on the dynamic canvas according to the first viewing angle; and/or, setting two-dimensional canvases on the dynamic canvases according to the second visual angle.
It will also be appreciated that there are three situations in total. In the first situation, a two-dimensional canvas is set on the dynamic canvas according to the first viewing angle. In the second situation, two-dimensional canvases are set on the dynamic canvas according to the second viewing angle. In the third situation, a two-dimensional canvas is set on the dynamic canvas according to the first viewing angle and two-dimensional canvases are set on the dynamic canvas according to the second viewing angle, so that a two-dimensional picture can be formed by arranging a single two-dimensional canvas, and a three-dimensional picture can be formed by rendering pictures on the combined two-dimensional canvases. The first viewing angle and the second viewing angle may be the same viewing angle or different viewing angles.
In addition, a rendering window can be created on the three-dimensional picture formed by the two-dimensional canvases, and the user can select any rendering window to switch the picture's viewing angle. That is, a plurality of small windows are provided in the virtual display interface; through these small windows the user can observe display pictures from other viewing angles, the size of the small windows can be adjusted according to the user's needs, and the picture is switched by selecting the corresponding small window.
Furthermore, according to the user's needs, a plurality of two-dimensional canvases and a plurality of three-dimensional canvases can be created at the same time, so as to obtain a plurality of two-dimensional display windows and a plurality of three-dimensional display windows; that is, the two-dimensional display windows and the three-dimensional display windows can be displayed simultaneously or separately.
Further, the picture rendering apparatus further includes:
a construction module 50 for constructing a virtual scene in which a viewing angle is selected.
The observation view angle is selected in the virtual scene, so that a user can select the observation view angle according to the display content of the virtual scene, and the flexibility of selection is higher.
In addition, the selection of the viewing angle may be preset, and the user may make a selection among the preset viewing angles in the virtual scene.
The creation module 20 is further configured to obtain an operation instruction for creating a dynamic canvas and to create the dynamic canvas in the virtual scene according to the operation instruction.
Specifically, the operation instruction may be generated according to an input instruction, where the input instruction source may be a voice command or a gesture command. The instruction input may be performed through the eye gaze point of the user.
Further, the image rendering device further includes a generating module 60, where the generating module 60 is configured to receive a gesture of a user and/or an eye gaze point, and generate an operation instruction.
In the virtual scene, the corresponding operation content can be determined from a change in the user's gesture, and an operation instruction is generated accordingly. The corresponding operation content can likewise be determined from a change in the user's eye gaze point, or from the gesture and the eye gaze point combined, to generate the operation instruction. Making selections through gestures or eye gaze points makes it easier to respond to the user's choices within the virtual scene and better matches the way operations are performed in a virtual environment.
Further, the rendering module 40 is further configured to delete at least a portion of the content of the frame below the outermost layer of the display canvas if the transparency of the display canvas is equal to 100%.
In general, if the transparency of the display canvas is equal to 100%, the user views the picture in a transparent state: the outlines of the objects and scenery in the picture become jumbled together, and the display effect of the picture is difficult to distinguish.
The rendering module 40 is further configured to display an outermost screen of the display canvas if the transparency of the display canvas is less than 100%.
The outermost picture is the picture that lies uppermost in the direction of the user's eyes when the user views the display. A transparency of the display canvas not equal to 100% means the display canvas is opaque, so the uppermost layer of the display canvas is picture-rendered.
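To illustrate how the six modules of the apparatus (acquisition 10, creation 20, setting 30, rendering 40, construction 50 and generating 60) might be wired together, here is a minimal Python sketch; the method bodies and the dictionary-based data are placeholders and do not reflect the patented implementation.

```python
class PictureRenderingApparatus:
    """Sketch wiring the acquisition (10), creation (20), setting (30), rendering (40),
    construction (50) and generating (60) modules together (illustrative only)."""

    def construct_scene(self):                              # construction module 50
        return {"name": "virtual stadium", "angles": ["stand", "goalkeeper"]}

    def generate_instruction(self, gesture=None, gaze=None):  # generating module 60
        return "create_dynamic_canvas" if (gesture or gaze) else None

    def acquire_viewing_angle(self, scene):                 # acquisition module 10
        return scene["angles"][0]

    def create_dynamic_canvas(self, instruction):           # creation module 20
        return {"display_canvases": []} if instruction else None

    def set_display_canvas(self, dynamic, angle):           # setting module 30
        dynamic["display_canvases"].append({"angle": angle})
        return dynamic

    def render(self, dynamic):                               # rendering module 40
        for canvas in dynamic["display_canvases"]:
            print("rendering picture for viewing angle:", canvas["angle"])

if __name__ == "__main__":
    app = PictureRenderingApparatus()
    scene = app.construct_scene()
    angle = app.acquire_viewing_angle(scene)
    dynamic = app.create_dynamic_canvas(app.generate_instruction(gesture="pinch"))
    app.render(app.set_display_canvas(dynamic, angle))
```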
The present invention also provides a readable storage medium having stored thereon a picture rendering program which, when executed by a processor, implements the steps of the picture rendering method as described above.
The specific embodiments of the readable storage medium of the present invention may refer to the embodiments of the above-mentioned image rendering method, and will not be described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing is only a preferred embodiment of the present invention and does not thereby limit the scope of the invention; any equivalent structural or process transformation made using the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (8)

1. A picture rendering method for a head-mounted display device, the picture rendering method comprising:
constructing a virtual scene, and selecting an observation view angle in the virtual scene, wherein the observation view angle comprises an enlarged view angle, a reduced view angle or view angles of all positions in the virtual scene;
creating a dynamic canvas;
setting a display canvas on the dynamic canvas according to the observation visual angle;
performing picture rendering on the display canvas;
wherein the display canvas comprises at least one two-dimensional canvas and the viewing perspective comprises a first perspective and a second perspective;
the step of setting a display canvas on the dynamic canvas according to the viewing angle further comprises:
and setting two-dimensional canvases on the dynamic canvases according to the second visual angles, wherein a display picture formed by the two-dimensional canvases comprises a plurality of small windows, and each small window is used for displaying display pictures of other visual angles and is selected by a user to switch pictures.
2. The picture rendering method as claimed in claim 1, wherein the step of setting a display canvas on the dynamic canvas according to the viewing angle comprises:
setting at least one two-dimensional canvas on the dynamic canvas according to the observation view angle;
the step of performing picture rendering on the display canvas comprises the following steps:
and establishing rendering windows corresponding to the two-dimensional canvases according to the number of the two-dimensional canvases, and binding the content rendered by the picture with the two-dimensional canvases.
3. The picture rendering method of claim 1, wherein the creating a dynamic canvas comprises:
and acquiring an operation instruction for creating a dynamic canvas, and creating the dynamic canvas according to the operation instruction in the virtual scene.
4. The picture rendering method as claimed in claim 3, wherein the acquiring operation instructions for creating a dynamic canvas comprises, before the step of creating the dynamic canvas according to the operation instructions in the virtual scene:
and receiving a user gesture and/or an eyeball fixation point, and generating an operation instruction.
5. The picture rendering method as claimed in any one of claims 1 to 4, wherein the step of picture rendering the display canvas comprises:
if the transparency of the display canvas is equal to 100%, deleting at least part of the content of the picture below the outermost layer of the display canvas;
and if the transparency of the display canvas is less than 100%, displaying the outermost layer picture of the display canvas.
6. A picture rendering apparatus for a head-mounted display device, the picture rendering apparatus comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for constructing a virtual scene, and selecting an observation view angle in the virtual scene, wherein the observation view angle comprises an enlarged view angle, a reduced view angle or view angles of all positions in the virtual scene;
the creation module is used for creating a dynamic canvas;
the setting module is used for setting a display canvas on the dynamic canvas according to the observation visual angle;
the rendering module is used for performing picture rendering on the display canvas;
wherein the display canvas comprises at least one two-dimensional canvas, the viewing perspective comprises a first perspective and a second perspective, and the setup module is further configured to: and setting two-dimensional canvases on the dynamic canvases according to the second visual angles, wherein a display picture formed by the two-dimensional canvases comprises a plurality of small windows, and each small window is used for displaying display pictures of other visual angles and is selected by a user to switch pictures.
7. The picture rendering device of claim 6, wherein the setting module is further for setting at least one two-dimensional canvas on the dynamic canvas according to the viewing angle;
the rendering module is used for establishing rendering windows corresponding to the two-dimensional canvas according to the number of the two-dimensional canvas and binding the content rendered by the picture with the two-dimensional canvas.
8. A readable storage medium, on which a picture rendering program is stored, which when executed by a processor implements the steps of the picture rendering method according to any one of claims 1 to 5.
CN201911425529.7A 2019-12-31 2019-12-31 Picture rendering method, device and readable storage medium Active CN111127621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911425529.7A CN111127621B (en) 2019-12-31 2019-12-31 Picture rendering method, device and readable storage medium

Publications (2)

Publication Number Publication Date
CN111127621A CN111127621A (en) 2020-05-08
CN111127621B (en) 2024-02-09

Family

ID=70507217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911425529.7A Active CN111127621B (en) 2019-12-31 2019-12-31 Picture rendering method, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111127621B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313840A (en) * 2021-06-15 2021-08-27 周永奇 Real-time virtual system and real-time virtual interaction method
CN114979732B (en) * 2022-05-12 2023-10-20 咪咕数字传媒有限公司 Video stream pushing method and device, electronic equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104147781A (en) * 2014-07-29 2014-11-19 京东方科技集团股份有限公司 Electronic device, electronic system and electronic device control method
CN106383655A (en) * 2016-09-19 2017-02-08 北京小度互娱科技有限公司 Interaction control method for controlling visual angle conversion in panorama playing process, and device for realizing interaction control method
CN106710002A (en) * 2016-12-29 2017-05-24 深圳迪乐普数码科技有限公司 AR implementation method and system based on positioning of visual angle of observer
CN107343206A (en) * 2017-08-11 2017-11-10 北京铂石空间科技有限公司 Support video generation method, device, medium and the electronic equipment of various visual angles viewing
CN108124150A (en) * 2017-12-26 2018-06-05 歌尔科技有限公司 Virtual reality wears display device and observes the method for real scene by it
CN108717733A (en) * 2018-06-07 2018-10-30 腾讯科技(深圳)有限公司 View angle switch method, equipment and the storage medium of virtual environment
CN108877848A (en) * 2018-05-30 2018-11-23 链家网(北京)科技有限公司 The method and device that user's operation is coped in room mode is said in virtual three-dimensional space
CN109861948A (en) * 2017-11-30 2019-06-07 腾讯科技(成都)有限公司 Virtual reality data processing method, device, storage medium and computer equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090295789A1 (en) * 2008-06-03 2009-12-03 Amlogic, Inc. Methods for Dynamically Displaying Digital Images on Digital Display Devices
US9547173B2 (en) * 2013-10-03 2017-01-17 Honda Motor Co., Ltd. System and method for dynamic in-vehicle virtual reality
US10095461B2 (en) * 2016-09-23 2018-10-09 Intel IP Corporation Outside-facing display for head-mounted displays
CN107833556B (en) * 2017-12-01 2023-10-24 京东方科技集团股份有限公司 Viewing angle switching structure, display device and viewing angle switching method of display device

Also Published As

Publication number Publication date
CN111127621A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN109829981B (en) Three-dimensional scene presentation method, device, equipment and storage medium
US11880956B2 (en) Image processing method and apparatus, and computer storage medium
US8866848B2 (en) Image processing device, control method for an image processing device, program, and information storage medium
WO2018188499A1 (en) Image processing method and device, video processing method and device, virtual reality device and storage medium
US20170195664A1 (en) Three-dimensional viewing angle selecting method and apparatus
US10540918B2 (en) Multi-window smart content rendering and optimizing method and projection method based on cave system
CN111127621B (en) Picture rendering method, device and readable storage medium
JP5572647B2 (en) Display control program, display control device, display control system, and display control method
CN106873886B (en) Control method and device for stereoscopic display and electronic equipment
JP2012253690A (en) Program, information storage medium, and image generation system
CN109448050B (en) Method for determining position of target point and terminal
JP5236674B2 (en) GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
JP6121496B2 (en) Program to control the head mounted display system
CN114115525A (en) Information display method, device, equipment and storage medium
CN109448117A (en) Image rendering method, device and electronic equipment
JP2022058753A (en) Information processing apparatus, information processing method, and program
US11182950B2 (en) Information processing device and information processing method
CN108093245B (en) Multi-screen fusion method, system, device and computer readable storage medium
JP5950701B2 (en) Image display system, puzzle game system, image display method, puzzle game method, image display device, puzzle game device, image display program, and puzzle game program
JPWO2015186284A1 (en) Image processing apparatus, image processing method, and program
DE102020104415A1 (en) MOVEMENT IN AN ENVIRONMENT
US20230118515A1 (en) Method for changing viewpoint in virtual space
CN114518825A (en) XR (X-ray diffraction) technology-based man-machine interaction method and system
JP2013168781A (en) Display device
KR20120048343A (en) Method and apparatus for providing panorama image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201010

Address after: 261031, north of Jade East Street, Dongming Road, Weifang hi tech Zone, Shandong province (GoerTek electronic office building, Room 502)

Applicant after: GoerTek Optical Technology Co.,Ltd.

Address before: 266104 Laoshan Qingdao District North House Street investment service center room, Room 308, Shandong

Applicant before: GOERTEK TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20221123

Address after: 266104 No. 500, Songling Road, Laoshan District, Qingdao, Shandong

Applicant after: GOERTEK TECHNOLOGY Co.,Ltd.

Address before: 261031 east of Dongming Road, north of Yuqing East Street, high tech Zone, Weifang City, Shandong Province (Room 502, Geer electronics office building)

Applicant before: GoerTek Optical Technology Co.,Ltd.

GR01 Patent grant