CN115984519A - VR-based space scene display method, system and storage medium


Info

Publication number
CN115984519A
Authority
CN
China
Prior art keywords
scene
information
rendering
user
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211688498.6A
Other languages
Chinese (zh)
Inventor
缪品章
陈苹
缪文雄
翁鲲鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuchun Polytron Technologies Inc
Original Assignee
Fuchun Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuchun Polytron Technologies Inc filed Critical Fuchun Polytron Technologies Inc
Priority to CN202211688498.6A
Publication of CN115984519A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a VR-based space scene display method, system and storage medium. The scheme responds to a VR scene display request initiated by a user and, by combining the user's pose information and visual field information, assigns different rendering priorities and rendering strategies to different areas of the VR display scene. At the same time, it monitors changes in the user's eye movement and obtains changes in the user's current visual field information in real time, so that the rendering priorities and strategies are updated promptly, giving the user a timely visual experience of the VR space display. The scheme also divides the scene to be rendered into sub-scenes by scene segmentation, then numbers them and assigns them different rendering orders, so that the rendered sub-scenes can be spliced back together for display based on their numbers and can be rendered and generated in priority order in time for the terminal device to download and load them promptly.

Description

VR-based space scene display method, system and storage medium
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a VR-based space scene display method, system and storage medium.
Background
Virtual reality (VR) is a simulation technology for creating and experiencing virtual worlds, and it has gradually become one of the research hotspots in human-computer interaction. As the technology develops, users demand ever greater realism and immersion, and the display of space scenes is one of the keys to VR. At present, most VR space scene displays follow a preset picture-loading sequence; when the user enters a newly, temporarily loaded scene, a long loading wait is usually needed, which directly affects the VR experience. This is especially true in VR-based game development, which often involves temporarily loading space scenes. Although some researchers have proposed rendering and loading acceleration schemes, most of them rely on improving the computing power and storage read/write capability of the equipment, i.e. on stronger hardware. Little research has combined the user's actual viewing behaviour with an optimized scene-loading strategy, so providing a scheme that can render and display scenes promptly and effectively is a very practical topic.
Disclosure of Invention
In view of this, the present invention provides a VR-based spatial scene display method, system and storage medium that are reliable to implement, user-friendly and flexible in spatial display, and that can improve the user experience.
In order to achieve the technical purpose, the technical scheme adopted by the invention is as follows:
a VR-based spatial scene exhibiting method, comprising:
responding to a VR scene display request initiated by a user through terminal equipment, determining a VR display scene corresponding to the display request according to a preset condition, and setting the VR display scene as a scene to be rendered;
acquiring pose information and visual field information corresponding to a user, and determining current display scene area information, rendering priority information of different areas and rendering strategy information in a scene to be rendered according to the pose information and the visual field information;
rendering the scene to be rendered according to preset conditions according to the rendering priority information and the rendering strategy information to generate a VR display scene;
and the terminal equipment acquires the VR display scene according to preset conditions and visually displays it in combination with the current display scene area information.
As a preferred implementation, the solution further comprises:
sensing a starting signal of the terminal equipment, and carrying out wearing identification and/or prompting for the user so that the terminal equipment is worn according to a preset condition;
and collecting and confirming the body shape information, the pose information and the visual field information of the user to finish the parameter calibration of the terminal equipment.
Wherein the body shape information comprises: height, arm length, leg length;
the pose information includes: user pose, body orientation, and face orientation;
the visual field information includes: the visual angle range corresponding to the eyes of the user.
As a preferred implementation, when the body shape information, pose information and visual field information of the user are collected and confirmed, a preset action instruction is also output; the user's body posture or eye movement changes are sensed and collected in real time and matched against the preset action instruction, calibration parameters are generated accordingly, and parameter calibration of the terminal device is completed from the user's body shape information, pose information and visual field information together with the calibration parameters.
As a preferred alternative, the scheme further includes:
and acquiring eye movement data of the user according to a preset time frequency, and updating rendering priority information and rendering strategy information in real time according to the eye movement data.
As a preferred implementation, acquiring the pose information and visual field information corresponding to the user, and determining, according to the pose information and the visual field information, the current display scene area and the rendering priority information and rendering strategy information of different areas in the scene to be rendered, includes:
acquiring pose information and visual field information corresponding to a user, and taking the current position of the user as a first target position of a VR display scene;
acquiring a spatial range which can be observed by a user in the current state according to the pose information and the visual field information and preset conditions;
acquiring a scene to be rendered, respectively determining a preset range scene area opposite to the front side of a user body and a preset range scene area entering the visual field of the user in the scene to be rendered according to pose information and the visual field information, setting the preset range scene area as current display scene area information, setting rendering priority information as a first rendering priority, and setting rendering strategy information as a first rendering strategy;
setting rendering priority information corresponding to scene areas except the current display scene area information in the scene to be rendered as a second rendering priority, and setting rendering strategy information as a second rendering strategy;
the rendering processing order of the first rendering priority precedes that of the second rendering priority, the spatial scene rendering definition of the first rendering strategy is a preset value α, and the spatial scene rendering definition of the second rendering strategy is the preset value α multiplied by 0.6 to 0.8.
As a preferred implementation, rendering the scene to be rendered according to the rendering priority information and the rendering policy information to generate the VR display scene includes:
acquiring a scene to be rendered, segmenting a current display scene area and a non-current display scene area in the scene to be rendered, segmenting the current display scene area and the non-current display scene area into a plurality of sub-scenes respectively according to preset conditions, and then numbering the sub-scenes sequentially according to the preset conditions;
rendering the current display scene area and the non-current display scene area according to the rendering priority information and the rendering strategy information, wherein after each sub-scene is rendered, the information of rendering completion of the sub-scene is immediately generated and transmitted to the terminal equipment.
As a preferred implementation, acquiring the eye movement data of the user at a preset time frequency and updating the rendering priority information and rendering policy information in real time according to the eye movement data includes:
acquiring eye movement data of a user according to a preset time frequency;
recording the stay time t of the sight of the current user according to the eye movement data;
and acquiring the time t; when t is greater than a preset value, determining, according to the current pose information and visual field information of the user, whether an unrendered area exists within the user's visual field range in the currently displayed VR scene; if no unrendered area exists, rendering according to a preset mode; if one exists, setting the rendering priority information and rendering strategy information of the area to the first rendering priority and the first rendering strategy respectively, rendering it ahead of the queue, and marking priority loading information.
As a preferred implementation, the terminal device acquiring the VR display scene according to a preset condition and visually displaying it in combination with the current display scene area information includes:
the terminal equipment receives an information prompt of the completion of the sub-scene rendering;
acquiring, from the information prompt of rendering completion, whether the sub-scene is marked with priority loading information; if so, downloading it immediately, otherwise downloading in the order of the sub-scene numbers;
splicing the downloaded sub-scenes according to preset conditions to form a VR display scene, wherein the sub-scenes marked with priority loading information are subjected to splicing treatment in advance;
and visually displaying the VR display scene by combining the current display scene area information.
As a preferred implementation, acquiring the scene to be rendered, segmenting the current display scene region and the non-current display scene region in it, dividing each into a plurality of sub-scenes according to preset conditions, and then numbering the sub-scenes sequentially according to preset conditions includes:
acquiring a scene to be rendered, and identifying and marking elements in the scene to be rendered;
dividing a current display scene area and a non-current display scene area in a scene to be rendered;
the method comprises the steps of carrying out region division on a current display scene region and a non-current display scene region according to preset conditions, simultaneously carrying out region numbering, and then carrying out segmentation again into a plurality of sub-scenes according to different region numbers, wherein each sub-scene has a unique region number and is also associated with rendering priority information and rendering strategy information;
acquiring the segmented sub-scenes, identifying element marks in the scenes and distributing weights according to preset conditions;
and acquiring all the divided sub-scenes, and numbering the sub-scenes of the same rendering priority according to the assigned weights, wherein sub-scenes of equal weight are further compared by their distance from the highest-weight sub-scene at the same rendering priority; the one with the smaller distance is numbered earlier in the rendering order, and equal distances are numbered randomly.
Based on the above, the invention further provides a space scene display system based on VR, which includes a VR terminal device and a server;
the VR terminal equipment comprises a head-mounted module, a posture sensing module, a communication module and a work monitoring module,
the work monitoring module is used for sensing a starting signal of the terminal equipment, carrying out wearing identification and/or prompt on a user, enabling the terminal equipment to be worn according to a preset condition, and completing parameter calibration of the VR terminal equipment according to body shape information, pose information and visual field information of the user;
the gesture sensing module is used for collecting and confirming body shape information and pose information of a user;
the head-mounted module is used for acquiring visual field information of a user and acquiring eye movement data of the user according to a preset time frequency; acquiring a VR display scene according to a preset condition, and visually displaying the VR display scene by combining the current display scene area information;
the communication module is used for uploading parameters of the VR terminal equipment, eye movement data of a user, body shape information, pose information and visual field information of the user to the server;
the server is accessed to the Internet and comprises a VR scene database, a rendering processing unit, an instruction processing unit and a data judging unit,
VR display scene data are stored in the VR scene database;
the instruction processing unit is used for responding to a VR scene display request initiated by a user through terminal equipment, determining a VR display scene corresponding to the display request according to a preset condition, calling the VR display scene from a VR scene database, and setting the VR display scene as a scene to be rendered;
the data judgment unit is used for acquiring pose information and view information corresponding to a user, and determining current display scene area information, rendering priority information of different areas and rendering strategy information in a scene to be rendered according to the pose information and the view information; the rendering priority information and the rendering strategy information are updated in real time according to the eye movement data;
and the rendering processing unit is used for rendering the scene to be rendered according to the rendering priority information and the rendering strategy information to generate the VR display scene.
Based on the foregoing, the present invention further provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded by a processor and executed to implement the VR-based spatial scene displaying method.
By adopting the above technical scheme, compared with the prior art, the invention has the following beneficial effects: the scheme responds to a VR scene display request initiated by the user and, by combining the user's pose information and visual field information, assigns different rendering priorities and strategies to the VR display scene; it monitors changes in the user's eye movement and obtains changes in the user's current visual field information in real time, so that the rendering priorities and strategies are updated in time, providing an efficient, flexible and timely visual experience of the VR space display. In addition, the scheme divides the scene to be rendered into sub-scenes by scene segmentation and then numbers them and assigns different rendering orders, so that the rendered sub-scenes can be spliced back together for display based on their numbers and rendered and generated in time according to importance or priority, allowing the terminal equipment to download and load them promptly.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a schematic flow chart illustrating the operation of one embodiment of the method of the present invention;
FIG. 2 is a schematic block diagram of a system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be noted that the following examples are only illustrative of the present invention, and do not limit the scope of the present invention. Similarly, the following examples are only some but not all examples of the present invention, and all other examples obtained by those skilled in the art without any inventive work are within the scope of the present invention.
As shown in fig. 1, the present embodiment provides a VR-based spatial scene display method, which includes:
S01. Sensing a starting signal of the terminal equipment, and identifying and/or prompting the user regarding wearing, so that the terminal equipment is worn according to a preset condition;
S02. Collecting and confirming the body shape information, pose information and visual field information of the user to complete parameter calibration of the terminal equipment;
S03. Responding to a VR scene display request initiated by the user through the terminal equipment, determining the VR display scene corresponding to the display request according to preset conditions, and setting it as the scene to be rendered;
S04. Obtaining the pose information and visual field information corresponding to the user, and determining, according to them, the current display scene area information and the rendering priority information and rendering strategy information of different areas in the scene to be rendered;
S05. Acquiring the eye movement data of the user at a preset time frequency, and updating the rendering priority information and rendering strategy information in real time according to the eye movement data;
S06. Rendering the scene to be rendered according to the rendering priority information, the rendering strategy information and preset conditions to generate the VR display scene;
S07. The terminal equipment acquires the VR display scene according to preset conditions and visually displays it in combination with the current display scene area information.
Wherein the body shape information comprises: height, arm length, leg length; the pose information includes: user pose, body orientation, and facial orientation; the visual field information includes: the visual angle range corresponding to the eyes of the user.
Because physical characteristics differ between users, and to keep VR display errors from growing out of such differences, preferably, when the body shape information, pose information and visual field information of the user are collected and confirmed, a preset action instruction is also output. The user's body posture or eye movement changes are sensed and collected in real time, matched against the preset action instruction, and calibration parameters are generated accordingly; parameter calibration of the terminal equipment is then completed from the user's body shape information, pose information and visual field information together with the calibration parameters. When the terminal equipment starts up, preset initial body shape, pose and visual field information is loaded, and it is updated once the user's own information has been collected. To further confirm this information, the preset action instruction is output for the user to perform, the response is collected in real time, and the loaded body shape, pose and visual field information is updated a second time, which avoids the errors a single collection pass could introduce while the user completes the preset action.
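To make the calibration flow concrete, here is a minimal Python sketch of how preset action instructions could drive parameter calibration; the profile fields, the angle-based pose representation, the sense callback and the sample count of five are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Preset initial values loaded when the terminal equipment starts up
    height_cm: float = 170.0
    arm_length_cm: float = 70.0
    leg_length_cm: float = 90.0
    view_angle_deg: float = 110.0
    calibration: dict = field(default_factory=dict)

def calibrate(profile, sense, instructions):
    """Output each preset action instruction, sense the user's response in
    real time, and store the expected/observed offset as a calibration
    parameter. Averaging several samples keeps a single bad capture, taken
    while the user performs the action, from skewing the calibration."""
    for name, expected_angle in instructions:       # e.g. ("raise_right_arm", 90.0)
        samples = [sense(name) for _ in range(5)]   # hypothetical sensor callback
        observed = sum(samples) / len(samples)
        profile.calibration[name] = observed - expected_angle
    return profile
```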
In this scheme, a server connected to the Internet responds to the VR scene display request initiated by the user through the terminal device. The server is provided with a VR scene database storing VR display scene data; each set of VR display scene data has a unique number, and the user initiates a display request by that number.
In the scheme S04, acquiring pose information and view information corresponding to a user, and determining current display scene area information, rendering priority information of different areas and rendering strategy information in a scene to be rendered according to the pose information and the view information comprises the following steps:
S041. Obtaining the pose information and visual field information corresponding to the user, and taking the user's current position as the first target position of the VR display scene. The first target position is the preset user position during VR scene display and is used to determine relative positions during display, i.e. the relative distance between an element in the display picture and the viewer (the user); setting it further improves the size presentation of objects in the VR scene, so that elements in the display picture appear as true to size and as stable as possible;
S042. Acquiring, according to the pose information, the visual field information and preset conditions, the spatial range the user can observe in the current state;
S043. Acquiring the scene to be rendered, determining in it, according to the pose information and the visual field information, a preset-range scene area directly in front of the user's body and a preset-range scene area entering the user's visual field, setting these as the current display scene area information, setting their rendering priority information to the first rendering priority, and setting their rendering strategy information to the first rendering strategy;
S044. Setting the rendering priority information of scene areas other than the current display scene area information in the scene to be rendered to the second rendering priority, and setting their rendering strategy information to the second rendering strategy.
The rendering processing order of the first rendering priority precedes that of the second rendering priority. The spatial scene rendering definition of the first rendering strategy is a preset value α, and that of the second rendering strategy is α × (0.6 to 0.8). The preset definition value α can be adjusted or set automatically against different preset definitions according to the network speed and latency measured when testing the network transmission speed between the terminal equipment and the server, or it can be set by the user. In addition, once all scenes of the current first and second rendering priorities have been rendered, the scenes rendered at the second rendering priority can be re-rendered with the first rendering strategy and replaced, completing full-scene high-definition display.
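To make S041-S044 and the two strategies concrete, the following is a minimal sketch under stated assumptions: the scene is already divided into regions that carry an identifier and an azimuth, visibility is a simple angular test, and the default clarity factor is 0.7 (the description only fixes the 0.6 to 0.8 range and the α scaling).

```python
FIRST_PRIORITY, SECOND_PRIORITY = 1, 2

def assign_priorities(regions, facing_deg, fov_deg, alpha, factor=0.7):
    """Return {region_id: (priority, clarity)} for the scene to be rendered.
    regions: objects with .region_id and .azimuth_deg (assumed layout)."""
    plan = {}
    for region in regions:
        # Angular offset between the region and the front of the user's body
        bearing = abs((region.azimuth_deg - facing_deg + 180.0) % 360.0 - 180.0)
        if bearing <= fov_deg / 2.0:
            # In front of the user / inside the field of view:
            # first priority, first strategy (full preset clarity alpha)
            plan[region.region_id] = (FIRST_PRIORITY, alpha)
        else:
            # Everything else: second priority, clarity cut to 0.6-0.8 of alpha
            plan[region.region_id] = (SECOND_PRIORITY, alpha * factor)
    return plan
```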
When experiencing VR, a user's gaze often shifts, consciously or otherwise, toward relatively eye-catching elements in the space scene. To render and display the space scene within the user's field of view in time and to improve the flexibility of VR display, as a preferred implementation, in S05 acquiring the eye movement data of the user at a preset time frequency and updating the rendering priority information and rendering strategy information in real time according to the eye movement data includes:
S051. Acquiring the eye movement data of the user at a preset time frequency;
S052. Recording the dwell time t of the user's current line of sight according to the eye movement data;
S053. Acquiring the time t; when t is greater than a preset value, determining, according to the current pose information and visual field information of the user, whether an unrendered area exists within the user's visual field range in the currently displayed VR scene; if no unrendered area exists, rendering proceeds in the preset way; if one exists, its rendering priority information and rendering strategy information are set to the first rendering priority and the first rendering strategy respectively, it is rendered ahead of the queue (queue-jumping rendering), and it is marked with priority loading information.
Adjusting the rendering priority and rendering strategy according to the user's eye movement data increases the rendering speed of the scene within the user's current field of view and improves the flexibility and experience of VR space display.
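A minimal sketch of S051-S053 follows; the dwell threshold of 0.5 s and the region fields are assumptions, and the plan dictionary has the same shape as in the previous sketch.

```python
DWELL_THRESHOLD_S = 0.5   # assumed preset value for the dwell-time check

def update_from_eye_data(dwell_time_s, visible_regions, plan, alpha):
    """If the gaze has dwelt long enough and an unrendered region sits inside
    the current field of view, promote it to the first priority and first
    strategy and mark it for priority loading (queue-jumping rendering)."""
    if dwell_time_s <= DWELL_THRESHOLD_S:
        return plan                              # keep rendering the preset way
    for region in visible_regions:
        if not region.rendered:
            plan[region.region_id] = (1, alpha)  # first priority, first strategy
            region.priority_load = True          # terminal downloads this first
    return plan
```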
In addition, in this solution S06, rendering the scene to be rendered according to the rendering priority information and the rendering policy information, and generating the VR display scene includes:
S061. Acquiring the scene to be rendered, segmenting the current display scene region and the non-current display scene region in it, dividing each into a plurality of sub-scenes according to preset conditions, and then numbering the sub-scenes sequentially according to preset conditions;
S062. Rendering the current display scene area and the non-current display scene area according to the rendering priority information and the rendering strategy information, wherein as soon as each sub-scene finishes rendering, a rendering-completion notice for that sub-scene is generated and transmitted to the terminal equipment.
On the display side, in S07 the terminal device acquiring the VR display scene according to preset conditions and visually displaying it in combination with the current display scene area information includes the following steps (a sketch of the download-and-splice order is given after them):
S071. The terminal equipment receives a notice that a sub-scene has finished rendering;
S072. From the rendering-completion notice, it determines whether the sub-scene is marked with priority loading information; if so, the sub-scene is downloaded immediately, otherwise sub-scenes are downloaded in the order of their numbers;
S073. The downloaded sub-scenes are spliced according to preset conditions to form the VR display scene, with sub-scenes marked with priority loading information spliced first;
S074. The VR display scene is visually displayed in combination with the current display scene area information.
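The download-and-splice order of S071-S074 can be sketched with a priority queue; the notice format and the download/splice hooks are assumptions.

```python
import heapq

def on_render_complete(notice, download_queue):
    """notice = (sub_scene_number, priority_load_flag) from the server."""
    number, priority_load = notice
    # Priority-loaded sub-scenes sort ahead of everything else; within each
    # class, sub-scenes come out in the order of their numbers.
    heapq.heappush(download_queue, (0 if priority_load else 1, number))

def splice_loop(download_queue, download, splice):
    """Download sub-scenes in queue order and splice them by number."""
    while download_queue:
        _, number = heapq.heappop(download_queue)
        sub_scene = download(number)
        splice(sub_scene, at=number)   # numbers key the splice positions
```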
Some traditional VR rendering first renders the whole scene at lower image quality and, only after everything is complete, outputs and loads it in one go; this usually requires a second rendering pass and a certain waiting time. Another approach renders and outputs the scene directly in front of the user first and then completes the remaining regions in turn; it reaches the display stage quickly, but once the user enters the VR space scene more unrendered regions appear in their line of sight, the visible region stays small for a while, and the region the user's gaze is focused on cannot be rendered with priority in time. On this basis, in S061, acquiring the scene to be rendered, segmenting the current display scene region and the non-current display scene region, dividing each again into a plurality of sub-scenes according to preset conditions, and then numbering them sequentially according to preset conditions includes the following steps (a sketch of the numbering rule is given after them):
S0611. Acquiring the scene to be rendered, and identifying and marking the elements in it;
S0612. Segmenting the current display scene area and the non-current display scene area in the scene to be rendered;
S0613. Dividing the current display scene area and the non-current display scene area into regions according to preset conditions and numbering the regions at the same time, then dividing them again into a plurality of sub-scenes according to the different region numbers, each sub-scene having a unique region number and being associated with rendering priority information and rendering strategy information;
S0614. Acquiring the divided sub-scenes, identifying the element marks in each and assigning weights according to preset conditions;
S0615. Acquiring all the divided sub-scenes and numbering sub-scenes of the same rendering priority according to the assigned weights; sub-scenes of equal weight are further compared by their distance from the highest-weight sub-scene at the same rendering priority, the one with the smaller distance is numbered earlier in the rendering order, and equal distances are numbered randomly.
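A sketch of the S0615 numbering rule, under the assumption that sub-scenes carry weight, region_number and a writable render_number fields, and that the spacing between sub-scenes can be approximated by the difference of their region numbers:

```python
import random

def number_sub_scenes(sub_scenes):
    """Number sub-scenes of one rendering priority: higher weight first,
    ties broken by distance to the highest-weight sub-scene, and equal
    distances broken randomly."""
    top = max(sub_scenes, key=lambda s: s.weight)
    def sort_key(s):
        gap = abs(s.region_number - top.region_number)  # assumed distance measure
        return (-s.weight, gap, random.random())        # random tiebreak last
    for i, s in enumerate(sorted(sub_scenes, key=sort_key), start=1):
        s.render_number = i
    return sub_scenes
```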
Because the VR display scene data stored in the VR scene database grows as the data ecosystem expands, and to improve the efficiency of marking internal elements and reduce repetitive manual work, the scheme can also assign weights to elements in the VR display scene automatically by introducing a trained neural network together with a preset weight reference table: weight values are pre-assigned to different element types (such as people, articles and prominent scenery), a trained positioning neural network and detection neural network then locate and mark the elements in the VR display scene data, and the reference table supplies the weight values, so that in step S0614 assigning weights only requires summing the element weights, improving processing efficiency. The training method of the neural networks is prior art and is roughly as follows:
Preset training elements are collected and marked; preset amounts of the training data are extracted as a training set and a validation set; the neural network is trained on the training set and validated on the validation set until it converges, completing the training and yielding the positioning neural network and the detection neural network. The positioning neural network mainly locates elements in the VR display scene data; the detection neural network classifies the located elements and marks them (for example with numbers), and in step S0614 the weights are summed by looking the numbers up in the reference table.
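The look-up-table weighting then reduces to a sum over detected element labels; the table contents below are illustrative assumptions.

```python
# Pre-assigned weight values per element type (assumed numbers)
WEIGHT_TABLE = {"person": 5, "prominent_scenery": 4, "article": 3}

def sub_scene_weight(detected_labels):
    """Sum the pre-assigned weights of the element labels the detection
    network marked in one sub-scene; unknown labels default to 1."""
    return sum(WEIGHT_TABLE.get(label, 1) for label in detected_labels)
```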
In this way, the scene to be rendered can be divided into a plurality of sub-scenes, and the server can then render the sub-scenes in parallel where multiple threads can be established (subject to computing power and thread availability), improving the rendering efficiency of the scene to be rendered.
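A minimal sketch of that parallel step, assuming render and notify hooks and sub-scenes carrying priority and render_number fields:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def render_all(sub_scenes, render, notify, workers=4):
    """Render sub-scenes in a thread pool, submitting higher priorities
    first, and send a completion notice as soon as each one finishes."""
    ordered = sorted(sub_scenes, key=lambda s: (s.priority, s.render_number))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(render, s): s for s in ordered}
        for done in as_completed(futures):
            done.result()                         # propagate rendering errors
            notify(futures[done].render_number)   # immediate completion notice
```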
With reference to fig. 2, based on the foregoing, the present embodiment further provides a space scene display system based on VR, which includes a VR terminal device and a server;
the VR terminal equipment comprises a head-mounted module, a posture sensing module, a communication module and a work monitoring module,
the work monitoring module is used for sensing a starting signal of the terminal equipment, carrying out wearing identification and/or prompt on a user, enabling the terminal equipment to be worn according to a preset condition, and completing parameter calibration of the VR terminal equipment according to body shape information, pose information and visual field information of the user;
the gesture sensing module is used for collecting and confirming body shape information and pose information of a user;
the head-mounted module is used for acquiring visual field information of a user and acquiring eye movement data of the user according to a preset time frequency; acquiring a VR display scene according to a preset condition, and visually displaying the VR display scene by combining the current display scene area information;
the communication module is used for uploading parameters of the VR terminal equipment, eye movement data of the user, body shape information, pose information and visual field information of the user to the server;
the server is accessed to the Internet and comprises a VR scene database, a rendering processing unit, an instruction processing unit and a data judging unit,
VR display scene data are stored in the VR scene database;
the instruction processing unit is used for responding to a VR scene display request initiated by a user through terminal equipment, determining a VR display scene corresponding to the display request according to a preset condition, calling the VR display scene from a VR scene database, and setting the VR display scene as a scene to be rendered;
the data judgment unit is used for acquiring pose information and view information corresponding to a user, and determining current display scene area information, rendering priority information of different areas and rendering strategy information in a scene to be rendered according to the pose information and the view information; the rendering priority information and the rendering strategy information are updated in real time according to the eye movement data;
and the rendering processing unit is used for rendering the scene to be rendered according to the rendering priority information and the rendering strategy information to generate the VR display scene; a skeleton of this server-side division of labour is sketched below.
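The class and method names below simply mirror the unit names in the text; this is a skeleton for orientation, not a prescribed API.

```python
class Server:
    """Internet-facing server: VR scene database plus three processing units."""

    def __init__(self, scene_db):
        self.scene_db = scene_db            # VR scene database: number -> scene data

    def handle_display_request(self, scene_number):
        # Instruction processing unit: look up the requested VR display scene
        # by its unique number and treat it as the scene to be rendered.
        return self.scene_db[scene_number]

    def judge(self, pose_info, view_info, eye_data):
        # Data judgement unit: derive the current display area information and
        # the per-area rendering priorities and strategies (sketched above).
        raise NotImplementedError

    def render(self, scene, plan):
        # Rendering processing unit: render per the priority/strategy plan.
        raise NotImplementedError
```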
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be substantially or partially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a part of the embodiments of the present invention, and not intended to limit the scope of the present invention, and all equivalent devices or equivalent processes performed by the present invention through the contents of the specification and the drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A VR-based spatial scene display method is characterized by comprising the following steps:
responding to a VR scene display request initiated by a user through terminal equipment, determining a VR display scene corresponding to the display request according to a preset condition, and setting the VR display scene as a scene to be rendered;
acquiring pose information and visual field information corresponding to a user, and determining current display scene area information, rendering priority information of different areas and rendering strategy information in a scene to be rendered according to the pose information and the visual field information;
rendering the scene to be rendered according to preset conditions according to the rendering priority information and the rendering strategy information to generate a VR display scene;
and the terminal equipment acquires the VR display scene according to preset conditions, and visually displays the VR display scene by combining with the current display scene area information.
2. The VR-based spatial scene presentation method of claim 1, further comprising:
sensing a starting signal of the terminal equipment, and carrying out wearing identification and/or prompt on a user to enable the terminal equipment to be worn according to a preset condition;
acquiring and confirming the body shape information, the pose information and the visual field information of a user to finish parameter calibration of the terminal equipment;
wherein the body shape information comprises: height, arm length, leg length;
the pose information includes: user pose, body orientation, and face orientation;
the visual field information includes: the visual angle range corresponding to the eyes of the user.
3. The VR-based spatial scene display method of claim 2, wherein when the body shape information, the pose information and the visual field information of the user are collected and confirmed, a preset action instruction is further output, the user's body posture or eye movement changes are sensed and collected in real time and matched against the preset action instruction, calibration parameters are correspondingly generated, and parameter calibration of the terminal device is completed according to the body shape information, the pose information, the visual field information and the calibration parameters of the user.
4. The VR-based spatial scene presentation method of claim 3, further comprising:
and acquiring eye movement data of the user according to a preset time frequency, and updating rendering priority information and rendering strategy information in real time according to the eye movement data.
5. The VR-based spatial scene display method of claim 4, wherein obtaining pose information and view information corresponding to a user, and determining rendering priority information and rendering strategy information for a current displayed scene area and different areas in a scene to be rendered according to the pose information and the view information comprises:
acquiring pose information and visual field information corresponding to a user, and taking the current position of the user as a first target position of a VR display scene;
acquiring a spatial range which can be observed by a user in the current state according to the pose information and the visual field information and preset conditions;
acquiring a scene to be rendered, respectively determining a preset range scene area opposite to the front side of a user body and a preset range scene area entering the visual field of the user in the scene to be rendered according to pose information and the visual field information, setting the preset range scene area as current display scene area information, setting rendering priority information as a first rendering priority, and setting rendering strategy information as a first rendering strategy;
setting rendering priority information corresponding to scene areas except the current display scene area information in the scene to be rendered as a second rendering priority, and setting rendering strategy information as a second rendering strategy;
the rendering processing order of the first rendering priority precedes that of the second rendering priority, the spatial scene rendering definition of the first rendering strategy is a preset value α, and the spatial scene rendering definition of the second rendering strategy is the preset value α multiplied by 0.6 to 0.8.
6. The VR-based spatial scene display method of claim 5, wherein rendering the to-be-rendered scene according to the rendering priority information and the rendering policy information, and generating the VR display scene comprises:
acquiring a scene to be rendered, segmenting a current display scene area and a non-current display scene area in the scene to be rendered, segmenting the current display scene area and the non-current display scene area into a plurality of sub-scenes respectively according to preset conditions, and then numbering the sub-scenes sequentially according to the preset conditions;
and rendering the current display scene area and the non-current display scene area according to the rendering priority information and the rendering strategy information, wherein after each sub-scene is rendered, the information of rendering completion of the sub-scene is immediately generated and transmitted to the terminal equipment.
7. The VR-based spatial scene presentation method of claim 6, wherein the obtaining eye movement data of the user according to the preset time frequency, and the updating the rendering priority information and the rendering policy information in real time according to the eye movement data includes:
acquiring eye movement data of a user according to a preset time frequency;
recording the stay time t of the sight of the current user according to the eye movement data;
acquiring the time t; when t is greater than a preset value, determining, according to the current pose information and visual field information of the user, whether an unrendered area exists within the user's visual field range in the currently displayed VR display scene; if no unrendered area exists, rendering according to a preset mode; if one exists, setting the rendering priority information and rendering strategy information of the area to the first rendering priority and the first rendering strategy respectively, rendering it ahead of the queue, and marking priority loading information;
the terminal equipment acquires the VR display scene according to preset conditions, and the visual display of the VR display scene is combined with the current display scene area information and comprises the following steps:
the terminal equipment receives an information prompt of the completion of the sub-scene rendering;
acquiring, from the information prompt of rendering completion, whether the sub-scene is marked with priority loading information; if so, downloading it immediately, otherwise downloading in the order of the numbers corresponding to the sub-scenes;
splicing the downloaded sub-scenes according to preset conditions to form a VR display scene, wherein the sub-scenes marked with priority loading information are subjected to splicing treatment in advance;
and visually displaying the VR display scene by combining the current display scene area information.
8. The VR-based spatial scene display method of claim 6, wherein acquiring the scene to be rendered, segmenting the current display scene area and the non-current display scene area in the scene to be rendered, dividing each into a plurality of sub-scenes according to preset conditions, and then numbering them sequentially according to the preset conditions comprises:
acquiring a scene to be rendered, and identifying and marking elements in the scene to be rendered;
segmenting a current display scene area and a non-current display scene area in a scene to be rendered;
the method comprises the steps of carrying out region division on a current display scene region and a non-current display scene region according to preset conditions, simultaneously carrying out region numbering, and then carrying out segmentation again into a plurality of sub-scenes according to different region numbers, wherein each sub-scene has a unique region number and is also associated with rendering priority information and rendering strategy information;
acquiring the segmented sub-scenes, identifying element marks in the scenes and distributing weights according to preset conditions;
and acquiring all the divided sub-scenes, and numbering the sub-scenes of the same rendering priority according to the assigned weights, wherein sub-scenes of equal weight are further compared by their distance from the highest-weight sub-scene at the same rendering priority, the one with the smaller distance being numbered earlier in the rendering order and equal distances being numbered randomly.
9. A space scene display system based on VR is characterized in that the space scene display system comprises VR terminal equipment and a server;
the VR terminal equipment comprises a head-mounted module, a posture sensing module, a communication module and a work monitoring module,
the work monitoring module is used for sensing a starting signal of the terminal equipment, carrying out wearing identification and/or prompt on a user, enabling the terminal equipment to be worn according to a preset condition, and completing parameter calibration of the VR terminal equipment according to the body shape information, the pose information and the visual field information of the user;
the gesture sensing module is used for collecting and confirming body shape information and pose information of a user;
the head-mounted module is used for acquiring visual field information of a user and acquiring eye movement data of the user according to a preset time frequency; acquiring a VR display scene according to a preset condition, and visually displaying the VR display scene by combining the current display scene area information;
the communication module is used for uploading parameters of the VR terminal equipment, eye movement data of a user, body shape information, pose information and visual field information of the user to the server;
the server is accessed to the Internet and comprises a VR scene database, a rendering processing unit, an instruction processing unit and a data judging unit,
VR display scene data are stored in the VR scene database;
the instruction processing unit is used for responding to a VR scene display request initiated by a user through terminal equipment, determining a VR display scene corresponding to the display request according to a preset condition, calling the VR display scene from a VR scene database, and setting the VR display scene as a scene to be rendered;
the data judgment unit is used for acquiring pose information and view information corresponding to a user, and determining current display scene area information, rendering priority information of different areas and rendering strategy information in a scene to be rendered according to the pose information and the view information; the rendering priority information and the rendering strategy information are updated in real time according to the eye movement data;
and the rendering processing unit is used for rendering the scene to be rendered according to the rendering priority information and the rendering strategy information to generate the VR display scene.
10. A computer-readable storage medium, characterized in that: the storage medium has at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by a processor to implement the VR based spatial scene rendering method as claimed in any one of claims 1 to 8.
CN202211688498.6A 2022-12-27 2022-12-27 VR-based space scene display method, system and storage medium Pending CN115984519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211688498.6A CN115984519A (en) 2022-12-27 2022-12-27 VR-based space scene display method, system and storage medium


Publications (1)

Publication Number Publication Date
CN115984519A 2023-04-18

Family

ID=85975507


Country Status (1)

Country Link
CN (1) CN115984519A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611472A * 2024-01-24 2024-02-27 四川物通科技有限公司 Fusion method for metaspace and cloud rendering
CN117611472B * 2024-01-24 2024-04-09 四川物通科技有限公司 Fusion method for metaspace and cloud rendering

Similar Documents

Publication Publication Date Title
US11231775B2 (en) Eye image selection
KR102151898B1 (en) Identity authentication method and device based on virtual reality environment
US9135508B2 (en) Enhanced user eye gaze estimation
US9842246B2 (en) Fitting glasses frames to a user
CN108305325A (en) The display methods and device of virtual objects
JP6932224B1 (en) Advertising display system
CN109491508B (en) Method and device for determining gazing object
WO2016118169A1 (en) Rendering glasses shadows
CN113467619B (en) Picture display method and device, storage medium and electronic equipment
JP2023504207A (en) Systems and methods for operating head mounted display systems based on user identification
US20200160602A1 (en) Virtual content display opportunity in mixed reality
CN115984519A (en) VR-based space scene display method, system and storage medium
CN109670456A (en) A kind of content delivery method, device, terminal and storage medium
CN105808190A (en) Display screen display method and terminal equipment
US20190371039A1 (en) Method and smart terminal for switching expression of smart terminal
JP2023536064A (en) Eye Tracking Using Alternate Sampling
CN103581602B (en) Automatically update the method and system of contact head image
CN114967128B (en) Sight tracking system and method applied to VR glasses
CN110262663B (en) Schedule generation method based on eyeball tracking technology and related product
CN111738967B (en) Model generation method and apparatus, storage medium, and electronic apparatus
CN115004235A (en) Augmented state control for anchor-based cross reality applications
US20180144722A1 (en) Display Control Method and Display Control Apparatus
CN114939272B (en) Vehicle-mounted interactive game method and system based on HUD
CN112416114B (en) Electronic device and picture visual angle recognition method thereof
CN106484114B (en) Interaction control method and device based on virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination