CN117934777A - Space arrangement system and method based on virtual reality - Google Patents

Space arrangement system and method based on virtual reality

Info

Publication number: CN117934777A
Application number: CN202410111657.9A
Authority: CN (China)
Prior art keywords: data, user, scene, virtual, space
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 张敏
Current Assignee: Yangzhou Zizai Island Ecotourism Investment Development Co ltd
Original Assignee: Yangzhou Zizai Island Ecotourism Investment Development Co ltd
Application filed by Yangzhou Zizai Island Ecotourism Investment Development Co ltd

Abstract

The invention discloses a space arrangement system and method based on virtual reality, and belongs to the technical field of virtual reality. The system comprises a live-action data acquisition module, a virtual scene construction module, a difference data determination module and a self-adaptive updating module. The live-action data acquisition module acquires image data and environment data in the live-action scene by using acquisition equipment; the virtual scene construction module preprocesses the acquired data, performs image processing and boundary strengthening on the processed data to obtain object contours, constructs two-dimensional plane scene data and maps it into the virtual space to complete virtual scene construction; the difference data determination module tracks the motion trail of the user's avatar in the virtual scene, divides the virtual space into bit planes, regulates the dynamic and static states of the data accordingly, determines difference data within the dynamic data and calculates the data update priority; and the self-adaptive updating module updates the difference areas according to the priority and temporarily stores the original data of the virtual scene.

Description

Space arrangement system and method based on virtual reality
Technical Field
The invention relates to the technical field of virtual reality, in particular to a space arrangement system and method based on virtual reality.
Background
Virtual reality is a three-dimensional, interactive, real-time virtual environment simulated and generated by computer technology, so that a user can interact with the virtual world through sense organs; VR technology provides an immersive experience for users through hardware such as head-mounted displays, motion tracking systems, input devices, and corresponding software;
At present, VR technology is applied to many social fields; in particular, the combination of VR and travel has become a popular new form of entertainment, allowing people to enjoy instant, independent travel experiences through VR equipment. However, a virtual reality travel scene generally requires a technician to update changes in the virtual scene data through a computer; it cannot capture changes in the actual scene and update itself. If the technician and the on-site data acquisition work fail to update the live-action scene data in the software promptly and consistently, the user cannot experience a real-time, dynamic and realistic scene, and the resulting sense of sameness reduces the user's interest in and experience of VR travel.
Disclosure of Invention
The invention aims to provide a space arrangement system and a space arrangement method based on virtual reality, which are used for solving the problems in the background technology.
In order to solve the technical problems, the invention provides the following technical scheme:
A virtual reality-based spatial arrangement method, the method comprising the steps of:
S100, monitoring devices arranged in advance at the live-action site collect image data and dynamic environment data of the live-action site, and the data are packaged and transmitted to a back-end data center;
S200, the data center performs unpacking extraction and data information detection on the received data, performs pixel information plane extraction and image boundary pixel reinforcement on the detected image data, splices the multi-pixel information and restores the plane scene, and performs space projection on the plane scene information combined with the environment data to build a virtual reality scene;
S300, in the virtual scene, the user posture data is mapped into the virtual scene, the motion trail of the user in the virtual scene is tracked, the dynamic and static states of the data in the virtual environment are automatically regulated and controlled through parallel mapping of the real scene environment, a comparison model is constructed, the real scene data and the virtual scene data are dynamically compared through the model, and a self-adaptive updating plan is formulated according to the comparison result;
S400, self-adaptive data regulation and control are performed on the virtual scene through data updating output, the user is prompted with the intelligently regulated data, and the virtual scene update is completed according to the user's instruction.
In S100, the specific steps of collecting image data and dynamic environment data of the live-action site with the monitoring devices arranged in advance at the site, and packaging and transmitting the data to the back-end data center, are as follows:
S101, the region range of data to be acquired in the real scene is determined, the determined region is divided into sub-regions, and multi-view data of each sub-region are acquired; for the multi-view data acquisition, unmanned aerial vehicle equipment performs on-site data acquisition of the live scene along a time line at different heights and angles; a cruising intelligent robot simulates a real user's viewing angle to plan routes and collects scene data of different sub-regions at different times; and camera devices are erected to capture real-time scene images in the different sub-regions; the data collected by the unmanned aerial vehicle and the intelligent robot comprise real scene image data and environment data; the environment data comprise real scene temperature data, humidity data, wind speed and wind direction data and air pressure data;
S102, after the acquired data is integrated, the integrated data is packaged with the corresponding time and sub-region numbers and sent to the back-end data center; the sub-regions are numbered after the real scene region of data acquisition is determined and divided into sub-regions.
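A minimal sketch of the packaging step in S102, assuming a simple in-memory representation; the field names, the JSON encoding and the send_to_data_center stub are illustrative assumptions rather than part of the disclosed scheme:

```python
import json
import time

def package_subregion_data(region_id: int, images: list, environment: dict) -> bytes:
    """Bundle integrated acquisition data with its time stamp and sub-region number (S102)."""
    package = {
        "region_id": region_id,      # sub-region number assigned after dividing the scene
        "timestamp": time.time(),    # acquisition time, used for time-line slicing at the data center
        "images": images,            # e.g. file references from UAV / robot / camera acquisition
        "environment": environment,  # temperature, humidity, wind speed/direction, air pressure
    }
    return json.dumps(package).encode("utf-8")

def send_to_data_center(payload: bytes) -> None:
    """Placeholder transport; the patent does not specify the transmission protocol."""
    print(f"sending {len(payload)} bytes to back-end data center")

if __name__ == "__main__":
    env = {"temperature": 21.5, "humidity": 0.63, "wind_speed": 2.1,
           "wind_direction": "NE", "pressure": 1012.0}
    send_to_data_center(package_subregion_data(region_id=3, images=["img_0001.jpg"], environment=env))
```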
The step S200 is that the data center performs unpacking extraction and data information detection on the received data, performs pixel information plane extraction and image boundary pixel reinforcement on the detected image data, performs splicing and planar scene restoration on multi-pixel information, and performs space projection on planar scene information combined with environmental data to build a virtual reality scene, wherein the specific steps are as follows:
S201, the data center receives the packaged data and slices it independently by time and region to obtain image data and environment data on different time lines corresponding to each region of the real scene; the slice data is cleaned by screening out invalid data and filling in missing data, and the processed image data is scaled and expanded so that the image data of all sub-regions at the same time point are arranged on a normalized plane; the data cleaning comprises screening out and removing blurred image data, erroneous image data and repeated image data from the slice data; the scaling and expansion operations normalize the dimensions of the image data, because the viewing angles and distances involved in the front-end data acquisition cause the same object to appear at different sizes in different images, so the image data must be unified in size before restoration;
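A minimal sketch of the slicing and cleaning in S201, assuming each received package carries region_id, timestamp, images and environment fields and that every image record includes its raw data plus a precomputed sharpness score; the 0.2 blur threshold is an illustrative assumption:

```python
from collections import defaultdict

def slice_and_clean(packages: list[dict], blur_threshold: float = 0.2) -> dict:
    """Group received packages by (region, time) and drop blurred or repeated images (S201)."""
    slices = defaultdict(lambda: {"images": [], "environment": None})
    seen_hashes = set()
    for pkg in packages:
        key = (pkg["region_id"], round(pkg["timestamp"]))   # one slice per region and time point
        for img in pkg["images"]:
            if img.get("sharpness", 1.0) < blur_threshold:  # screen out blurred image data
                continue
            h = hash(img["data"])
            if h in seen_hashes:                            # screen out repeated image data
                continue
            seen_hashes.add(h)
            slices[key]["images"].append(img)
        slices[key]["environment"] = pkg["environment"]     # keep the matching environment data
    return dict(slices)
```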
S202, within the plane, the region number is used as the first-level division marker to perform normalized division of the image data of each sub-region; the time node data of the images are used as the second-level division marker to cluster the image data of the same time point within the corresponding sub-region; the pixel point data of the clustered images are located and each pixel point's data is extracted; the pixel point data are color data and brightness data;
S203, pixel blurring processing is performed on the pixel points of the real scene object images in a single image using the pixel point color data and brightness data; the blurring processing blurs the edges of the object image and retains only the color and brightness of the object's pixel points, so that many interfering details in the image are denoised and the boundaries between different objects in the image become clear, yielding the pixel boundaries of the object images; boundary brightness enhancement is then applied to the object pixel points of the blurred image to obtain the occupancy contour of each object in the image data; contour matching is performed on the processed image data of the same time point and sub-region, and the matched images are spliced to obtain more complete image data; the operation is repeated until the image data of the sub-region at the same time point is completely constructed; during matching and splicing, comparison of object contours across the multiple images confirms whether the same object appears repeatedly in different images; if so, the object requires independent data extraction, and after splicing is completed the complete image of the region at that time point is restored;
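A rough illustration of the blurring and boundary-enhancement idea in S203 using OpenCV; the Gaussian kernel size, Canny thresholds and contour-area filter are assumptions, not values taken from the disclosure:

```python
import cv2
import numpy as np

def object_contours(image_bgr: np.ndarray) -> list[np.ndarray]:
    """Blur away fine detail, enhance object boundaries, and return occupancy contours (S203)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (9, 9), 0)           # suppress interfering details, keep coarse brightness
    edges = cv2.Canny(blurred, 50, 150)                   # brightness-based boundary enhancement
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # thicken boundaries so contours close
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 100.0]  # drop tiny noise contours

def same_object(contour_a: np.ndarray, contour_b: np.ndarray, threshold: float = 0.1) -> bool:
    """Contour matching used to decide whether two images show the same object before splicing."""
    return cv2.matchShapes(contour_a, contour_b, cv2.CONTOURS_MATCH_I1, 0.0) < threshold
```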
S204, the restored two-dimensional image data is taken as the reference object, the scaling used during image data acquisition is taken as the restoration scaling factor, and the three-dimensional space scene is filled in the virtual space by mapping; in the mapped scene space, the environment data corresponding to the time point and position are blended into the restored scene region and its moment, realizing virtual parallel-space mapping of the real scene at a given position and moment; the virtual mapping scenes of multiple time points are linked along the corresponding time line, thereby completing the construction of the virtual scene.
In S300, within the virtual scene, the user posture data is mapped into the virtual scene, the motion trail of the user in the virtual scene is tracked, the dynamic and static states of the data in the virtual environment are automatically regulated and controlled through parallel mapping of the real scene environment, a comparison model is constructed, the real scene data and the virtual scene data are dynamically compared through the model, and a self-adaptive update plan is formulated according to the comparison result; the specific steps are as follows:
S301, when a user logs into the virtual scene, the user's physical data is acquired after granting the user permission, and the user's avatar is constructed in the virtual scene through twin technology according to the user's physical data;
S302, the behavior track of the user in the virtual space is captured through the sensors of the devices in the VR equipment worn by the user; the user behavior track comprises the user's head motion track, hand motion track, leg motion track and trunk motion track; the observable surface that the user's face is oriented towards is taken as the front bit plane in the virtual space, with the initial user horizontal viewing angle range [θ1, θ2] and the initial user vertical viewing angle range [θ3, θ4]; correspondingly, the non-observable surface that the user's back is oriented towards is the back bit plane in the virtual space, with the initial user horizontal viewing angle range [α1, α2] and the initial user vertical viewing angle range [α3, α4]; the back bit plane cannot be observed by the user, so its corresponding viewing range is the user's visual blind-area range; the track of the user's head is captured, and if a head rotation angle β is captured, the horizontal and vertical rotation components βh and βv of the head rotation are calculated as βh = β·sin β and βv = β·cos β; here θ1 is the starting angle of the initial horizontal viewing angle of the user's front bit plane, θ2 is its ending angle, θ3 is the starting angle of the initial vertical viewing angle of the front bit plane, θ4 is its ending angle, α1 is the starting angle of the initial horizontal viewing angle of the user's back bit plane, α2 is its ending angle, α3 is the starting angle of the initial vertical viewing angle of the back bit plane, and α4 is its ending angle; β is bounded by the maximum angle through which a normal person's head can rotate in a single direction; the front and back bit-plane viewing angle ranges of the user in the virtual space are dynamically adjusted according to the rotation of the user's head; if the head rotates in a single horizontal direction, only the horizontal viewing angle ranges of the front and back bit planes are adjusted, with results [θ1 ± β, θ2 ± β] and [α1 ± β, α2 ± β]; the angle is subtracted when the head rotates horizontally to the left and added when it rotates horizontally to the right; if the head rotates in a single vertical direction, only the vertical viewing angle ranges of the front and back bit planes are adjusted, with results [θ3 ± β, θ4 ± β] and [α3 ± β, α4 ± β], where the angle is subtracted when the head turns vertically downward and added when it turns vertically upward; if the head rotation has both horizontal and vertical components, the horizontal and vertical viewing angle ranges of both bit planes are adjusted, with results [θ1 ± βh, θ2 ± βh], [θ3 ± βv, θ4 ± βv] and [α1 ± βh, α2 ± βh], [α3 ± βv, α4 ± βv]; when the head rotates obliquely up and to the left, the horizontal viewing angle change is a subtraction and the vertical change is an addition; when the head rotates obliquely down and to the left, both the horizontal and the vertical changes are subtractions; when the head rotates obliquely up and to the right, both the horizontal and the vertical changes are additions; and when the head rotates obliquely down and to the right, the horizontal change is an addition and the vertical change is a subtraction;
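A minimal sketch of the viewing-range adjustment described in S302, using the patent's decomposition βh = β·sin β, βv = β·cos β; the tuple-based range representation, the sign convention encoded in the two flags and the example initial ranges are assumptions made only for illustration:

```python
import math

def rotation_components(beta: float) -> tuple[float, float]:
    """Split a head rotation angle into horizontal/vertical components as in S302."""
    return beta * math.sin(beta), beta * math.cos(beta)

def adjust_ranges(front_h, front_v, back_h, back_v, beta, left: bool, up: bool):
    """Shift the front/back bit-plane viewing ranges after an oblique head rotation."""
    beta_h, beta_v = rotation_components(beta)
    h_sign = -1.0 if left else 1.0   # leftward rotation subtracts, rightward adds
    v_sign = 1.0 if up else -1.0     # upward rotation adds, downward subtracts

    def shift(rng, d):
        return (rng[0] + d, rng[1] + d)

    return (shift(front_h, h_sign * beta_h), shift(front_v, v_sign * beta_v),
            shift(back_h, h_sign * beta_h), shift(back_v, v_sign * beta_v))

if __name__ == "__main__":
    # Assumed initial ranges: front horizontal [0, pi], front vertical [0, pi/2], back the complement.
    fh, fv = (0.0, math.pi), (0.0, math.pi / 2)
    bh, bv = (math.pi, 2 * math.pi), (math.pi / 2, math.pi)
    print(adjust_ranges(fh, fv, bh, bv, beta=math.pi / 6, left=True, up=True))
```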
S303, the bit-plane ranges are automatically adjusted according to the user's motion trail; dynamic real-scene mapping simulation is performed on the space data of the front bit plane in the virtual space, and static real-scene data dormancy storage is performed on the space data of the back bit plane in the virtual space; the data dormancy space is the space, within the back bit plane, lying behind the parallel vertical plane located at distance s from the vertical plane on which the user is standing; here s is the distance from the end of the hand to the sole of the foot when the user's arm and leg are extended in a straight line; by tracking the user's track in real time and adjusting the dynamic and static states of the space data in this way, the overall calculation load of the virtual space can be reduced while the user's VR requirements are met, and the probability of erroneous data is reduced;
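A small sketch of the dynamic/dormant split in S303, assuming a coordinate frame whose axis points from the user's back towards the front bit plane; the reach distance s and the point representation are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ScenePoint:
    x: float           # signed distance along the user's facing direction (negative = behind the user)
    dormant: bool = False

def update_dormancy(points: list[ScenePoint], reach_s: float) -> None:
    """Mark back bit-plane points beyond the plane at distance s behind the user as dormant (S303)."""
    for p in points:
        # Points further than s behind the standing plane sleep; everything else stays dynamically mapped.
        p.dormant = p.x < -reach_s

points = [ScenePoint(3.0), ScenePoint(-1.0), ScenePoint(-2.5)]
update_dormancy(points, reach_s=2.0)   # e.g. s = 2 m, as in the embodiment
print([p.dormant for p in points])     # [False, False, True]
```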
S304, a dynamic space coordinate system of the virtual space is constructed with the position of the user's eyes as the space coordinate base point; the coordinate data of each object data point mapped into the virtual space from the virtual scene is acquired using this space coordinate system; the object data and environment data of the position corresponding to the real scene are stored at each coordinate point of the space in a converged data-tree form; the object data are the data of the object in the corresponding real scene after image blurring and strengthening processing, and comprise the object's color data, brightness data and contour data; in converged data-tree storage, the data at a point is displayed in data-tree form when it is called, and is kept packed internally when it is not called; the branches of the data tree are the object data type and the environment data type at the current position, the root of the data tree is the current position data, and the branch and leaf parts of the data tree are the occupied spaces of the object data and environment data after normalization and conversion;
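A minimal sketch of the converged data-tree storage in S304; the class layout and the packed/unpacked toggle are assumptions about one way such a structure could be realized:

```python
from dataclasses import dataclass, field

@dataclass
class DataTree:
    """Per-coordinate data tree: root = position, branches = object/environment data types."""
    position: tuple[float, float, float]                  # root: coordinates relative to the user's eyes
    object_data: dict = field(default_factory=dict)       # branch: color, brightness, contour of the object
    environment_data: dict = field(default_factory=dict)  # branch: temperature, humidity, wind, pressure
    _packed: bool = True                                   # kept packed until the point is called

    def call(self) -> dict:
        """Unpack and display the tree when the coordinate point is queried."""
        self._packed = False
        return {"position": self.position,
                "object": self.object_data,
                "environment": self.environment_data}

    def release(self) -> None:
        """Re-pack the node when it is no longer being called."""
        self._packed = True

node = DataTree((1.0, 0.5, 2.0),
                object_data={"color": (120, 80, 60), "brightness": 0.7, "contour": "poly_17"},
                environment_data={"temperature": 21.5})
print(node.call()["object"]["brightness"])
```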
When difference analysis is carried out between the scene in the virtual space and the real scene, dynamic comparison analysis of the virtual scene and the real scene is performed by constructing a comparison model; the model construction steps are as follows:
S304-1, difference analysis is performed by retrieving the data at the same mapping space position, and the object data and environment data at the same position are normalized and converted; the data trees at the same coordinates are superposed by parallel mapping, and the difference value λ between the two data trees is calculated as λ = Pvs / Prs, where Pvs is the space volume of the data tree at the coordinate point in the virtual space and Prs is the space volume of the data tree at the coordinate point in the real scene; if λ > 1+γ or λ < 1−γ, it is judged that a difference exists between the environment data at the current coordinate point in the virtual scene and the environment data in the real scene, where γ is an error correction value; when the data difference between a point in the virtual scene and the corresponding point in the real scene is small, it is regarded as a data calculation error, or as a difference too small to attract the user's attention, so a result with a small calculated difference is treated as consistent with the real scene;
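A sketch of the per-point difference test in S304-1, assuming λ is the ratio of the two data-tree space volumes implied by the 1±γ threshold; the value γ = 0.05 is an illustrative assumption, and volume computation itself is out of scope here:

```python
def point_differs(p_vs: float, p_rs: float, gamma: float = 0.05) -> bool:
    """Return True when the virtual/real data-tree volumes differ beyond the error margin (S304-1)."""
    lam = p_vs / p_rs                  # difference value between the superposed data trees
    return lam > 1.0 + gamma or lam < 1.0 - gamma

# Small calculated differences are attributed to computation error and treated as consistent.
print(point_differs(1.02, 1.0))   # False: within the error correction value
print(point_differs(1.30, 1.0))   # True: flagged as a difference point
```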
S304-2, spherical radiation around each coordinate point is performed, with V as the unit radiation volume; the number of coordinate points within the unit radiation sphere at which the virtual scene and the real scene differ is counted, and the difference occupation ratio k within the unit radiation sphere is calculated as k = m / M, where m is the number of difference coordinate points in the unit radiation sphere and M is the total number of coordinate points in the unit radiation sphere;
S304-3, the comprehensive difference degree η of each unit radiation sphere is calculated by analyzing the difference data of the unit radiation spheres in the dynamic-data front bit-plane space of the virtual space, based on the difference ratio k and the set λ(m) of difference values of the difference coordinate points within the unit radiation sphere; the distance z from the center point of each unit radiation sphere to the coordinate base point is collected, and a distance influence coefficient q derived from z is taken; the data difference influence degree D of each unit radiation sphere is then calculated as D = q × η;
S304-4, the data difference influence degree results of the unit radiation spheres are sorted by numerical value, and the correction and update order of the difference data at the corresponding unit radiation sphere positions in the virtual space is formulated in order from large to small; by introducing the distance between the position of the difference data in the virtual space and the user as an influence coefficient, the aggregated difference degree is calculated, which ensures a comprehensive evaluation of both positions with large data differences and positions close to the user's experience, so that a reasonable update priority is formulated.
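A sketch of the sphere-level aggregation and priority ordering in S304-2 to S304-4; because the source does not reproduce the exact formulas for η and q, this assumes η = k multiplied by the mean deviation of the λ values within the sphere and q = 1/z, which are illustrative choices only:

```python
from dataclasses import dataclass

@dataclass
class RadiationSphere:
    center_distance_z: float   # distance from the sphere center to the coordinate base point (the user's eyes)
    lambdas: list[float]       # difference values of the difference coordinate points inside the sphere
    total_points: int          # M: all coordinate points inside the unit radiation sphere

def influence_degree(s: RadiationSphere) -> float:
    """D = q * eta for one unit radiation sphere (assumed forms for eta and q)."""
    m = len(s.lambdas)
    k = m / s.total_points                                             # S304-2: difference occupation ratio
    eta = k * sum(abs(l - 1.0) for l in s.lambdas) / m if m else 0.0   # assumed comprehensive difference degree
    q = 1.0 / s.center_distance_z                                      # assumed distance influence coefficient
    return q * eta

def update_order(spheres: list[RadiationSphere]) -> list[int]:
    """S304-4: indices of spheres sorted from largest to smallest influence degree."""
    return sorted(range(len(spheres)), key=lambda i: influence_degree(spheres[i]), reverse=True)

spheres = [RadiationSphere(4.0, [1.3, 0.8], 50), RadiationSphere(1.5, [1.2], 40)]
print(update_order(spheres))
```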
In the step S400, the self-adaptive data regulation and control are carried out on the virtual scene through data updating output, the regulation and control data are intelligently prompted to a user, and the specific steps of completing the virtual scene updating according to the user instruction are as follows:
S401, according to the priority ordering of the difference positions to be corrected in the virtual scene, the data at the corresponding positions in the real scene are retrieved and mapped into the virtual scene as replacements;
And S402, after the virtual scene data is updated, retaining the original data of the virtual scene before updating in the current virtual scene experience of the user, and allowing the user to restore the data.
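A minimal sketch of the replacement update and restore flow in S401/S402; the snapshot-based rollback is an assumption about how "retaining the original data" could be realized:

```python
class VirtualScene:
    """Holds per-position scene data keyed by coordinate, with one pre-update snapshot kept per session."""
    def __init__(self, data: dict):
        self.data = data
        self._snapshot = None

    def apply_updates(self, ordered_positions: list, real_scene: dict) -> None:
        """S401: replace data at difference positions, following the priority ordering."""
        self._snapshot = dict(self.data)    # S402: retain the pre-update virtual scene data
        for pos in ordered_positions:
            self.data[pos] = real_scene[pos]

    def restore(self) -> None:
        """S402: roll back to the retained original data on the user's instruction."""
        if self._snapshot is not None:
            self.data = dict(self._snapshot)

scene = VirtualScene({(0, 0, 1): "old_tree"})
scene.apply_updates([(0, 0, 1)], {(0, 0, 1): "new_tree"})
scene.restore()
print(scene.data[(0, 0, 1)])   # old_tree
```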
The space arrangement system based on the virtual reality comprises a live-action data acquisition module, a virtual scene construction module, a difference data determination module and a self-adaptive update module;
The live-action data acquisition module acquires image data and environment data in the live-action scene using data acquisition equipment at the front end; the virtual scene construction module preprocesses the acquired data, performs image blurring processing and color strengthening on the processed image data, enhances the display of the strengthened color boundary pixels to obtain object contours, re-matches the contour data to construct two-dimensional plane scene data, and maps the two-dimensional scene data into the virtual space in parallel to complete the virtual scene construction; the difference data determination module projects the user's avatar, tracks the user's action track in the virtual scene, divides the virtual space into bit planes according to the user track, and regulates the dynamic and static states of the data of the virtual space according to the bit planes; it analyses the differences between corresponding positions of the virtual scene and the real scene on the dynamic-data bit plane and calculates the data update priority according to the regional differences; the self-adaptive updating module provisionally retains the original data of the virtual scene and, according to the update priority of the difference areas, performs data replacement updating on the difference areas;
the live-action data acquisition module is connected with the virtual scene construction module; the virtual scene construction module is connected with the difference data determination module; the difference data determining module is connected with the adaptive updating module.
The live-action data acquisition module comprises a multi-angle data acquisition unit and a data encapsulation transmission unit; the multi-angle data acquisition unit determines the region range of data to be acquired in the real scene, divides the determined region into sub-regions, and acquires multi-angle data of each sub-region; for the multi-view data acquisition, unmanned aerial vehicle equipment performs on-site data acquisition of the live scene along a time line at different heights and angles; a cruising intelligent robot simulates a real user's viewing angle to plan routes and collects scene data of different sub-regions at different times; and camera devices are erected to capture real-time scene images in the different sub-regions; the data collected by the unmanned aerial vehicle and the intelligent robot comprise real scene image data and environment data; the environment data comprise real scene temperature data, humidity data, wind speed and wind direction data and air pressure data; the data encapsulation transmission unit integrates the collected data, packages the integrated data with the corresponding time and sub-region numbers and sends them to the back-end data center; the sub-regions are numbered after the real scene region of data acquisition is determined and divided into sub-regions.
The virtual scene construction module comprises a data preprocessing unit, an image data strengthening unit and a virtual scene construction unit; the data preprocessing unit slices the data independently by time and region to obtain image data and environment data on different time lines corresponding to each region of the real scene; the slice data is cleaned by screening out invalid data and filling in missing data, and the processed image data is scaled and expanded so that the image data of all sub-regions at the same time point are arranged on a normalized plane; the image data strengthening unit, within the plane, uses the region number as the first-level division marker to perform normalized division of the image data of each sub-region; the time node data of the images are used as the second-level division marker to cluster the image data of the same time point within the corresponding sub-region; the pixel point data of the clustered images are located and each pixel point's data is extracted; the pixel point data are color data and brightness data; pixel blurring processing is performed on the pixel points of the real scene object images in a single image using the pixel point color data and brightness data; boundary brightness enhancement is applied to the object pixel points of the blurred image to obtain the occupancy contour of each object in the image data; contour matching is performed on the processed image data of the same time point and sub-region, and the matched images are spliced to obtain more complete image data; the operation is repeated until the image data of the sub-region at the same time point is completely constructed; the virtual scene construction unit takes the restored two-dimensional image data as the reference object and, with the scaling used during image data acquisition as the restoration scaling factor, fills the three-dimensional space scene in the virtual space by mapping; in the mapped scene space, the environment data corresponding to the time point and position are blended into the restored scene region and its moment, realizing virtual parallel-space mapping of the real scene at a given position and moment; the virtual mapping scenes of multiple time points are linked along the corresponding time line, thereby completing the construction of the virtual scene.
The difference data determining module comprises a user track tracking unit, a dynamic and static data regulation and control unit and a data difference determining unit; when the user logs into the virtual scene, the user track tracking unit acquires the user's physical data after granting the user permission, and constructs the user's avatar in the virtual scene through twin technology according to the user's physical data; the behavior track of the user in the virtual space is captured through the sensors of the devices in the VR equipment worn by the user; the user behavior track comprises the user's head motion track, hand motion track, leg motion track and trunk motion track; as the user's dynamic state changes, the dynamic and static data regulation and control unit dynamically divides the virtual space into a front bit plane and a back bit plane according to the virtual scene range observed by the user, which changes with the user's viewing angle in the virtual space; the observable surface that the user's face is oriented towards is taken as the front bit plane in the virtual space, and the non-observable surface that the user's back is oriented towards is taken as the back bit plane in the virtual space; according to the user's dynamic motion range and observation range, dynamic real-scene mapping simulation is performed on the front bit-plane area and the back bit-plane area that the user can reach, and static real-scene data dormancy storage is performed on the back bit-plane area that the user can neither reach nor observe; the data difference determining unit constructs a data comparison model between the real scene data and the virtual scene data, analyses through model calculation the difference point data, difference area data and influence degree of the difference areas between the two scenes, and orders the correction priorities of the difference areas.
The self-adaptive updating module comprises a data updating control unit and a data restoring unit; according to the priority ordering of the difference positions to be corrected in the virtual scene, the data updating control unit retrieves the data at the corresponding positions in the real scene and maps them into the virtual scene as replacements; after the virtual scene data is updated, the data restoring unit retains the original pre-update virtual scene data within the user's current virtual scene experience, so that the user can restore the data.
Compared with the prior art, the application has the following beneficial effects: through the multi-module functions, data is collected at the front end and processed at the back end, the cloud builds the virtual scene, and synchronous updating of the cloud and the actual scene is achieved; by tracking the user's action track in the virtual scene, the virtual scene is dynamically classified into bit planes: the virtual space that the user can see and touch is synchronously mapped against the actual environment to simulate the data scene of the actual environment, while the virtual space that the user can neither touch nor see keeps its pre-mapped static data and does not participate in real-environment changes; in addition, in the dynamic-data virtual scene area, the virtual space scene data and the real scene data are compared and analysed in real time by constructing a comparison model, and the update priority of the virtual space difference data is analysed from multiple aspects by combining calculation error, the influence degree of regional difference data and the user's bodily perception, ensuring that the authenticity and correctness of the virtual space are maintained without the user perceiving the process.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
fig. 1 is a schematic structural diagram of a spatial arrangement system based on virtual reality according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the present invention provides the following technical solutions:
A virtual reality-based spatial arrangement method, the method comprising the steps of:
S100, monitoring devices arranged in advance at the live-action site collect image data and dynamic environment data of the live-action site, and the data are packaged and transmitted to a back-end data center;
S200, the data center performs unpacking extraction and data information detection on the received data, performs pixel information plane extraction and image boundary pixel reinforcement on the detected image data, splices the multi-pixel information and restores the plane scene, and performs space projection on the plane scene information combined with the environment data to build a virtual reality scene;
S300, in the virtual scene, the user posture data is mapped into the virtual scene, the motion trail of the user in the virtual scene is tracked, the dynamic and static states of the data in the virtual environment are automatically regulated and controlled through parallel mapping of the real scene environment, a comparison model is constructed, the real scene data and the virtual scene data are dynamically compared through the model, and a self-adaptive updating plan is formulated according to the comparison result;
S400, self-adaptive data regulation and control are performed on the virtual scene through data updating output, the user is prompted with the intelligently regulated data, and the virtual scene update is completed according to the user's instruction.
In S100, the specific steps of collecting image data and dynamic environment data of the live-action site with the monitoring devices arranged in advance at the site, and packaging and transmitting the data to the back-end data center, are as follows:
S101, the region range of data to be acquired in the real scene is determined, the determined region is divided into sub-regions, and multi-view data of each sub-region are acquired; for the multi-view data acquisition, unmanned aerial vehicle equipment performs on-site data acquisition of the live scene along a time line at different heights and angles; a cruising intelligent robot simulates a real user's viewing angle to plan routes and collects scene data of different sub-regions at different times; and camera devices are erected to capture real-time scene images in the different sub-regions; the data collected by the unmanned aerial vehicle and the intelligent robot comprise real scene image data and environment data; the environment data comprise real scene temperature data, humidity data, wind speed and wind direction data and air pressure data;
S102, after the acquired data is integrated, the integrated data is packaged with the corresponding time and sub-region numbers and sent to the back-end data center; the sub-regions are numbered after the real scene region of data acquisition is determined and divided into sub-regions.
The step S200 is that the data center performs unpacking extraction and data information detection on the received data, performs pixel information plane extraction and image boundary pixel reinforcement on the detected image data, performs splicing and planar scene restoration on multi-pixel information, and performs space projection on planar scene information combined with environmental data to build a virtual reality scene, wherein the specific steps are as follows:
S201, the data center receives the packaged data and slices it independently by time and region to obtain image data and environment data on different time lines corresponding to each region of the real scene; the slice data is cleaned by screening out invalid data and filling in missing data, and the processed image data is scaled and expanded so that the image data of all sub-regions at the same time point are arranged on a normalized plane;
S202, within the plane, the region number is used as the first-level division marker to perform normalized division of the image data of each sub-region; the time node data of the images are used as the second-level division marker to cluster the image data of the same time point within the corresponding sub-region; the pixel point data of the clustered images are located and each pixel point's data is extracted; the pixel point data are color data and brightness data;
S203, pixel blurring processing is performed on the pixel points of the real scene object images in a single image using the pixel point color data and brightness data; boundary brightness enhancement is applied to the object pixel points of the blurred image to obtain the occupancy contour of each object in the image data; contour matching is performed on the processed image data of the same time point and sub-region, and the matched images are spliced to obtain more complete image data; the operation is repeated until the image data of the sub-region at the same time point is completely constructed;
S204, the restored two-dimensional image data is taken as the reference object, the scaling used during image data acquisition is taken as the restoration scaling factor, and the three-dimensional space scene is filled in the virtual space by mapping; in the mapped scene space, the environment data corresponding to the time point and position are blended into the restored scene region and its moment, realizing virtual parallel-space mapping of the real scene at a given position and moment; the virtual mapping scenes of multiple time points are linked along the corresponding time line, thereby completing the construction of the virtual scene.
In S300, within the virtual scene, the user posture data is mapped into the virtual scene, the motion trail of the user in the virtual scene is tracked, the dynamic and static states of the data in the virtual environment are automatically regulated and controlled through parallel mapping of the real scene environment, a comparison model is constructed, the real scene data and the virtual scene data are dynamically compared through the model, and a self-adaptive update plan is formulated according to the comparison result; the specific steps are as follows:
S301, when a user logs into the virtual scene, the user's physical data is acquired after granting the user permission, and the user's avatar is constructed in the virtual scene through twin technology according to the user's physical data;
S302, the behavior track of the user in the virtual space is captured through the sensors of the devices in the VR equipment worn by the user; the user behavior track comprises the user's head motion track, hand motion track, leg motion track and trunk motion track; the observable surface that the user's face is oriented towards is taken as the front bit plane in the virtual space, with the initial user horizontal viewing angle range [θ1, θ2] and the initial user vertical viewing angle range [θ3, θ4]; correspondingly, the non-observable surface that the user's back is oriented towards is the back bit plane in the virtual space, with the initial user horizontal viewing angle range [α1, α2] and the initial user vertical viewing angle range [α3, α4]; the track of the user's head is captured, and if a head rotation angle β is captured, the horizontal and vertical rotation components βh and βv of the head rotation are calculated as βh = β·sin β and βv = β·cos β; here θ1 is the starting angle of the initial horizontal viewing angle of the user's front bit plane, θ2 is its ending angle, θ3 is the starting angle of the initial vertical viewing angle of the front bit plane, θ4 is its ending angle, α1 is the starting angle of the initial horizontal viewing angle of the user's back bit plane, α2 is its ending angle, α3 is the starting angle of the initial vertical viewing angle of the back bit plane, and α4 is its ending angle; the front and back bit-plane viewing angle ranges of the user in the virtual space are dynamically adjusted according to the rotation of the user's head; if the head rotates in a single horizontal direction, only the horizontal viewing angle ranges of the front and back bit planes are adjusted, with results [θ1 ± β, θ2 ± β] and [α1 ± β, α2 ± β]; the angle is subtracted when the head rotates horizontally to the left and added when it rotates horizontally to the right; if the head rotates in a single vertical direction, only the vertical viewing angle ranges of the front and back bit planes are adjusted, with results [θ3 ± β, θ4 ± β] and [α3 ± β, α4 ± β], where the angle is subtracted when the head turns vertically downward and added when it turns vertically upward; if the head rotation has both horizontal and vertical components, the horizontal and vertical viewing angle ranges of both bit planes are adjusted, with results [θ1 ± βh, θ2 ± βh], [θ3 ± βv, θ4 ± βv] and [α1 ± βh, α2 ± βh], [α3 ± βv, α4 ± βv]; when the head rotates obliquely up and to the left, the horizontal viewing angle change is a subtraction and the vertical change is an addition; when the head rotates obliquely down and to the left, both the horizontal and the vertical changes are subtractions; when the head rotates obliquely up and to the right, both the horizontal and the vertical changes are additions; and when the head rotates obliquely down and to the right, the horizontal change is an addition and the vertical change is a subtraction;
S303, the bit-plane ranges are automatically adjusted according to the user's motion trail; dynamic real-scene mapping simulation is performed on the space data of the front bit plane in the virtual space, and static real-scene data dormancy storage is performed on the space data of the back bit plane in the virtual space; the data dormancy space is the space, within the back bit plane, lying behind the parallel vertical plane located at distance s from the vertical plane on which the user is standing; here s is the distance from the end of the hand to the sole of the foot when the user's arm and leg are extended in a straight line;
S304, a dynamic space coordinate system of the virtual space is constructed with the position of the user's eyes as the space coordinate base point; the coordinate data of each object data point mapped into the virtual space from the virtual scene is acquired using this space coordinate system; the object data and environment data of the position corresponding to the real scene are stored at each coordinate point of the space in a converged data-tree form; the object data are the data of the object in the corresponding real scene after image blurring and strengthening processing, and comprise the object's color data, brightness data and contour data;
When difference analysis is carried out between the scene in the virtual space and the real scene, dynamic comparison analysis of the virtual scene and the real scene is performed by constructing a comparison model; the model construction steps are as follows:
S304-1, difference analysis is performed by retrieving the data at the same mapping space position, and the object data and environment data at the same position are normalized and converted; the data trees at the same coordinates are superposed by parallel mapping, and the difference value λ between the two data trees is calculated as λ = Pvs / Prs, where Pvs is the space volume of the data tree at the coordinate point in the virtual space and Prs is the space volume of the data tree at the coordinate point in the real scene; if λ > 1+γ or λ < 1−γ, it is judged that a difference exists between the environment data at the current coordinate point in the virtual scene and the environment data in the real scene, where γ is an error correction value;
S304-2, spherical radiation around each coordinate point is performed, with V as the unit radiation volume; the number of coordinate points within the unit radiation sphere at which the virtual scene and the real scene differ is counted, and the difference occupation ratio k within the unit radiation sphere is calculated as k = m / M, where m is the number of difference coordinate points in the unit radiation sphere and M is the total number of coordinate points in the unit radiation sphere;
S304-3, the comprehensive difference degree η of each unit radiation sphere is calculated by analyzing the difference data of the unit radiation spheres in the dynamic-data front bit-plane space of the virtual space, based on the difference ratio k and the set λ(m) of difference values of the difference coordinate points within the unit radiation sphere; the distance z from the center point of each unit radiation sphere to the coordinate base point is collected, and a distance influence coefficient q derived from z is taken; the data difference influence degree D of each unit radiation sphere is then calculated as D = q × η;
S304-4, the data difference influence degree results of the unit radiation spheres are sorted by numerical value, and the correction and update order of the difference data at the corresponding unit radiation sphere positions in the virtual space is set in order from large to small.
In the step S400, the self-adaptive data regulation and control are carried out on the virtual scene through data updating output, the regulation and control data are intelligently prompted to a user, and the specific steps of completing the virtual scene updating according to the user instruction are as follows:
S401, according to the priority ordering of the difference positions to be corrected in the virtual scene, the data at the corresponding positions in the real scene are retrieved and mapped into the virtual scene as replacements;
And S402, after the virtual scene data is updated, retaining the original data of the virtual scene before updating in the current virtual scene experience of the user, and allowing the user to restore the data.
The space arrangement system based on the virtual reality comprises a live-action data acquisition module, a virtual scene construction module, a difference data determination module and a self-adaptive update module;
The live-action data acquisition module acquires image data and environment data in the live-action scene using data acquisition equipment at the front end; the virtual scene construction module preprocesses the acquired data, performs image blurring processing and color strengthening on the processed image data, enhances the display of the strengthened color boundary pixels to obtain object contours, re-matches the contour data to construct two-dimensional plane scene data, and maps the two-dimensional scene data into the virtual space in parallel to complete the virtual scene construction; the difference data determination module projects the user's avatar, tracks the user's action track in the virtual scene, divides the virtual space into bit planes according to the user track, and regulates the dynamic and static states of the data of the virtual space according to the bit planes; it analyses the differences between corresponding positions of the virtual scene and the real scene on the dynamic-data bit plane and calculates the data update priority according to the regional differences; the self-adaptive updating module provisionally retains the original data of the virtual scene and, according to the update priority of the difference areas, performs data replacement updating on the difference areas;
the live-action data acquisition module is connected with the virtual scene construction module; the virtual scene construction module is connected with the difference data determination module; the difference data determining module is connected with the adaptive updating module.
The live-action data acquisition module comprises a multi-angle data acquisition unit and a data encapsulation transmission unit; the multi-angle data acquisition unit determines the region range of data to be acquired in the real scene, divides the determined region into sub-regions, and acquires multi-angle data of each sub-region; for the multi-view data acquisition, unmanned aerial vehicle equipment performs on-site data acquisition of the live scene along a time line at different heights and angles; a cruising intelligent robot simulates a real user's viewing angle to plan routes and collects scene data of different sub-regions at different times; and camera devices are erected to capture real-time scene images in the different sub-regions; the data collected by the unmanned aerial vehicle and the intelligent robot comprise real scene image data and environment data; the environment data comprise real scene temperature data, humidity data, wind speed and wind direction data and air pressure data; the data encapsulation transmission unit integrates the collected data, packages the integrated data with the corresponding time and sub-region numbers and sends them to the back-end data center; the sub-regions are numbered after the real scene region of data acquisition is determined and divided into sub-regions.
The virtual scene construction module comprises a data preprocessing unit, an image data strengthening unit and a virtual scene construction unit; the data preprocessing unit slices the data independently by time and region to obtain image data and environment data on different time lines corresponding to each region of the real scene; the slice data is cleaned by screening out invalid data and filling in missing data, and the processed image data is scaled and expanded so that the image data of all sub-regions at the same time point are arranged on a normalized plane; the image data strengthening unit, within the plane, uses the region number as the first-level division marker to perform normalized division of the image data of each sub-region; the time node data of the images are used as the second-level division marker to cluster the image data of the same time point within the corresponding sub-region; the pixel point data of the clustered images are located and each pixel point's data is extracted; the pixel point data are color data and brightness data; pixel blurring processing is performed on the pixel points of the real scene object images in a single image using the pixel point color data and brightness data; boundary brightness enhancement is applied to the object pixel points of the blurred image to obtain the occupancy contour of each object in the image data; contour matching is performed on the processed image data of the same time point and sub-region, and the matched images are spliced to obtain more complete image data; the operation is repeated until the image data of the sub-region at the same time point is completely constructed; the virtual scene construction unit takes the restored two-dimensional image data as the reference object and, with the scaling used during image data acquisition as the restoration scaling factor, fills the three-dimensional space scene in the virtual space by mapping; in the mapped scene space, the environment data corresponding to the time point and position are blended into the restored scene region and its moment, realizing virtual parallel-space mapping of the real scene at a given position and moment; the virtual mapping scenes of multiple time points are linked along the corresponding time line, thereby completing the construction of the virtual scene.
The difference data determining module comprises a user track tracking unit, a dynamic and static data regulation and control unit and a data difference determining unit; when the user logs into the virtual scene, the user track tracking unit acquires the user's physical data after granting the user permission, and constructs the user's avatar in the virtual scene through twin technology according to the user's physical data; the behavior track of the user in the virtual space is captured through the sensors of the devices in the VR equipment worn by the user; the user behavior track comprises the user's head motion track, hand motion track, leg motion track and trunk motion track; as the user's dynamic state changes, the dynamic and static data regulation and control unit dynamically divides the virtual space into a front bit plane and a back bit plane according to the virtual scene range observed by the user, which changes with the user's viewing angle in the virtual space; the observable surface that the user's face is oriented towards is taken as the front bit plane in the virtual space, and the non-observable surface that the user's back is oriented towards is taken as the back bit plane in the virtual space; according to the user's dynamic motion range and observation range, dynamic real-scene mapping simulation is performed on the front bit-plane area and the back bit-plane area that the user can reach, and static real-scene data dormancy storage is performed on the back bit-plane area that the user can neither reach nor observe; the data difference determining unit constructs a data comparison model between the real scene data and the virtual scene data, analyses through model calculation the difference point data, difference area data and influence degree of the difference areas between the two scenes, and orders the correction priorities of the difference areas.
The self-adaptive updating module comprises a data updating control unit and a data restoring unit; according to the priority ordering of the difference positions to be corrected in the virtual scene, the data updating control unit retrieves the data at the corresponding positions in the real scene and maps them into the virtual scene as replacements; after the virtual scene data is updated, the data restoring unit retains the original pre-update virtual scene data within the user's current virtual scene experience, so that the user can restore the data.
In an embodiment:
A 'VR + travel' project needs to construct a cloud virtual reality scene of a scenic spot and to keep its data synchronously updated and maintained; the scheme of the present application is adopted as follows:
Firstly, the scenic spot is divided into subareas and regional data is collected: image data and environment data in each subarea are collected through unmanned aerial vehicles, intelligent robots and sensor devices; after the acquired data are integrated, they are packaged together with the corresponding time and subarea numbers and sent to the back-end data center;
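A minimal sketch of this packaging step, assuming the acquisition devices report plain Python dictionaries; the field names and the transport stub are placeholders, not part of this application.

```python
import json
import time

def package_collected_data(sub_region_id, image_frames, env_readings):
    """Bundle the integrated acquisition data with its time stamp and subarea number."""
    return {
        "sub_region": sub_region_id,
        "timestamp": time.time(),
        "images": image_frames,        # e.g. file paths or encoded frames
        "environment": env_readings,   # temperature, humidity, wind speed/direction, air pressure
    }

def send_to_data_center(package, endpoint="backend-data-center"):
    """Transport stub; a real system might POST this payload to the back-end data center."""
    payload = json.dumps(package, default=str)
    print(f"sending {len(payload)} bytes for sub-region {package['sub_region']} to {endpoint}")
```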
The data center receives the packaged data and independently slices it by time and by area to obtain the image data and environment data of each real-scene area on its different time lines; the slice data is cleaned, with abnormal entries screened out and missing entries filled in, and the processed image data is adjusted and unfolded so that the image data of all the subareas at the same time point is arranged on one normalization plane; the image data of each subarea is divided taking the area number as the first-level dividing mark; the image data of the same time point within the corresponding subarea is clustered taking the time node of the image as the second-level dividing mark; the pixel point data of the clustered images is located and extracted; pixel blurring is applied to the pixels of the real-scene object image in each single image using the pixel color and brightness data; object boundary brightness enhancement is applied to the blurred image to obtain the occupancy contours of the objects in the image data; contour matching is performed on the processed image data of the same time point and subarea, and the matched images are stitched to obtain more complete image data; the operation is repeated until the image data of the subareas at the same time point is completely constructed; taking the restored two-dimensional image data as the reference object and the scale of image data acquisition as the restoration scaling factor, the three-dimensional space scene is filled into the virtual space by mapping; in the mapped scene space, the environment data corresponding to the time point is blended in for the restored scene area and its moment, realizing a virtual parallel-space mapping of the real scene at a given position and moment; the virtual mapped scenes of multiple time points are threaded along the corresponding time line to build the virtual scene;
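The slicing and normalization-plane arrangement can be pictured as grouping the unpacked records by subarea (first-level mark) and time point (second-level mark); a minimal sketch under that assumption follows, with the record fields matching the packaging sketch above.

```python
from collections import defaultdict

def slice_by_region_and_time(packages):
    """Independently slice packaged records into {region: {time_point: [records]}}."""
    plane = defaultdict(lambda: defaultdict(list))
    for pkg in packages:
        region = pkg["sub_region"]               # first-level dividing mark
        time_point = round(pkg["timestamp"])     # second-level dividing mark (granularity assumed)
        plane[region][time_point].append(pkg)
    return plane

def records_at(plane, time_point):
    """Collect the image data of all subareas at one time point (the 'normalization plane')."""
    return {region: times.get(time_point, []) for region, times in plane.items()}
```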
When a user logs into the virtual scene, the user's physical data is acquired after authority access is performed for the user, and the user's avatar is constructed in the virtual scene through twin technology according to that physical data; the behavior track of the user in the virtual space is captured through the sensors of the VR equipment worn by the user; capturing that the current user's head rotates horizontally to the left by an angle β, the dynamic front-face horizontal visual range in the virtual space becomes [0 − β, π − β], while the dynamic front-face vertical visual range is unchanged; here the calculation takes the horizontal visual range directly in front of a normal person as π, with the leftmost visual value being 0, and measures the vertical visual range upward from a lowest value of 0, so that the conventional front-face horizontal range is [0, π] and the vertical range keeps its initial interval; the dynamic back-face horizontal visual range in the virtual space correspondingly becomes [π − β, 2π − β], and the dynamic back-face vertical visual range likewise keeps its initial interval; dynamic real-scene mapping simulation is performed on the space data of the front face in the virtual space, together with the back-face space within 2 meters behind the user (up to the vertical plane parallel to the one in which the user stands), and static real-scene data dormancy storage is performed on the remaining back-face space data;
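A sketch of this view-range adjustment, following the rule (stated in the claims below) that a horizontal left rotation subtracts the rotation angle from both ends of the horizontal intervals while a right rotation adds it; the initial intervals and the absence of angle wrapping are assumptions.

```python
import math

def adjust_view_ranges(beta, front_h=(0.0, math.pi), back_h=(math.pi, 2 * math.pi), direction="left"):
    """Shift the front-face and back-face horizontal view intervals after a horizontal
    head rotation of beta radians (left = subtract, right = add)."""
    shift = -beta if direction == "left" else beta
    new_front = (front_h[0] + shift, front_h[1] + shift)
    new_back = (back_h[0] + shift, back_h[1] + shift)
    return new_front, new_back

# Example: a left rotation of pi/6 moves the front face to [-pi/6, 5*pi/6]
# and the back face to [5*pi/6, 11*pi/6]; angles may be wrapped modulo 2*pi if desired.
front, back = adjust_view_ranges(math.pi / 6)
```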
A comparison model is built in the virtual space and the difference areas are analyzed; difference analysis is carried out between one point (2, 4, 7) in the simulation space and the corresponding point in the real scene: the data trees at the same coordinates are superposed in a parallel mapping mode and the difference value λ between the two data trees is calculated as λ = P_vs / P_rs, giving a result of 0.8; since λ < 1 − γ, where γ is 0.1, it is judged that the environment data at the current coordinate point in the virtual scene differs from the environment data in the real scene; peripheral spherical radiation is performed from the current coordinate point, taking 2 cubic meters as the unit radiation volume; the number of coordinate points differing between the virtual scene and the real scene in the unit radiation sphere is counted, and the difference occupation ratio k in the unit radiation sphere is calculated as k = m / M, giving a result of 0.1; the difference data of the two unit radiation spheres 1 and 2 lying in the front-face space and the dynamic back-face space of the virtual space are analyzed, and the comprehensive difference degree η of each unit radiation sphere is calculated from k and the set of difference values λ(m) within the sphere, giving 0.09 for sphere 1 and 0.17 for sphere 2; the distances from the center point of each unit radiation sphere to the coordinate base point are collected as 4 for sphere 1 and 2 for sphere 2, and taking q = 1/z as the distance influence coefficient, the data difference influence degree D of each unit radiation sphere is calculated as D = q × η, giving 0.025 for sphere 1 and 0.085 for sphere 2; since the difference influence degree of sphere 2 is greater than that of sphere 1, the update operation for sphere 2 is performed first; the data at the corresponding position in the real scene is retrieved, used as replacement data and mapped into the virtual scene, and after the virtual scene data is updated, the original data of the virtual scene before the update is retained in the user's current virtual scene experience, so that the user can restore the data.
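The worked numbers above can be reproduced with a short sketch. λ = P_vs / P_rs and k = m / M follow the comparison-model definitions given in the claims; the distance influence coefficient is taken here as q = 1/z, an assumption that matches the stated result for sphere 2 (0.17 / 2 = 0.085); the comprehensive difference degree η is supplied directly as the values stated in the embodiment rather than recomputed.

```python
def difference_value(p_vs, p_rs):
    """lambda: ratio of the data-tree space volumes at the same coordinate point."""
    return p_vs / p_rs

def is_different(lam, gamma=0.1):
    """A coordinate point counts as differing when lambda falls outside [1 - gamma, 1 + gamma]."""
    return lam > 1 + gamma or lam < 1 - gamma

def difference_ratio(m_diff, m_total):
    """k: share of differing coordinate points inside one unit radiation sphere."""
    return m_diff / m_total

def influence_degree(eta, z):
    """D = q * eta with the distance influence coefficient assumed as q = 1/z."""
    return eta / z

lam = difference_value(p_vs=0.8, p_rs=1.0)     # illustrative volumes chosen so that lambda = 0.8
assert is_different(lam)                       # 0.8 < 1 - 0.1, so the point differs

spheres = {1: {"eta": 0.09, "z": 4}, 2: {"eta": 0.17, "z": 2}}
scores = {sid: influence_degree(s["eta"], s["z"]) for sid, s in spheres.items()}
update_order = sorted(scores, key=scores.get, reverse=True)   # sphere 2 is updated first
```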
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: the foregoing is only a preferred embodiment of the present invention and is not intended to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of their technical features; any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A space arrangement method based on virtual reality is characterized in that: the method comprises the following steps:
S100, image data acquisition and dynamic environment data acquisition are carried out on the live-action site in advance by monitoring devices arranged at the live-action site, and the data are packaged and transmitted to the back-end data center;
S200, the data center performs unpacking extraction and data information detection on received data, performs pixel information plane extraction and image boundary pixel reinforcement on detected image data, performs splicing and plane scene restoration on multi-pixel information, and performs space projection on plane scene information and environment data to build a virtual reality scene;
S300, in the virtual scene, performing virtual scene mapping of the user posture data, tracking the motion trail of the user in the virtual scene, automatically regulating and controlling the dynamic and static states of the data in the virtual environment by mapping the real-scene environment in parallel, constructing a comparison model, dynamically comparing the real-scene data with the virtual scene data through the model, and making a self-adaptive updating plan according to the comparison result;
S400, performing self-adaptive data regulation and control on the virtual scene through data updating output, prompting a user through intelligent regulation and control data, and completing virtual scene updating according to a user instruction.
2. The virtual reality-based spatial arrangement method of claim 1, characterized by: in S100, the specific steps of performing image data acquisition and dynamic environment data acquisition on the live-action land in advance in the live-action land device monitoring device, and packaging and transmitting the data to the rear-end data center are as follows:
S101, dividing a determined region by determining a region range of data to be acquired in a real scene, and acquiring multi-view data of each region; the multi-view data acquisition performs time line on-site data acquisition on live scenes at different heights and angles through unmanned aerial vehicle equipment; simulating a real user visual angle through the cruising intelligent robot to conduct route planning, and collecting scene data in different time of different subareas; image acquisition is carried out on real-time scene data in different subareas by erecting a camera device; the unmanned aerial vehicle and intelligent robot acquisition data comprise real scene image data and environment data; the environment data comprise real scene temperature data, humidity data, wind speed and wind direction data and air pressure data;
S102, after integrating the acquired data, packaging the integrated data with corresponding time and sub-region numbers and sending the integrated data to a back-end data center; and numbering the subareas after determining the real scene area of data acquisition and dividing the subareas.
3. The virtual reality-based spatial arrangement method of claim 2, characterized by: the step S200 is that the data center performs unpacking extraction and data information detection on the received data, performs pixel information plane extraction and image boundary pixel reinforcement on the detected image data, performs splicing and planar scene restoration on multi-pixel information, and performs space projection on planar scene information combined with environmental data to build a virtual reality scene, wherein the specific steps are as follows:
S201, the data center receives the packaged data and independently slices it by time and by area to obtain the image data and environment data of each real-scene area on its different time lines; the slice data is cleaned, with abnormal entries screened out and missing entries filled in, and the processed image data is adjusted and unfolded so that the image data of all the subareas at the same time point is arranged on one normalization plane;
S202, in a plane, taking a region number as a first-level dividing layer mark, and carrying out normalization division on image data of each sub-region; clustering the image data of the same time point in the corresponding subarea by taking the time node data of the image as a secondary dividing hierarchy mark; positioning image pixel point data of the clustered image data, and extracting each pixel point data; the pixel point data are color data and brightness data;
S203, applying pixel blurring to the pixels of the real-scene object image in each single image using the pixel color and brightness data; applying object boundary brightness enhancement to the blurred image to obtain the occupancy contours of the objects in the image data; performing contour matching on the processed image data of the same time point and subarea, and stitching the matched images to obtain more complete image data; repeating the operation until the image data of the subareas at the same time point is completely constructed;
S204, taking the restored two-dimensional image data as the reference object and the scaling of image data acquisition as the restoration scaling factor, filling the three-dimensional space scene into the virtual space by mapping; in the mapped scene space, blending in the environment data corresponding to the time point for the restored scene area and its moment, realizing a virtual parallel-space mapping of the real scene at a given position and moment; and threading the virtual mapped scenes of the multiple time points along the corresponding time line, so as to realize the construction of the virtual scene.
4. A virtual reality-based spatial arrangement method according to claim 3, characterized by: in the step S300, in the virtual scene, virtual scene mapping is performed on the user posture data, the motion trail of the user in the virtual scene is tracked, the dynamic and static states of the data in the virtual environment are automatically regulated and controlled by parallel mapping of the real scene environment, a comparison model is constructed, dynamic comparison is performed on the real scene data and the virtual scene data through the model, and a self-adaptive update plan is formulated according to the comparison result:
S301, when a user logs in a virtual scene, acquiring physical data of the user after authority access is performed on the user, and constructing an avatar of the user in the virtual scene through a twin technology according to the physical data of the user;
S302, capturing a behavior track of the user in the virtual space through the sensors of the VR equipment worn by the user; the user behavior track comprises a head motion track, a hand motion track, a leg motion track and a trunk motion track; the observable surface that the user's face is oriented towards serves as the front face in the virtual space, with an initial user horizontal viewing angle range of [θ1, θ2] and an initial user vertical viewing angle range of [θ3, θ4]; the corresponding non-observable surface behind the user's back serves as the back face in the virtual space, with an initial user horizontal viewing angle range of [α1, α2] and an initial user vertical viewing angle range of [α3, α4]; capturing the track of the user's head, and if the rotation angle of the user's head is captured as β, calculating the rotation components βh and βv of the head rotation in the horizontal and vertical directions respectively as βh = β·sin β and βv = β·cos β; wherein θ1 is the starting angle value of the initial horizontal view of the user front face, θ2 is the ending angle value of the initial horizontal view of the user front face, θ3 is the starting angle value of the initial vertical view of the user front face, θ4 is the ending angle value of the initial vertical view of the user front face, α1 is the starting angle value of the initial horizontal view of the user back face, α2 is the ending angle value of the initial horizontal view of the user back face, α3 is the starting angle value of the initial vertical view of the user back face, and α4 is the ending angle value of the initial vertical view of the user back face; the front viewing angle range and the back viewing angle range of the user in the virtual space are dynamically adjusted according to the rotation of the user's head; if the user's head rotates only in the horizontal direction, only the horizontal viewing angle ranges of the front and back viewing ranges are adjusted, the results being [θ1 ± β, θ2 ± β] and [α1 ± β, α2 ± β]; the angle is subtracted when the head rotates horizontally to the left, and added when it rotates horizontally to the right; if the user's head rotates only in the vertical direction, only the vertical viewing angle ranges of the front and back viewing ranges are adjusted, the results being [θ3 ± β, θ4 ± β] and [α3 ± β, α4 ± β]; the angle is subtracted when the head rotates vertically downward, and added when it rotates vertically upward; if the user's head rotates in both the horizontal and vertical directions, the horizontal and vertical viewing angle ranges of both the front and back viewing ranges are adjusted, the results being [θ1 ± βh, θ2 ± βh], [θ3 ± βv, θ4 ± βv] and [α1 ± βh, α2 ± βh], [α3 ± βv, α4 ± βv]; when the user's head rotates obliquely up and to the left, the horizontal viewing angle of each face is adjusted by subtraction and the vertical viewing angle by addition; when the head rotates obliquely down and to the left, the horizontal viewing angle is adjusted by subtraction and the vertical viewing angle by subtraction; when the head rotates obliquely up and to the right, the horizontal viewing angle is adjusted by addition and the vertical viewing angle by addition; when the head rotates obliquely down and to the right, the horizontal viewing angle is adjusted by addition and the vertical viewing angle by subtraction;
S303, automatically adjusting the face ranges according to the motion trail of the user, performing dynamic real-scene mapping simulation on the space data of the front face in the virtual space, and performing static real-scene data dormancy storage on the space data of the back face in the virtual space; the data dormancy space is the back-face space lying beyond a vertical plane that is parallel to, and at a distance s behind, the vertical plane in which the user stands; wherein s is the distance from the end of the hand to the sole of the foot when the user's arm and leg are extended in a line;
S304, constructing a dynamic space coordinate system of the virtual space by taking the position of the eyes of the user as a space coordinate base point in the virtual space; acquiring coordinate data of each object data point of the virtual scene mapped in the virtual space by using a space coordinate system; storing object data and environment data at positions corresponding to real scenes at all coordinate points of a space in a data tree converging mode; the object data is data of an object in a corresponding real scene after image blurring and strengthening processing, and comprises color data, brightness data and contour data of the object;
When the difference analysis is carried out between the scene and the real scene in the virtual space, the dynamic comparison analysis is carried out on the virtual scene and the real scene by constructing a comparison model, wherein the model construction steps are as follows:
S304-1, performing difference analysis by retrieving the data at the same mapping space position, and performing normalization and conversion on the object data and environment data at that position; superposing the data trees at the same coordinates in a parallel mapping mode, and calculating a difference value λ between the two data trees as λ = P_vs / P_rs, wherein P_vs is the space volume of the data tree at the coordinate point in the virtual space, and P_rs is the space volume of the data tree at the coordinate point in the real scene; if λ > 1 + γ or λ < 1 − γ, judging that a difference exists between the environment data at the current coordinate point in the virtual scene and the environment data in the real scene; wherein γ is an error correction value;
S304-2, performing peripheral spherical radiation by using a coordinate point, taking V as a unit radiation volume, counting the number of coordinate points with differences between the virtual scene and the real scene in the unit radiation sphere, and calculating a difference occupation ratio k in the unit radiation sphere, wherein the calculation formula is k = m / M, wherein m is the number of differing coordinate points in the unit radiating sphere, and M is the total number of coordinate points in the unit radiating sphere;
S304-3, calculating the comprehensive difference degree η of each unit radiating sphere by analyzing the difference data of the two or more unit radiating spheres in the front-face space and the dynamic back-face space of the virtual space, the calculation combining the difference occupation ratio k with λ(m), wherein λ(m) is the set of difference values of the differing coordinate points in the unit radiation sphere; collecting the distance z from the central point of each unit radiating sphere to the coordinate base point, and taking q = 1/z as the distance influence coefficient, calculating the data difference influence degree D of each unit radiation sphere, wherein the calculation formula is D = q × η;
S304-4, sorting the numerical values of the data difference degree results of the unit radiant balls, and setting a correction and update sequence of the difference data of the corresponding unit radiant ball positions in the virtual space according to the sequence from large to small.
5. The virtual reality-based spatial arrangement method of claim 4, characterized by: in the step S400, the specific steps of performing self-adaptive data regulation and control on the virtual scene through data updating output, intelligently prompting the regulation and control data to the user, and completing the virtual scene update according to the user instruction are as follows:
S401, according to the priority ordering of the difference positions to be corrected in the virtual scene, retrieving the data at the corresponding positions in the real scene, using it as replacement data and mapping it into the virtual scene;
S402, after the virtual scene data is updated, retaining the original data of the virtual scene before the update in the current virtual scene experience of the user, and allowing the user to restore the data.
6. A virtual reality-based spatial arrangement system, characterized by: the space arrangement system based on the virtual reality comprises a live-action data acquisition module, a virtual scene construction module, a difference data determination module and a self-adaptive updating module;
The live-action data acquisition module acquires image data and environment data in the live scene using data acquisition devices at the front end; the virtual scene construction module preprocesses the acquired data, applies image blurring and color strengthening to the processed image data, highlights the strengthened color boundary pixels to obtain object contours, re-matches the contour data to construct two-dimensional plane scene data, and maps the two-dimensional scene data in parallel into the virtual space to complete virtual scene construction; the difference data determining module projects the user's virtual image, tracks the user's action track in the virtual scene, distinguishes the front and back faces of the virtual space according to the user track, and regulates the dynamic and static states of the data of the virtual space by face; it analyzes the differences between corresponding positions of the virtual scene and the real scene on the dynamic data face and calculates the data updating priority from the regional differences; the self-adaptive updating module temporarily retains the original virtual scene data according to the updating priority of the difference areas and performs data replacement updating on those areas;
the live-action data acquisition module is connected with the virtual scene construction module; the virtual scene construction module is connected with the difference data determination module; the difference data determining module is connected with the adaptive updating module.
7. A virtual reality based spatial arrangement system according to claim 6, characterized in that: the live-action data acquisition module comprises a multi-angle data acquisition unit and a data encapsulation transmission unit; the multi-angle data acquisition unit determines the area range in which data is to be acquired in the real scene, divides the determined area into subareas, and acquires multi-angle data of each subarea; the multi-angle data acquisition performs time-line on-site data acquisition on the live scene at different heights and angles through unmanned aerial vehicle equipment, plans routes by simulating a real user's viewing angle through a cruising intelligent robot to collect scene data of different subareas at different times, and acquires real-time scene images in the different subareas through fixed camera devices; the data acquired by the unmanned aerial vehicles and intelligent robots comprise real-scene image data and environment data; the environment data comprise real-scene temperature data, humidity data, wind speed and wind direction data and air pressure data; the data encapsulation transmission unit integrates the collected data, then packages the integrated data together with the corresponding time and subarea numbers and sends it to the back-end data center; the subareas are numbered after the real-scene area for data acquisition is determined and divided.
8. A virtual reality based spatial arrangement system according to claim 7, characterized in that: the virtual scene construction module comprises a data preprocessing unit, an image data strengthening unit and a virtual scene construction unit; the data preprocessing unit independently slices the data by time and by area to obtain the image data and environment data of each real-scene area on its different time lines; the slice data is cleaned, with abnormal entries screened out and missing entries filled in, and the processed image data is adjusted and unfolded so that the image data of all the subareas at the same time point is arranged on one normalization plane; the image data strengthening unit divides the image data of each subarea within that plane, taking the area number as the first-level dividing mark; the image data of the same time point within the corresponding subarea is clustered, taking the time node of the image as the second-level dividing mark; the pixel point data of the clustered images is located and extracted, the pixel point data being color data and brightness data; pixel blurring is applied to the pixels of the real-scene object image in each single image using the pixel color and brightness data; object boundary brightness enhancement is applied to the blurred image to obtain the occupancy contours of the objects in the image data; contour matching is performed on the processed image data of the same time point and subarea, and the matched images are stitched to obtain more complete image data; the operation is repeated until the image data of the subareas at the same time point is completely constructed; the virtual scene construction unit takes the restored two-dimensional image data as the reference object and, using the scale of image data acquisition as the restoration scaling factor, fills the three-dimensional space scene into the virtual space by mapping; in the mapped scene space, the environment data corresponding to the time point is blended in for the restored scene area and its moment, realizing a virtual parallel-space mapping of the real scene at a given position and moment; the virtual mapped scenes of multiple time points are threaded along the corresponding time line, realizing the construction of the virtual scene.
9. A virtual reality based spatial arrangement system according to claim 8, characterized in that: the difference data determining module comprises a user track tracking unit, a dynamic and static data regulating and controlling unit and a data difference determining unit; when the user logs into the virtual scene, the user track tracking unit acquires the user's physical state data after performing authority access for the user, and constructs the user's virtual character image in the virtual scene through twin technology according to the physical state data; the behavior track of the user in the virtual space is captured through the sensors of the VR equipment worn by the user; the user behavior track comprises a head motion track, a hand motion track, a leg motion track and a trunk motion track; the dynamic and static data regulating unit dynamically divides the virtual space into a front face and a back face according to the virtual scene range interval observed by the user, which changes as the user's state and viewing angle change; the observable surface that the user's face is oriented towards is taken as the front face in the virtual space, and the non-observable surface behind the user's back is taken as the back face in the virtual space; according to the user's dynamic motion range and observation range, dynamic real-scene mapping simulation is carried out on the front-face areas and the back-face areas that the user can reach, and static real-scene data dormancy storage is carried out on the back-face areas that the user can neither reach nor observe; the data difference determining unit constructs a data comparison model between the real-scene data and the virtual-scene data, analyzes through model calculation the difference point data, difference area data and influence degree of the difference areas between the two scenes, and sorts the correction priorities of the difference areas accordingly.
10. A virtual reality based spatial arrangement system according to claim 9, characterized in that: the self-adaptive updating module comprises a data updating control unit and a data restoring unit; according to the priority ordering of the difference positions to be corrected in the virtual scene, the data updating control unit retrieves the data at the corresponding positions in the real scene, uses it as replacement data and maps it into the virtual scene; after the data restoring unit updates the virtual scene data, the original data of the virtual scene before the update is retained in the current virtual scene experience of the user, allowing the user to restore the data.