CN115661417B - Virtual world scene generation method and system in meta-space - Google Patents

Virtual world scene generation method and system in meta-space

Info

Publication number
CN115661417B
Authority
CN
China
Prior art keywords
scene
model
transition
pixel
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211592134.8A
Other languages
Chinese (zh)
Other versions
CN115661417A (en)
Inventor
李方悦
颜佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aoya Design Inc
Original Assignee
Shenzhen Aoya Design Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Aoya Design Inc filed Critical Shenzhen Aoya Design Inc
Priority to CN202211592134.8A priority Critical patent/CN115661417B/en
Publication of CN115661417A publication Critical patent/CN115661417A/en
Application granted granted Critical
Publication of CN115661417B publication Critical patent/CN115661417B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical field of virtual reality and provides a method and a system for generating virtual world scenes in a meta-space. Scene models are arranged in their splicing order to obtain a scene sequence; the edge adaptation degree between each scene model and its adjacent scene models in the scene sequence is calculated in turn; scene models that require edge transition are marked as transition models according to the edge adaptation degree; edge transition is performed on each transition model in the scene sequence; and all three-dimensional models in the scene sequence are spliced in order. The method highlights the subtle differences at scene seams in a virtual reality environment, improves the user's immersive experience and, through edge transition, reliably eliminates sudden exposure and abrupt color changes when the user approaches a seam position in the metaverse virtual scene.

Description

Virtual world scene generation method and system in meta-space
Technical Field
The invention belongs to the technical field of virtual reality, and particularly relates to a method and a system for generating a virtual world scene in a meta-space.
Background
At present, the metaverse is a collective virtual shared space. Although the virtual world scenes in the metaverse are partly images of the real world generated with digital twin technology, there are also purely virtual images. The scenes of a large metaverse space come from different space servers and are then spliced together; if the difference at the seam between adjacent scenes is too large, switching between virtual world scenes feels abrupt and the user loses the sense of real immersion. Many of the scenes on the individual space servers are developed independently, and differences in development environment and scene source inevitably lead to large differences between the scenes of the space servers. For example, the scene of one space server may be obtained by three-dimensional reconstruction while the scene of another is obtained by three-dimensional scanning or three-dimensional modeling, so differences between the two are unavoidable. When the user passes through the seam between the two scenes, the overall color of the scene model may change suddenly and then suddenly change back, and other obvious incongruities appear near the seam in virtual reality. A weak matching relationship between scenes therefore severely damages the user's immersive experience.
Disclosure of Invention
The invention aims to provide a method and a system for generating a virtual world scene in a meta-space, which are used for solving one or more technical problems in the prior art and at least providing a useful alternative or creating favorable conditions.
In order to achieve the above object, according to an aspect of the present invention, there is provided a method for generating a virtual world scene in a meta-space, the method including the following steps:
s100, acquiring three-dimensional models of different scenes as scene models, and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
s200, sequentially calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence;
s300, marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
s400, performing edge transition on each transition model in the scene sequence;
and S500, sequentially splicing all three-dimensional models in the scene sequence to obtain a virtual world scene.
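Taken together, steps S100 to S500 describe one pass over an ordered scene sequence. The following minimal Python sketch only illustrates that control flow; the SceneModel structure, the gray-value edge arrays and the callables edge_fitness, edge_transition and stitch are illustrative assumptions standing in for the concrete procedures detailed in the sections below, not part of the original disclosure.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SceneModel:
    # Illustrative stand-in for one scene's three-dimensional model; the two
    # arrays hold gray values sampled along the splicing edge lines L(i) and
    # R(i) shared with the previous / next model (an assumption of this sketch).
    name: str
    left_edge: np.ndarray
    right_edge: np.ndarray
    is_transition: bool = False

def generate_virtual_world_scene(scene_sequence, edge_fitness, edge_transition, stitch):
    # S100: scene_sequence is assumed to be ordered by the splicing sequence
    # and to contain at least three scene models (indices are 0-based here).
    # S200: edge adaptation degree of every interior scene model.
    suit = {i: edge_fitness(scene_sequence, i)
            for i in range(1, len(scene_sequence) - 1)}
    # S300: models whose adaptation degree falls below the mean become transition models.
    threshold = float(np.mean(list(suit.values())))
    for i, s in suit.items():
        scene_sequence[i].is_transition = s < threshold
    # S400: perform edge transition only on the marked transition models.
    for i, model in enumerate(scene_sequence):
        if model.is_transition:
            edge_transition(scene_sequence, i)
    # S500: splice all models in sequence into one virtual world scene.
    return stitch(scene_sequence)
```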
Preferably, the virtual world scene is output to a virtual reality headset for display.
Further, in S100, the three-dimensional models of the different scenes are: taking a three-dimensional model of a scene obtained by three-dimensional modeling or three-dimensional scanning as a scene model, or taking a three-dimensional model obtained by photographing the scene and performing three-dimensional reconstruction as the scene model; the scene is a building, tree, vehicle and/or geographical environment of a preset area.
Wherein the three-dimensional models of the different scenes originate from different metaverse servers.
Preferably, the hardware of the metaverse server is a heterogeneous server. The software of the metaverse server comprises simulation SDKs for structure, perception and control simulation, and SDKs for rendering, real-time ray tracing and AI denoising; different SDKs are combined in a modular way through the Kit function of the SDKs, so that customized Apps or microservices can be developed rapidly; Create is used for modeling and rendering, and View for visualization. The software of the metaverse server further comprises the database and collaboration engine NUCLEUS, and the modeling-tool interconnection plug-in CONNECT, which supports interconnection with software such as 3DS MAX, UE and MAYA installed on each client connected to the metaverse server.
Further, in S100, the splicing sequence of the scene models is the order in which the scenes are spliced with one another into the virtual scene, starting from the first acquired scene model.
Preferably, in S100, the splicing sequence of the scene models is the chronological order in which the scene models were acquired.
Further, in S200, the method for calculating the edge adaptation degree between the scene model and the adjacent scene model includes the following steps:
recording the number of scene models in the scene sequence as N; taking i as the serial number of a scene model in the scene sequence, with i ∈ [2, N-1]; taking L(i) as the splicing edge line between the i-th scene model and the (i-1)-th scene model; taking R(i) as the splicing edge line between the i-th scene model and the (i+1)-th scene model; the splicing edge line is the common edge line after the two scene models are merged (that is, the edge line at the position where the two scene models are to be merged or joined);
in the value range of i, sequentially calculating the edge adaptation degree Suit (i) between the ith scene model and the adjacent scene model as follows:
[Formula for Suit(i), shown only as an image in the original: it combines, via the natural logarithm ln, the pixel brightness differences Mart(i, j) defined below with the Meang mean gray values of the splicing edge lines.]
j is a variable, and the Meang function is the mean of the gray values of all pixel points on a splicing edge line; ln denotes the natural logarithm; Mart(i, j) is the pixel brightness difference between the i-th scene model and the j-th scene model in the scene sequence, calculated as: Mart(i, j) = |MaxG(L(i)) - MaxG(L(j))| - |MaxG(L(i)) - MaxG(R(j))|;
the MaxG function is the maximum gray value of all pixel points on a splicing edge line; L(j) is the splicing edge line between the j-th scene model and the (j-1)-th scene model; and R(j) is the splicing edge line between the j-th scene model and the (j+1)-th scene model.
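The building blocks named above (MaxG, Meang, ln and Mart) can be written out directly. The closed form of Suit(i) itself appears only as an image in the source, so the way the sketch below combines these quantities is an assumption kept deliberately simple; edge lines are represented as arrays of gray values and indices are 0-based rather than the 1-based numbering used in the text.

```python
import numpy as np

def max_g(edge_line):
    # MaxG: maximum gray value over all pixel points on a splicing edge line
    return float(np.max(edge_line))

def mean_g(edge_line):
    # Meang: mean gray value over all pixel points on a splicing edge line
    return float(np.mean(edge_line))

def mart(L, R, i, j):
    # Mart(i, j) = |MaxG(L(i)) - MaxG(L(j))| - |MaxG(L(i)) - MaxG(R(j))|
    return abs(max_g(L[i]) - max_g(L[j])) - abs(max_g(L[i]) - max_g(R[j]))

def suit(L, R, i):
    # Suit(i): the exact closed form is only given as an image in the source,
    # so averaging ln(1 + |Mart(i, j)|) over the other interior models and
    # normalising by the Meang of model i's own edge lines is an assumed
    # combination of the quantities the text defines.
    n = len(L)
    terms = [np.log(1.0 + abs(mart(L, R, i, j)))
             for j in range(1, n - 1) if j != i]
    if not terms:
        return 0.0
    norm = 0.5 * (mean_g(L[i]) + mean_g(R[i])) or 1.0
    return float(np.mean(terms)) / norm
```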
The beneficial effects are as follows: the edge adaptation degree obtained from the pixel brightness difference calculation highlights how well the edge brightness of a scene model matches that of its adjacent scene models, exposes the subtle differences at scene seams in a virtual reality environment, and distinguishes the differing characteristics of the current scene model from the surrounding scene models according to the gray-level trend along their common edge lines.
Further, in S300, the method for marking the scene model requiring edge transition in each scene model as the transition model by the edge adaptation degree includes: taking the mean value of the edge adaptation degrees between all the scene models and the adjacent scene models as Suitmean; within the value range of i, when the edge adaptation degree Suit (i) < Suitmean between the ith scene model and the adjacent scene model, judging that the scene model needs edge transition and marking the scene model as a transition model.
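As a small worked example of S300, assuming Suit(i) has already been computed for the interior scene models, the marking rule reduces to a comparison against the mean Suitmean:

```python
import numpy as np

def mark_transition_models(suit_values):
    # S300: a scene model whose edge adaptation degree Suit(i) is strictly
    # below the mean Suitmean is marked as a transition model.
    suit_mean = float(np.mean(list(suit_values.values())))
    return [i for i, s in suit_values.items() if s < suit_mean]

# e.g. Suit(2)=0.4, Suit(3)=0.9, Suit(4)=0.5 -> Suitmean=0.6, models 2 and 4 marked
print(mark_transition_models({2: 0.4, 3: 0.9, 4: 0.5}))   # [2, 4]
```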
Further, in S400, the method for performing edge transition on each transition model in the scene sequence includes:
recording the number of transition models in the scene sequence as N2; taking k as the serial number of a transition model in the scene sequence, with k ∈ [2, N2-1]; and denoting the serial number of the scene model in the scene sequence that corresponds to the k-th transition model as k(i);
the method for screening the region to be transitioned of the transition model comprises the following specific steps:
taking L(k(i)) as the splicing edge line between the k-th transition model and the (k(i)-1)-th scene model, and R(k(i)) as the splicing edge line between the k-th transition model and the (k(i)+1)-th scene model; performing corner detection on L(k(i)) and R(k(i)) respectively to obtain corner points, taking the corner point with the largest gray value on L(k(i)) as MAX_Lk and the corner point with the largest gray value on R(k(i)) as MAX_Rk; taking the projection of the midpoint between MAX_Lk and MAX_Rk onto the k-th transition model as the depth point PJ(k), or taking the pixel point of the k-th transition model closest to that midpoint as the depth point PJ(k); taking the circumscribed circle of the triangle formed by connecting the three points MAX_Lk, MAX_Rk and PJ(k) as CYC1; and taking the region of the k-th transition model inside CYC1 as the region to be transitioned Trend(k) of the k-th transition model;
performing edge transition on the regions to be transitioned of each transition model, specifically:
marking CYC1(p) as the p-th pixel point in the region to be transitioned, where p is the serial number of the pixel point in that region, and traversing all CYC1(p) over the range of p to perform the edge transition, specifically: calculating the absolute value A(p) of the difference between the gray value of pixel point CYC1(p) and the gray value of MAX_Lk, and the absolute value B(p) of the difference between the gray value of pixel point CYC1(p) and the gray value of MAX_Rk; if A(p) > B(p), decreasing the pixel value of pixel point CYC1(p) by |MaxP(L(k(i))) - MaxP(R(k(i)))|, and otherwise increasing the pixel value of pixel point CYC1(p) by |MinP(L(k(i))) - MinP(R(k(i)))|;
the MaxP function is the maximum value of pixel values of all pixel points on the splicing edge line; the MinP function is the minimum value of the pixel values of all the pixel points on the splicing edge line.
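A sketch of the region-screening step for Trend(k) follows. Corner detection is not spelled out in the source and could, for example, be done with OpenCV's cv2.cornerHarris; the sketch starts from the two corner points MAX_Lk and MAX_Rk already found, and treating all points in 2D pixel coordinates is a simplification of the projection onto the three-dimensional transition model.

```python
import numpy as np

def circumcircle(a, b, c):
    # Circumscribed circle (centre, radius) of the triangle MAX_Lk, MAX_Rk, PJ(k),
    # with a, b, c given as 2D points.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("corner points are collinear")          # degenerate triangle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, float(np.linalg.norm(centre - np.asarray(a, dtype=float)))

def region_to_transition(pixel_coords, max_lk, max_rk, pj_k):
    # Trend(k): every pixel of the k-th transition model that falls inside CYC1,
    # the circumcircle of the triangle MAX_Lk / MAX_Rk / PJ(k).
    centre, radius = circumcircle(max_lk, max_rk, pj_k)
    pixel_coords = np.asarray(pixel_coords, dtype=float)
    inside = np.linalg.norm(pixel_coords - centre, axis=1) <= radius
    return pixel_coords[inside]                                   # coordinates in Trend(k)
```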
Performing edge transition on each transition model in the scene sequence greatly reduces the problem of excessive differences at the seams between metaverse scenes and improves the user's immersive experience; through pixel-level range fine-adjustment, the edge transition reliably eliminates sudden exposure and abrupt color changes when the user approaches a seam in the metaverse virtual scene. However, if the range of exposure and color mutation at the seam inside the region to be transitioned Trend(k) is too large, smaller ranges of exposure and color mutation may still appear when the user approaches the seam in the metaverse virtual scene. To eliminate this problem, the invention proposes the following preferred scheme for performing edge transition on the region to be transitioned of each transition model:
Preferably, as an alternative, the method for performing edge transition on the region to be transitioned of each transition model is specifically:
sequentially screening the region to be transitioned Trend(k-1) of the (k-1)-th transition model and the region to be transitioned Trend(k+1) of the (k+1)-th transition model;
marking CYC1(p) as the p-th pixel point in the region to be transitioned, where p is the serial number of the pixel point in that region, and traversing all CYC1(p) over the range of p to perform the edge transition, specifically: if the distance between pixel point CYC1(p) and MAX_Lk is smaller than the distance between pixel point CYC1(p) and MAX_Rk, marking MAX_Lk as the reference repair line LR, and otherwise marking MAX_Rk as the reference repair line LR; calculating the absolute value C(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k-1), and the absolute value D(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k+1); if C(p) > D(p), decreasing the pixel value of pixel point CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k+1))|, and otherwise increasing the pixel value of pixel point CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k-1))|;
the MeanP function is the average of the pixel values of all pixel points on a splicing edge line; the MeanL function is the average of the pixel values of all pixel points in a region to be transitioned. The above pixel values may be replaced with gray values.
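A sketch of this alternative scheme is given below. The source calls LR a "reference repair line" while marking it as one of the corner points MAX_Lk or MAX_Rk; the sketch interprets LR as the splicing edge line whose corner lies nearer to the pixel, and that interpretation, like the array-based data layout, is an assumption.

```python
import numpy as np

def alternative_edge_transition(region_pixels, coords, region_prev, region_next,
                                edge_L, edge_R, corner_L, corner_R):
    # region_pixels / coords: pixel values and coordinates of the points CYC1(p)
    # in Trend(k); region_prev / region_next: values of Trend(k-1) and Trend(k+1);
    # edge_L / edge_R: pixel values along L(k(i)) and R(k(i));
    # corner_L / corner_R: coordinates of MAX_Lk and MAX_Rk.
    coords = np.asarray(coords, dtype=float)
    corner_L = np.asarray(corner_L, dtype=float)
    corner_R = np.asarray(corner_R, dtype=float)
    out = np.asarray(region_pixels, dtype=float).copy()
    mean_L = float(np.mean(edge_L))                        # MeanP(L(k(i)))
    dec = abs(mean_L - float(np.mean(region_next)))        # |MeanP(L) - MeanL(Trend(k+1))|
    inc = abs(mean_L - float(np.mean(region_prev)))        # |MeanP(L) - MeanL(Trend(k-1))|
    for p in range(len(out)):
        nearer_left = (np.linalg.norm(coords[p] - corner_L)
                       < np.linalg.norm(coords[p] - corner_R))
        lr = edge_L if nearer_left else edge_R              # reference repair line LR (assumed)
        c_p = abs(float(np.max(lr)) - float(np.max(region_prev)))   # C(p)
        d_p = abs(float(np.max(lr)) - float(np.max(region_next)))   # D(p)
        out[p] = out[p] - dec if c_p > d_p else out[p] + inc
    return out
```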
The beneficial effects are that: the regions to be transitioned intelligently single out the salient regions where sudden exposure and color mutation appear near the seams of the virtual scene in the current scene sequence, so the regions to be transitioned are strongly correlated with the abnormal seam regions of the virtual scene. This scheme removes the smaller exposure and color mutations that remain when the exposure and color mutation range at the seam inside the region to be transitioned is large, greatly reduces the impact on the user's sense of immersion of ghosting caused by missing parts of the three-dimensional scene, and improves the visual immersion of seamless multi-scene switching.
Further, in S500, the method for sequentially splicing all three-dimensional models in the scene sequence to obtain the virtual world scene includes: and sequentially splicing all three-dimensional models in the scene sequence by any one of an APAP method, an SPHP method or a PT method to obtain the virtual world scene.
Preferably, in S500, the method for sequentially splicing all three-dimensional models in the scene sequence to obtain the virtual world scene is: splicing all three-dimensional models in the scene sequence in order by any one of the following to obtain the virtual world scene: the adaptive marker-free three-dimensional point cloud automatic splicing method of patent publication CN104392426B, the automatic registration algorithm for geometric data and texture data of a three-dimensional model of CN103049896B, the three-dimensional image splicing method, device and readable storage medium of CN109598677A, or the high-precision three-dimensional point cloud map automatic splicing and optimization method and system of CN114283250A.
The invention also provides a system for generating virtual world scenes in the meta-space, the system comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the steps of the above method for generating virtual world scenes in the meta-space are implemented. The system may run on computing devices such as desktop computers, notebook computers, palmtop computers and cloud data centers; the running system may include, but is not limited to, a processor, a memory and a server cluster. The processor executes the computer program to run the following units of the system:
the scene model acquisition unit is used for acquiring three-dimensional models of different scenes as scene models and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
the edge adaptation calculation unit is used for calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence in sequence;
the transition model marking unit is used for marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
the edge transition processing unit is used for carrying out edge transition on each transition model in the scene sequence;
and the virtual scene splicing unit is used for sequentially splicing all the three-dimensional models in the scene sequence to obtain a virtual world scene.
And the virtual scene display unit is used for outputting the virtual world scene to the virtual reality headset for display.
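A minimal sketch of how these units could be wired together is given below; the class and attribute names are illustrative assumptions, and each unit is reduced to a callable.

```python
class VirtualWorldSceneSystem:
    # Sketch of the unit decomposition listed above (names are illustrative).
    def __init__(self, acquire, edge_fitness, mark_transitions, edge_transition,
                 stitch, display):
        self.acquire = acquire                    # scene model acquisition unit
        self.edge_fitness = edge_fitness          # edge adaptation calculation unit
        self.mark_transitions = mark_transitions  # transition model marking unit
        self.edge_transition = edge_transition    # edge transition processing unit
        self.stitch = stitch                      # virtual scene splicing unit
        self.display = display                    # virtual scene display unit

    def run(self):
        scene_sequence = self.acquire()
        suit = self.edge_fitness(scene_sequence)
        for k in self.mark_transitions(suit):
            self.edge_transition(scene_sequence, k)
        virtual_world_scene = self.stitch(scene_sequence)
        self.display(virtual_world_scene)         # e.g. output to a VR headset
        return virtual_world_scene
```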
The beneficial effects of the invention are as follows: the invention provides a method and a system for generating virtual world scenes in the meta-space; the edge adaptation degree obtained from the pixel brightness difference calculation highlights how well the edge brightness of a scene model matches that of its adjacent scene models and exposes the subtle differences at scene seams in a virtual reality environment; the method greatly reduces the problem of excessive differences at the seams between metaverse scenes, improves the user's immersive experience, and through edge transition reliably eliminates sudden exposure and abrupt color changes when the user approaches a seam in the metaverse virtual scene.
Drawings
The above and other features of the present invention will become more apparent from the following detailed description of embodiments given with reference to the accompanying drawings, in which like reference numerals designate the same or similar elements. The drawings described below are merely examples of the present invention, and other drawings can be derived from them by those skilled in the art without inventive effort, wherein:
FIG. 1 is a flow chart of a method for generating a virtual world scene in a meta-space;
fig. 2 is a system structure diagram of a method for generating a virtual world scene in a meta-space.
Detailed Description
The conception, the specific structure and the technical effects of the present invention will be clearly and completely described in conjunction with the embodiments and the accompanying drawings to fully understand the objects, the schemes and the effects of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Fig. 1 is a flowchart of a method for generating a virtual world scene in a metaspace space, and the method for generating a virtual world scene in a metaspace space according to an embodiment of the present invention is described below with reference to fig. 1, and the method includes the following steps:
s100, acquiring three-dimensional models of different scenes as scene models, and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
s200, sequentially calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence;
s300, marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
s400, performing edge transition on each transition model in the scene sequence;
and S500, sequentially splicing all the three-dimensional models in the scene sequence to obtain the virtual world scene.
Preferably, the virtual world scene is output to a virtual reality headset for display.
Further, in S100, the three-dimensional models of the different scenes are: taking a three-dimensional model of a scene obtained by three-dimensional modeling or three-dimensional scanning as a scene model, or taking a three-dimensional model obtained by photographing the scene and performing three-dimensional reconstruction as the scene model; the scene is a building, a tree, a vehicle and/or a geographical environment of a preset area.
Wherein the three-dimensional models of the different scenes originate from different metaverse servers.
Preferably, the hardware of the metaverse server is a heterogeneous server. The software of the metaverse server comprises simulation SDKs for structure, perception and control simulation, and SDKs for rendering, real-time ray tracing and AI denoising; different SDKs are combined in a modular way through the Kit function of the SDKs, so that customized Apps or microservices can be developed rapidly; Create is used for modeling and rendering, and View for visualization. The software of the metaverse server further comprises the database and collaboration engine NUCLEUS, and the modeling-tool interconnection plug-in CONNECT, which supports interconnection with software such as 3DS MAX, UE and MAYA installed on each client connected to the metaverse server.
Further, in S100, the splicing sequence of the scene models is the order in which the scenes are spliced with one another into the virtual scene, starting from the first acquired scene model.
Preferably, in S100, the splicing sequence of the scene models is the chronological order in which the scene models were acquired.
Further, in S200, the method for calculating the edge fitness between the scene model and the adjacent scene model includes the following steps:
recording the number of scene models in the scene sequence as N; taking i as the serial number of a scene model in the scene sequence, with i ∈ [2, N-1]; taking L(i) as the splicing edge line between the i-th scene model and the (i-1)-th scene model; taking R(i) as the splicing edge line between the i-th scene model and the (i+1)-th scene model; the splicing edge line is the common edge line after the two scene models are merged (that is, the edge line at the position where the two scene models are to be merged or joined);
in the value range of i, sequentially calculating the edge adaptation degree Suit (i) between the ith scene model and the adjacent scene model as follows:
[Formula for Suit(i), shown only as an image in the original: it combines, via the natural logarithm ln, the pixel brightness differences Mart(i, j) defined below with the Meang mean gray values of the splicing edge lines.]
j is a variable, and the Meang function is the mean of the gray values of all pixel points on a splicing edge line; ln denotes the natural logarithm; Mart(i, j) is the pixel brightness difference between the i-th scene model and the j-th scene model in the scene sequence, calculated as: Mart(i, j) = |MaxG(L(i)) - MaxG(L(j))| - |MaxG(L(i)) - MaxG(R(j))|;
the MaxG function is the maximum gray value of all pixel points on a splicing edge line; L(j) is the splicing edge line between the j-th scene model and the (j-1)-th scene model; and R(j) is the splicing edge line between the j-th scene model and the (j+1)-th scene model.
The beneficial effects are as follows: the edge adaptation degree obtained from the pixel brightness difference calculation highlights how well the edge brightness of a scene model matches that of its adjacent scene models, exposes the subtle differences at scene seams in a virtual reality environment, and distinguishes the differing characteristics of the current scene model from the surrounding scene models according to the gray-level trend along their common edge lines.
Further, in S300, the method for marking the scene model requiring edge transition in each scene model as the transition model through the edge adaptation degree includes: taking the mean value of the edge adaptation degrees between all the scene models and the adjacent scene models as Suitmean; and in the value range of i, when the edge adaptation degree Suit (i) < Suitmean between the ith scene model and the adjacent scene model, judging that the scene model needs edge transition and marking the scene model as a transition model.
Further, in S400, the method for performing edge transition on each transition model in the scene sequence includes:
recording the number of transition models in the scene sequence as N2; taking k as the serial number of a transition model in the scene sequence, with k ∈ [2, N2-1]; and denoting the serial number of the scene model in the scene sequence that corresponds to the k-th transition model as k(i);
the method for screening the region to be transitioned of the transition model comprises the following specific steps:
taking L(k(i)) as the splicing edge line between the k-th transition model and the (k(i)-1)-th scene model, and R(k(i)) as the splicing edge line between the k-th transition model and the (k(i)+1)-th scene model; performing corner detection on L(k(i)) and R(k(i)) respectively to obtain corner points, taking the corner point with the largest gray value on L(k(i)) as MAX_Lk and the corner point with the largest gray value on R(k(i)) as MAX_Rk; taking the projection of the midpoint between MAX_Lk and MAX_Rk onto the k-th transition model as the depth point PJ(k), or taking the pixel point of the k-th transition model closest to that midpoint as the depth point PJ(k); taking the circumscribed circle of the triangle formed by connecting the three points MAX_Lk, MAX_Rk and PJ(k) as CYC1; and taking the region of the k-th transition model inside CYC1 as the region to be transitioned Trend(k) of the k-th transition model;
performing edge transition on the regions to be transitioned of each transition model, specifically:
marking CYC1(p) as the p-th pixel point in the region to be transitioned, where p is the serial number of the pixel point in that region, and traversing all CYC1(p) over the range of p to perform the edge transition, specifically: calculating the absolute value A(p) of the difference between the gray value of pixel point CYC1(p) and the gray value of MAX_Lk, and the absolute value B(p) of the difference between the gray value of pixel point CYC1(p) and the gray value of MAX_Rk; if A(p) > B(p), decreasing the pixel value of pixel point CYC1(p) by |MaxP(L(k(i))) - MaxP(R(k(i)))|, and otherwise increasing the pixel value of pixel point CYC1(p) by |MinP(L(k(i))) - MinP(R(k(i)))|;
the MaxP function is the maximum value of pixel values of all pixel points on the splicing edge line; the MinP function is the minimum value of the pixel values of all the pixel points on the splicing edge line.
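The per-pixel adjustment described above can be expressed compactly with array operations; the sketch below assumes the gray values and pixel values of the region Trend(k) and of the edge lines L(k(i)) and R(k(i)) are available as NumPy arrays.

```python
import numpy as np

def edge_transition_first_scheme(region_gray, region_pixels,
                                 gray_max_lk, gray_max_rk, edge_L, edge_R):
    # region_gray / region_pixels: gray values and pixel values of CYC1(p) in
    # Trend(k); gray_max_lk / gray_max_rk: gray values of MAX_Lk and MAX_Rk;
    # edge_L / edge_R: pixel values along L(k(i)) and R(k(i)).
    region_gray = np.asarray(region_gray, dtype=float)
    out = np.asarray(region_pixels, dtype=float).copy()
    dec = abs(float(np.max(edge_L)) - float(np.max(edge_R)))   # |MaxP(L) - MaxP(R)|
    inc = abs(float(np.min(edge_L)) - float(np.min(edge_R)))   # |MinP(L) - MinP(R)|
    a = np.abs(region_gray - gray_max_lk)                      # A(p)
    b = np.abs(region_gray - gray_max_rk)                      # B(p)
    out[a > b] -= dec     # A(p) > B(p): reduce the pixel value
    out[a <= b] += inc    # otherwise: increase the pixel value
    return out
```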
Performing edge transition on each transition model in the scene sequence greatly reduces the problem of excessive differences at the seams between metaverse scenes and improves the user's immersive experience; through pixel-level range fine-adjustment, the edge transition reliably eliminates sudden exposure and abrupt color changes when the user approaches a seam in the metaverse virtual scene. However, if the range of exposure and color mutation at the seam inside the region to be transitioned Trend(k) is too large, smaller ranges of exposure and color mutation may still appear when the user approaches the seam in the metaverse virtual scene. To eliminate this problem, the invention proposes the following preferred scheme for performing edge transition on the region to be transitioned of each transition model:
Preferably, as an alternative, the method for performing edge transition on the region to be transitioned of each transition model is specifically:
sequentially screening the region to be transitioned Trend(k-1) of the (k-1)-th transition model and the region to be transitioned Trend(k+1) of the (k+1)-th transition model;
marking CYC1(p) as the p-th pixel point in the region to be transitioned, where p is the serial number of the pixel point in that region, and traversing all CYC1(p) over the range of p to perform the edge transition, specifically: if the distance between pixel point CYC1(p) and MAX_Lk is smaller than the distance between pixel point CYC1(p) and MAX_Rk, marking MAX_Lk as the reference repair line LR, and otherwise marking MAX_Rk as the reference repair line LR; calculating the absolute value C(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k-1), and the absolute value D(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k+1); if C(p) > D(p), decreasing the pixel value of pixel point CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k+1))|, and otherwise increasing the pixel value of pixel point CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k-1))|;
the MeanP function is the average of the pixel values of all pixel points on a splicing edge line; the MeanL function is the average of the pixel values of all pixel points in a region to be transitioned. The above pixel values may be replaced with gray values.
The beneficial effects are that: the regions to be transitioned intelligently single out the salient regions where sudden exposure and color mutation appear near the seams of the virtual scene in the current scene sequence, so the regions to be transitioned are strongly correlated with the abnormal seam regions of the virtual scene. This scheme removes the smaller exposure and color mutations that remain when the exposure and color mutation range at the seam inside the region to be transitioned is large, greatly reduces the impact on the user's sense of immersion of ghosting caused by missing parts of the three-dimensional scene, and improves the visual immersion of seamless multi-scene switching.
Further, in S500, the method for sequentially stitching all three-dimensional models in the scene sequence to obtain the virtual world scene includes: and sequentially splicing all three-dimensional models in the scene sequence by any one of an APAP method, an SPHP method or a PT method to obtain the virtual world scene.
Preferably, in S500, the method for sequentially splicing all three-dimensional models in the scene sequence to obtain the virtual world scene is: splicing all three-dimensional models in the scene sequence in order by any one of the following to obtain the virtual world scene: the adaptive marker-free three-dimensional point cloud automatic splicing method of patent publication CN104392426B, the automatic registration algorithm for geometric data and texture data of a three-dimensional model of CN103049896B, the three-dimensional image splicing method, device and readable storage medium of CN109598677A, or the high-precision three-dimensional point cloud map automatic splicing and optimization method and system of CN114283250A.
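Whatever concrete pairwise method is chosen, S500 reduces to folding the ordered scene sequence with it; the sketch below fixes only that sequential order and leaves the pairwise splicing operation as a pluggable placeholder rather than implementing any of the cited methods.

```python
def stitch_scene_sequence(scene_models, stitch_pair):
    # S500 (sketch): splice the ordered scene sequence into one virtual world scene.
    # stitch_pair(merged, next_model) stands in for any concrete pairwise method
    # (e.g. APAP/SPHP-style alignment or the cited point-cloud registration methods).
    merged = scene_models[0]
    for model in scene_models[1:]:
        merged = stitch_pair(merged, model)   # splice adjacent models in sequence
    return merged
```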
An embodiment of the present invention provides a system for generating virtual world scenes in the meta-space; fig. 2 shows the structure of the system. The system of the present invention comprises: a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the above embodiment of the method for generating virtual world scenes in the meta-space are implemented.
The system comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run the following units of the system:
the scene model acquisition unit is used for acquiring three-dimensional models of different scenes as scene models and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
the edge adaptation calculation unit is used for sequentially calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence;
the transition model marking unit is used for marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
the edge transition processing unit is used for performing edge transition on each transition model in the scene sequence;
and the virtual scene splicing unit is used for sequentially splicing all the three-dimensional models in the scene sequence to obtain a virtual world scene.
And the virtual scene display unit is used for outputting the virtual world scene to the virtual reality headset for display.
The method and system for generating virtual world scenes in the meta-space can run on computing devices such as desktop computers, notebook computers, palmtop computers and cloud servers. The runnable system may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that this is only an example of the system for generating virtual world scenes in the meta-space and does not limit it; the system may include more or fewer components, may combine certain components, or may use different components; for example, it may further include input and output devices, network access devices, a bus, and the like.
The Processor may be a Central Processing Unit (CPU), another general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the system running the method for generating virtual world scenes in the meta-space, and uses various interfaces and lines to connect the parts of the whole system.
The memory can be used for storing the computer program and/or the modules, and the processor implements the various functions of the system for generating virtual world scenes in the meta-space by running or executing the computer program and/or modules stored in the memory and by calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to use (such as audio data or a phone book). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Although the description of the present invention has been presented in considerable detail and with reference to a few illustrated embodiments, it is not intended to be limited to any such detail or embodiment or any particular embodiment so as to effectively encompass the intended scope of the invention. Furthermore, the foregoing describes the invention in terms of embodiments foreseen by the inventor for which an enabling description was available, notwithstanding that insubstantial modifications of the invention, not presently foreseen, may nonetheless represent equivalent modifications thereto.

Claims (6)

1. A method for generating a virtual world scene in a meta-space is characterized by comprising the following steps:
s100, acquiring three-dimensional models of different scenes as scene models, and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
s200, sequentially calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence;
s300, marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
s400, performing edge transition on each transition model in the scene sequence;
s500, sequentially splicing all three-dimensional models in the scene sequence to obtain a virtual world scene;
in S200, the method for calculating the edge suitability between a scene model and an adjacent scene model includes the following steps:
recording the number of the scene models in the scene sequence as N; taking i as the serial number of a scene model in the scene sequence, with i ∈ [2, N-1]; taking L(i) as the splicing edge line between the i-th scene model and the (i-1)-th scene model; taking R(i) as the splicing edge line between the i-th scene model and the (i+1)-th scene model; the splicing edge line is the common edge line after the two scene models are merged;
in the value range of i, sequentially calculating the edge adaptation degree Suit (i) between the ith scene model and the adjacent scene model as follows:
[Formula for Suit(i), shown only as an image in the original: it combines, via the natural logarithm ln, the pixel brightness differences Mart(i, j) defined below with the Meang mean gray values of the splicing edge lines.]
j is a variable, and the Meang function is the mean of the gray values of all pixel points on a splicing edge line; ln denotes the natural logarithm; Mart(i, j) is the pixel brightness difference between the i-th scene model and the j-th scene model in the scene sequence, calculated as: Mart(i, j) = |MaxG(L(i)) - MaxG(L(j))| - |MaxG(L(i)) - MaxG(R(j))|;
the MaxG function is the maximum gray value of all pixel points on a splicing edge line; L(j) is the splicing edge line between the j-th scene model and the (j-1)-th scene model; and R(j) is the splicing edge line between the j-th scene model and the (j+1)-th scene model.
2. The method according to claim 1, wherein in S100, the splicing sequence of the scene models is the order in which the scenes are spliced with one another into the virtual scene, starting from the first acquired scene model.
3. The method for generating virtual world scenes in the meta-space according to claim 1, wherein in S300, the method for marking the scene model needing edge transition in each scene model as the transition model through the edge adaptation degree comprises the following steps: taking the mean value of the edge adaptation degrees between all the scene models and the adjacent scene models as Suitmean; within the value range of i, when the edge adaptation degree Suit (i) < Suitmean between the ith scene model and the adjacent scene model, judging that the scene model needs edge transition and marking the scene model as a transition model.
4. The method for generating a virtual world scene in a meta-space according to claim 1, wherein in S400, the method for performing edge transition on each transition model in the scene sequence is:
recording the number of transition models in the scene sequence as N2; taking k as the serial number of a transition model in the scene sequence, with k ∈ [2, N2-1]; and denoting the serial number of the scene model in the scene sequence that corresponds to the k-th transition model as k(i);
the method for screening the region to be transitioned of the transition model comprises the following specific steps:
taking L(k(i)) as the splicing edge line between the k-th transition model and the (k(i)-1)-th scene model, and R(k(i)) as the splicing edge line between the k-th transition model and the (k(i)+1)-th scene model; performing corner detection on L(k(i)) and R(k(i)) respectively to obtain corner points, taking the corner point with the largest gray value on L(k(i)) as MAX_Lk and the corner point with the largest gray value on R(k(i)) as MAX_Rk; taking the projection of the midpoint between MAX_Lk and MAX_Rk onto the k-th transition model as the depth point PJ(k), or taking the pixel point of the k-th transition model closest to that midpoint as the depth point PJ(k); taking the circumscribed circle of the triangle formed by connecting the three points MAX_Lk, MAX_Rk and PJ(k) as CYC1; and taking the region of the k-th transition model inside CYC1 as the region to be transitioned Trend(k) of the k-th transition model;
performing edge transition on the to-be-transitioned area of each transition model, specifically:
marking CYC1(p) as the p-th pixel point in the region to be transitioned, where p is the serial number of the pixel point in that region, and traversing all CYC1(p) over the range of p to perform the edge transition, specifically: calculating the absolute value A(p) of the difference between the gray value of pixel point CYC1(p) and the gray value of MAX_Lk, and the absolute value B(p) of the difference between the gray value of pixel point CYC1(p) and the gray value of MAX_Rk; if A(p) > B(p), decreasing the pixel value of pixel point CYC1(p) by |MaxP(L(k(i))) - MaxP(R(k(i)))|, and otherwise increasing the pixel value of pixel point CYC1(p) by |MinP(L(k(i))) - MinP(R(k(i)))|;
the MaxP function is the maximum value of pixel values of all pixel points on the splicing edge line; the MinP function is the minimum value of the pixel values of all the pixel points on the splicing edge line.
5. The method for generating the virtual world scene in the meta-space according to claim 4, wherein the method for performing the edge transition on the region to be transitioned of each transition model specifically comprises:
sequentially screening the region to be transitioned Trend(k-1) of the (k-1)-th transition model and the region to be transitioned Trend(k+1) of the (k+1)-th transition model;
marking CYC1(p) as the p-th pixel point in the region to be transitioned, where p is the serial number of the pixel point in that region, and traversing all CYC1(p) over the range of p to perform the edge transition, specifically: if the distance between pixel point CYC1(p) and MAX_Lk is smaller than the distance between pixel point CYC1(p) and MAX_Rk, marking MAX_Lk as the reference repair line LR, and otherwise marking MAX_Rk as the reference repair line LR; calculating the absolute value C(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k-1), and the absolute value D(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k+1); if C(p) > D(p), decreasing the pixel value of pixel point CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k+1))|, and otherwise increasing the pixel value of pixel point CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k-1))|;
the MeanP function is the average of the pixel values of all pixel points on a splicing edge line; the MeanL function is the average of the pixel values of all pixel points in a region to be transitioned.
6. A system for generating virtual world scenes in a meta-space, the system comprising: a processor, a memory, and a computer program stored in the memory and executed on the processor, the processor implementing the steps in a method for generating virtual world scenes in metaspace according to any of claims 1 to 5 when executing the computer program.
CN202211592134.8A 2022-12-13 2022-12-13 Virtual world scene generation method and system in meta-space Active CN115661417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211592134.8A CN115661417B (en) 2022-12-13 2022-12-13 Virtual world scene generation method and system in meta-space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211592134.8A CN115661417B (en) 2022-12-13 2022-12-13 Virtual world scene generation method and system in meta-space

Publications (2)

Publication Number Publication Date
CN115661417A CN115661417A (en) 2023-01-31
CN115661417B true CN115661417B (en) 2023-03-31

Family

ID=85019543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211592134.8A Active CN115661417B (en) 2022-12-13 2022-12-13 Virtual world scene generation method and system in meta-space

Country Status (1)

Country Link
CN (1) CN115661417B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770656A (en) * 2010-02-11 2010-07-07 中铁第一勘察设计院集团有限公司 Stereo orthophoto pair-based large-scene stereo model generating method and measuring method thereof
CN111612897A (en) * 2020-06-05 2020-09-01 腾讯科技(深圳)有限公司 Three-dimensional model fusion method, device and equipment and readable storage medium
CN112231020A (en) * 2020-12-16 2021-01-15 成都完美时空网络技术有限公司 Model switching method and device, electronic equipment and storage medium
CN114004939A (en) * 2021-12-31 2022-02-01 深圳奥雅设计股份有限公司 Three-dimensional model optimization method and system based on modeling software script

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2781300B1 (en) * 1998-07-16 2000-09-29 France Telecom METHOD FOR MODELING 3D OBJECTS OR SCENES

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770656A (en) * 2010-02-11 2010-07-07 中铁第一勘察设计院集团有限公司 Stereo orthophoto pair-based large-scene stereo model generating method and measuring method thereof
CN111612897A (en) * 2020-06-05 2020-09-01 腾讯科技(深圳)有限公司 Three-dimensional model fusion method, device and equipment and readable storage medium
CN112231020A (en) * 2020-12-16 2021-01-15 成都完美时空网络技术有限公司 Model switching method and device, electronic equipment and storage medium
WO2022127275A1 (en) * 2020-12-16 2022-06-23 成都完美时空网络技术有限公司 Method and device for model switching, electronic device, and storage medium
CN114004939A (en) * 2021-12-31 2022-02-01 深圳奥雅设计股份有限公司 Three-dimensional model optimization method and system based on modeling software script

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Shuqing; Li Yewei. Multi-exposure image fusion based on brightness consistency. Journal of Hubei University of Technology, No. 01. *

Also Published As

Publication number Publication date
CN115661417A (en) 2023-01-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant