CN114740969A - Interaction system and method for fusing digital twin and virtual reality - Google Patents



Publication number
CN114740969A
CN114740969A (application CN202210122074.7A)
Authority
CN
China
Prior art keywords: data, value, pixel point, purity, background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210122074.7A
Other languages
Chinese (zh)
Inventor
刘延锋
黄启东
张宇
徐如�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dexin Diantong Technology Co ltd
Original Assignee
Beijing Dexin Diantong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dexin Diantong Technology Co ltd filed Critical Beijing Dexin Diantong Technology Co ltd
Priority to CN202210122074.7A priority Critical patent/CN114740969A/en
Publication of CN114740969A publication Critical patent/CN114740969A/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an interaction system and method fusing digital twin and virtual reality, which solves the technical problem of low efficiency. The system comprises a data acquisition management subsystem, a digital twin model modeling and management subsystem, and an interactive function realization subsystem. The data acquisition management subsystem acquires, transmits and manages data of the interactive terminals and comprises a data acquisition unit, a data transmission unit and a data management unit. The digital twin model modeling and management subsystem performs digital twin modeling from the data of the data acquisition management subsystem and maps the real physical scene sensed by each interactive terminal into a virtual space scene. The interactive function realization subsystem analyzes the interaction process and feeds back and controls interaction data or strategies. By adopting the technical scheme in which the data management unit executes a data management program comprising the steps below, the problem is well solved, and the invention can be used for interaction between digital twins and virtual reality.

Description

Interaction system and method integrating digital twins and virtual reality
Technical Field
The invention relates to the field of virtual reality, in particular to an interactive system and method fusing digital twins and virtual reality.
Background
A digital twin is a simulation process that integrates multiple disciplines, physical quantities, scales and probabilities by fully exploiting data such as physical models, sensor updates and operation history, completing a mapping in virtual space so as to reflect the full life cycle of the corresponding physical equipment. Digital twinning is a concept that transcends reality and can be viewed as a digital mapping system of one or more important, interdependent equipment systems. As a broadly applicable theoretical and technical system, it can be used in many fields, most notably product design, product manufacturing, medical analysis and engineering construction. In China its deepest application is in the field of engineering construction, while it receives the highest attention and hottest research in the field of intelligent manufacturing.
The invention provides an interaction system and method fusing digital twin and virtual reality, which solves the problem that existing interactive systems have poor real-time performance.
Disclosure of Invention
The invention aims to solve the technical problem of poor real-time performance in the prior art by providing a novel interaction system fusing digital twin and virtual reality, which is characterized by high real-time performance.
In order to solve the technical problems, the technical scheme is as follows:
an interactive system fusing digital twin and virtual reality comprises at least 2 interactive terminals which interact with each other, and is characterized in that: the interactive system fusing the digital twin and the virtual reality comprises a data acquisition management subsystem, a digital twin model modeling and management subsystem and an interactive function realization subsystem;
the data acquisition management subsystem is used for acquiring, transmitting and managing data of the interactive terminal and comprises a data acquisition unit, a data transmission unit and a data management unit;
the digital twin model modeling and management subsystem is used for carrying out data twin modeling according to the data of the data acquisition management subsystem and mapping a real physical scene sensed by the interactive terminal into a virtual space scene;
the interactive function realization subsystem is used for analyzing the interactive process and feeding back and controlling interactive data or strategies;
the data management unit executes a data management program comprising the following steps:
step S1, calculating the change rate of each item of data, and distinguishing high-speed update data P_n from low-speed update data P_r according to a predefined change-rate threshold;
step S2, for the high-speed update data P_n, calling a high-speed change data estimation and integration program unit to estimate and integrate the data;
step S3, for the low-speed update data P_r, calling a slow data processing and integration program unit to calculate and integrate, the slow data processing and integration program unit directly calculating the variation value of the low-speed update data P_r;
step S4, if the variation value estimated by the high-speed change data estimation and integration program unit exceeds a predefined threshold, or the variation value calculated by the slow data processing and integration program unit exceeds a predefined threshold, updating and transmitting the corresponding data to the digital twin model modeling and management subsystem through the data transmission unit.
The working principle of the invention is as follows: the invention effectively distinguishes and processes data during the digital twin modeling, calculation and simulation process. By distinguishing data according to change rate, the processing decision and method can be selected effectively, improving real-time performance and efficiency: the real-time change rate of each item of data is calculated, and high-speed update data P_n is distinguished from low-speed update data P_r by the predefined change-rate threshold; for high-speed update data P_n, a high-speed change data estimation and fusion program is called to estimate and fuse the data; for low-speed update data P_r, the low-speed data processing and fusion program is called directly to calculate and fuse, and the variation value of P_r is calculated directly, which improves efficiency and reduces system overhead.
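The data management flow of steps S1-S4 can be sketched as follows. This is an illustrative Python sketch only: the change-rate computation, both processing paths, and all threshold values are assumptions chosen for demonstration, not taken from the patent.

```python
# Illustrative sketch of the data management program (steps S1-S4):
# classify each data item by its rate of change against a predefined
# threshold, route it to the fast (estimation) or slow (direct) path,
# and flag it for transmission when its variation exceeds a threshold.
# All names, formulas and thresholds here are hypothetical.

RATE_THRESHOLD = 5.0        # predefined change-rate threshold (step S1)
VARIATION_THRESHOLD = 2.0   # predefined variation threshold (step S4)

def change_rate(history):
    """Average absolute change per step over the sample history."""
    if len(history) < 2:
        return 0.0
    return sum(abs(b - a) for a, b in zip(history, history[1:])) / (len(history) - 1)

def estimate_fast(history):
    """Stand-in for the high-speed estimation/integration unit (step S2):
    extrapolate the next variation from the last observed step."""
    return history[-1] - history[-2]

def compute_slow(history):
    """Slow path (step S3): compute the variation value directly."""
    return history[-1] - history[0]

def manage(data_items):
    """Return the names of items whose variation warrants transmission (S4)."""
    to_transmit = []
    for name, history in data_items.items():
        if change_rate(history) >= RATE_THRESHOLD:   # high-speed data P_n
            variation = estimate_fast(history)
        else:                                        # low-speed data P_r
            variation = compute_slow(history)
        if abs(variation) > VARIATION_THRESHOLD:
            to_transmit.append(name)
    return to_transmit
```

Splitting the two paths this way mirrors the stated principle: fast-changing data gets a cheap estimate, slow-changing data is computed exactly, and only significant variations are transmitted.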
In the above solution, for optimization, the high-speed change data estimation and integration program unit performs data estimation and integration by executing the following steps:
step A1, define the empirical characteristic function of the historical samples,
    φ̂(w) = (1/K) Σ_{k=1}^{K} exp(j·w·x_k),
where {x_1, x_2, ... x_K} are K independent observations among the historical high-speed update data samples, j is the imaginary unit, and w ranges over a predefined real number set {w_1, w_2, ... w_k};
step A2, through the regression y_k = μ + α·t_k + ε_k, with μ = log(2γ) and y_k = log(−log|φ̂(w_k)|²), calculate the feature index α and the dispersion coefficient γ; where ε_k is a predefined error-term coefficient with mean 0, t_k = log|w_k|, and K is the number of historical samples;
step A3, through z_k = δ·w_k + ε_k, calculate the location parameter δ, where z_k = arctan(Im(φ̂(w_k))/Re(φ̂(w_k))) and the ε_k are predefined error-term coefficients with mean 0 that are identically distributed but independent;
step A4, substitute the feature index α, dispersion coefficient γ and location parameter δ obtained in steps A2 and A3 into φ(w) = exp{jδw − γ|w|^α} and apply a Fourier transform to obtain the probability density function f(x) as the fitted estimate of the high-speed update data P_n;
step A5, determine whether the index formed from A and the parameters estimated from the historical high-speed update data samples (equation image not reproduced in the text) exceeds the predefined threshold T_max; this serves as the criterion that the variation value estimated by the high-speed change data estimation and integration program unit exceeds the predefined threshold, where A is the data value to be estimated and integrated in real time and T_max is a predefined detection threshold for the evaluation rate.
The invention further improves timeliness by integrating, estimating and processing the high-speed change data.
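Steps A1-A3 amount to a regression fit of a stable distribution's parameters from the empirical characteristic function of historical samples. The sketch below is a hedged reconstruction along the lines of the classical Koutrouvelis-style regression; the exact formulas behind the patent's equation images may differ, and all function names are illustrative.

```python
# Hedged sketch of the regression-based estimation in steps A1-A3:
# fit the feature index (characteristic exponent) alpha, dispersion gamma
# and location delta from the empirical characteristic function.
# This is an assumption-based reconstruction, not the patent's exact method.
import cmath
import math

def empirical_cf(samples, w):
    """phi_hat(w) = (1/K) * sum_k exp(j * w * x_k)  (step A1)."""
    return sum(cmath.exp(1j * w * x) for x in samples) / len(samples)

def estimate_stable(samples, w_grid):
    # Step A2: regress y_k = log(-log|phi_hat(w_k)|^2) on t_k = log|w_k|;
    # the slope is alpha, the intercept is mu = log(2 * gamma).
    ys = [math.log(-math.log(abs(empirical_cf(samples, w)) ** 2)) for w in w_grid]
    ts = [math.log(abs(w)) for w in w_grid]
    n = len(w_grid)
    t_mean, y_mean = sum(ts) / n, sum(ys) / n
    alpha = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) / \
            sum((t - t_mean) ** 2 for t in ts)
    gamma = math.exp(y_mean - alpha * t_mean) / 2.0
    # Step A3: regress z_k = arctan(Im/Re of phi_hat(w_k)) = delta * w_k
    # (least squares through the origin) to obtain the location delta.
    zs = [cmath.phase(empirical_cf(samples, w)) for w in w_grid]
    delta = sum(z * w for z, w in zip(zs, w_grid)) / sum(w * w for w in w_grid)
    return alpha, gamma, delta
```

For symmetric data centered at zero the fit should return delta near 0 and, for Gaussian-like samples, alpha near 2, which matches the Gaussian special case of the stable family.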
Further, the data acquisition management subsystem comprises data acquisition units, data transmission units and data management units arranged with peer-level redundancy, and the data acquisition management subsystem executes the following steps:
step B1, selecting several data acquisition units, several data transmission units and several data management units to form a real-time data acquisition management subsystem;
step B2, for any pair of adjacent front and back units, defining the front unit as the primary unit and the back unit as the secondary unit;
step B3, defining a global efficacy model as H = H_1·H_2·H_3·H_4·H_5, where H_1 is the effectiveness, H_2 the processing efficiency, H_3 the system load factor, H_4 the data processing accuracy, and H_5 the system failure rate;
step B4, H_4 is predefined and H_5 is the system failure rate calculated from historical conditions; H_1 is computed from W and T (equation image not reproduced in the text), and
    H_2 = P·H_21 + (1 − P)·H_21·H_22,    H_3 = (N·H_31 + M·H_32)/(N + M);
where W = P·W_1 + (1 − P)(W_1 + W_2) and T = P·T_1 + (1 − P)(T_1 + T_2), T is the total time the data spends in the primary and secondary units, and P is the probability of data passing from the primary unit to the secondary unit; the primary unit processing efficiency H_21, secondary unit processing efficiency H_22, primary unit load factor H_31 and secondary unit load factor H_32 are given by equation images not reproduced in the text; N is the number of primary units, M the number of secondary units, and R an integer; P_R is obtained from the predefined average data volume of the primary units and Q_R from the predefined average data volume of the secondary units (equation images not reproduced); W_1 = L_1/λ is the average response time of primary unit data and W_2 = L_2/(λ·H_21·P) is the secondary unit response time; T_1 = 1/μ_1 is the average service time of primary unit data and T_2 = 1/μ_2 the average service time of secondary unit data; μ_1 and μ_2 are exponential-distribution parameters and λ is a predefined Poisson parameter;
step B5, calculating the total efficacy value of the real-time data acquisition management subsystem and judging its magnitude; if the total efficacy value exceeds the predefined threshold, returning to step B1 to reselect and form a new real-time data acquisition management subsystem.
The invention evaluates the real-time efficacy of the system and avoids faults and functional degradation by adjusting the system combination.
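The efficacy factors of steps B3-B4 multiply into a single score. A minimal sketch follows, using only the factor formulas that are legible in the text (the H_21/H_22/H_31/H_32 queueing expressions sit behind equation images, so they are left as plain inputs here); all function names are illustrative.

```python
# Illustrative sketch of the global efficacy evaluation (steps B3-B5).
# H1: effectiveness, H2: processing efficiency, H3: load factor,
# H4: data-processing accuracy (predefined), H5: failure rate (historical).
# The sub-factor inputs (h21, h22, h31, h32) stand in for the patent's
# unreproduced queueing formulas and are assumptions.

def global_efficacy(h1, h2, h3, h4, h5):
    """H = H1 * H2 * H3 * H4 * H5 (step B3)."""
    return h1 * h2 * h3 * h4 * h5

def h2_processing_efficiency(p, h21, h22):
    """H2 = P*H21 + (1 - P)*H21*H22, with P the probability that data
    passes from the primary unit on to the secondary unit."""
    return p * h21 + (1 - p) * h21 * h22

def h3_load_factor(n, m, h31, h32):
    """H3 = (N*H31 + M*H32) / (N + M): load averaged over the
    N primary and M secondary units."""
    return (n * h31 + m * h32) / (n + m)
```

In step B5 the resulting H would be compared against the predefined threshold to decide whether to reselect the unit combination.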
Furthermore, the digital twin model modeling and management subsystem comprises a cloud computing unit and edge computing units; an edge computing unit is arranged at each interactive terminal, and the cloud computing unit is arranged in a cloud server connected to each interactive terminal;
the edge computing unit provides the physical entity model, and the cloud computing unit constructs a virtual entity model from the physical entity data and builds a bidirectional interaction and correction channel between the physical entity model and the virtual entity model.
Further, the cloud computing unit interactively fuses the virtual entity models of the interactive terminals by executing the following steps:
step R1, selecting one virtual entity model as the target space model and defining the remaining virtual entity models as source space models; selecting one source space model and executing step R2;
step R2, extracting a background region Ω from the source space model and acquiring background contour key-point information, which comprises the purity value and two-dimensional plane coordinate value of each contour key point in the background region Ω;
step R3, presetting a threshold for the R, G, B components of pixel points and constructing source space scene data: the R, G, B component values of pixel points not belonging to the background region Ω are set to 0; if an R, G, B component value of a pixel point belonging to the background region Ω is smaller than the preset threshold, that component value is set to the threshold, otherwise the actual R, G, B component value of the pixel point is used;
step R4, receiving the source space scene data and contour key-point information, defining the pixel points whose R, G, B values are larger than the threshold as background-region pixel points, and recovering the source space background region according to the threshold preset in step R3;
step R5, establishing a background purity recovery model, and calculating the purity value d'_p of the pixel point in the region to be fused in the target space at the same position as pixel point p in the background region Ω, based on the purity values of the neighborhood pixel points q of p in the background region Ω; the background purity recovery model is:
    d'_p = lim_{t→∞} d'_p(t),
with d'_p(t) given by a weighted average over the neighborhood (the two equation images are not reproduced in the text), where g_p and g_q are the gray values of pixel point p and neighborhood point q in the target space region to be fused, σ_g is the standard deviation of the Gaussian function, T is a constant coefficient, N_8(p) denotes the eight-connected neighborhood of pixel point p, and d'_q(t−1) is the known purity value of neighborhood point q of pixel point p among the pixel points of the background region Ω;
step R6, partitioning the background region Ω of the source space model based on contour key points A and B; for any pixel point in the background region Ω, if the target space purity value corresponding to its position lies outside the purity value range of the background block region to which it belongs, executing step R7, otherwise executing step R8;
the purity value range is D_AB = {d | d_min ≤ d ≤ d_max}, with minimum purity value d_min = min(d_A, d_B) and maximum purity value d_max = max(d_A, d_B);
step R7, if the target space purity value corresponding to the pixel position in the background region Ω is larger than the maximum of the purity range of the background block region to which the pixel belongs, displaying the background at that pixel position, otherwise displaying the target space;
step R8, if a pixel point p ∈ Ω lies within (R_1 ∪ R_2 ∪ ... ∪ R_qjs), where qjs is the number of background blocks, defining pixel point p as located within the background block regions of the background region Ω, establishing a geometric block structure model and solving for the purity value of pixel point p; if pixel point p is not within the background block regions, executing step R9;
step R9, establishing a neighborhood microstructure model and solving for the purity value of pixel point p;
step R10, determining the display result according to the purity relation of corresponding pixel points in the source space and target space, obtaining a geometrically consistent spatial object fusion effect;
and step R11, reselecting a source space model, updating the fused space model to be defined as the target space model of step R1, and returning to execute step R1 until all source space models have been traversed.
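The background purity recovery model of step R5 is an iterative relaxation over each pixel's eight-connected neighborhood. A minimal Python sketch follows; the Gaussian grey-level similarity weight w_pq = exp(−(g_p − g_q)²/(2σ_g²)) is an assumption reconstructed from the surrounding text, since the patent's own weight formula is behind an equation image.

```python
# Minimal sketch of step R5: relax the unknown purity d'_p at pixel p
# from the known purities of its 8-neighbourhood, weighting neighbours
# by grey-level similarity. The weight formula and sigma_g value are
# assumptions, not the patent's exact expressions.
import math

def gaussian_weight(g_p, g_q, sigma_g=10.0):
    """Assumed similarity weight w_pq = exp(-(g_p - g_q)^2 / (2*sigma_g^2))."""
    return math.exp(-((g_p - g_q) ** 2) / (2.0 * sigma_g ** 2))

def recover_purity(gray, purity, p, iterations=50):
    """Iterate d'_p(t) = sum_q w_pq * d'_q(t-1) / sum_q w_pq over the
    eight-connected neighbourhood of p (d'_p = lim d'_p(t))."""
    rows, cols = len(gray), len(gray[0])
    r0, c0 = p
    d_p = 0.0
    for _ in range(iterations):
        num = den = 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue                     # skip the centre pixel
                r, c = r0 + dr, c0 + dc
                if 0 <= r < rows and 0 <= c < cols:
                    w = gaussian_weight(gray[r0][c0], gray[r][c])
                    num += w * purity[r][c]      # known neighbour purity
                    den += w
        d_p = num / den
    return d_p
```

With uniform neighbour purities the relaxation reproduces that value exactly, which is the sanity check one would expect of any such weighted-average recovery.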
Further, the geometric block structure model includes:
step J1, defining the two-dimensional plane area range corresponding to the background block region formed by contour key points A and B as R_AB = {x_s, y_s, w, h | w = x_e − x_s, h = y_e − y_s}, with starting abscissa x_s = min(x_A, x_B), starting ordinate y_s = min(y_A, y_B), ending abscissa x_e = max(x_A, x_B) and ending ordinate y_e = max(y_A, y_B), and defining the pixel point corresponding to the minimum purity value of the region as p_min;
step J2, calculating the purity value d'_p of the pixel point p in the background block region (equation image not reproduced in the text).
Further, the neighborhood microstructure model is consistent with the background purity recovery model, with only the w_pq factor changed (equation image not reproduced in the text).
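The block region of step J1 and the containment test implied by step R8 can be sketched as follows; the helper names are hypothetical, taken only from the definitions given in the text.

```python
# Sketch of step J1: the block region spanned by contour key points A and B
# as R_AB = {x_s, y_s, w, h}, plus a containment test of the kind step R8
# needs to decide whether a pixel falls inside a background block.

def block_region(a, b):
    """R_AB from key points a = (x_A, y_A), b = (x_B, y_B):
    x_s = min(x_A, x_B), y_s = min(y_A, y_B),
    w = x_e - x_s, h = y_e - y_s (x_e, y_e are the maxima)."""
    x_s, y_s = min(a[0], b[0]), min(a[1], b[1])
    x_e, y_e = max(a[0], b[0]), max(a[1], b[1])
    return (x_s, y_s, x_e - x_s, y_e - y_s)

def contains(region, p):
    """True if pixel p = (x, y) lies within the block region."""
    x_s, y_s, w, h = region
    return x_s <= p[0] <= x_s + w and y_s <= p[1] <= y_s + h
```

A pixel passing `contains` for some R_i would take the geometric block path of step R8; otherwise it falls through to the neighborhood microstructure model of step R9.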
the invention also provides an interaction method for fusing digital twins and virtual reality, which comprises the following steps:
the data acquisition and management subsystem is used for acquiring, transmitting and managing data of the interactive terminal and executing a data management program comprising the following steps:
step S1, calculating the change rate of each item of data, and distinguishing the high-speed update data P according to the predefined change rate thresholdnAnd updating the data P at a low speedr
Step S2, for high-speed update data PnCalling a high-speed change data estimation integration program unit to estimate and integrate data;
step S3, for low-speed update data PrCalling the slow data processing integration program unit to calculate and integrate, and directly updating the low-speed data P by the slow data processing integration program unitrCalculating the variation value of (a);
step S4, if the variation value estimated by the high speed variation data estimation integration program unit exceeds the predefined threshold or the variation value calculated by the slow speed data processing integration program unit exceeds the predefined threshold, the corresponding data is updated and transmitted to the digital twin model modeling and management subsystem through the data transmission unit
The digital twin model modeling and management subsystem is used for carrying out data twin modeling according to the data of the data acquisition management subsystem and mapping a real physical scene sensed by the interactive terminal into a virtual space scene;
step three, fusing the virtual space scenes sensed by the interactive terminals;
step four, the interactive function realization subsystem is used for collecting and analyzing the interactive process, returning to the execution step two in real time, analyzing the effect of the interactive process and feeding back and controlling interactive data or strategies;
and step five, finishing interaction.
Further, the third step includes:
step R1, selecting one virtual entity model as the target space model and defining the remaining virtual entity models as source space models; selecting one source space model and executing step R2;
step R2, extracting a background region Ω from the source space model and acquiring background contour key-point information, which comprises the purity value and two-dimensional plane coordinate value of each contour key point in the background region Ω;
step R3, presetting a threshold for the R, G, B components of pixel points and constructing source space scene data: the R, G, B component values of pixel points not belonging to the background region Ω are set to 0; if an R, G, B component value of a pixel point belonging to the background region Ω is smaller than the preset threshold, that component value is set to the threshold, otherwise the actual R, G, B component value of the pixel point is used;
step R4, receiving the source space scene data and contour key-point information, defining the pixel points whose R, G, B values are larger than the threshold as background-region pixel points, and recovering the source space background region according to the threshold preset in step R3;
step R5, establishing a background purity recovery model, and calculating the purity value d'_p of the pixel point in the region to be fused in the target space at the same position as pixel point p in the background region Ω, based on the purity values of the neighborhood pixel points q of p in the background region Ω; the background purity recovery model is:
    d'_p = lim_{t→∞} d'_p(t),
with d'_p(t) given by a weighted average over the neighborhood (the two equation images are not reproduced in the text), where g_p and g_q are the gray values of pixel point p and neighborhood point q in the target space region to be fused, σ_g is the standard deviation of the Gaussian function, T is a constant coefficient, N_8(p) denotes the eight-connected neighborhood of pixel point p, and d'_q(t−1) is the known purity value of neighborhood point q of pixel point p among the pixel points of the background region Ω;
step R6, partitioning the background region Ω of the source space model based on contour key points A and B; for any pixel point in the background region Ω, if the target space purity value corresponding to its position lies outside the purity value range of the background block region to which it belongs, executing step R7, otherwise executing step R8;
the purity value range is D_AB = {d | d_min ≤ d ≤ d_max}, with minimum purity value d_min = min(d_A, d_B) and maximum purity value d_max = max(d_A, d_B);
step R7, if the target space purity value corresponding to the pixel position in the background region Ω is larger than the maximum of the purity range of the background block region to which the pixel belongs, displaying the background at that pixel position, otherwise displaying the target space;
step R8, if a pixel point p ∈ Ω lies within (R_1 ∪ R_2 ∪ ... ∪ R_qjs), where qjs is the number of background blocks, defining pixel point p as located within the background block regions of the background region Ω, establishing a geometric block structure model and solving for the purity value of pixel point p; if pixel point p is not within the background block regions, executing step R9;
step R9, establishing a neighborhood microstructure model and solving for the purity value of pixel point p;
step R10, determining the display result according to the purity relation of corresponding pixel points in the source space and target space, obtaining a geometrically consistent spatial object fusion effect;
and step R11, reselecting a source space model, updating the fused space model to be defined as the target space model of step R1, and returning to execute step R1 until all source space models have been traversed.
Further, the high-speed change data estimation and integration program unit estimates and integrates the data by executing the following steps:
step A1, define the empirical characteristic function of the historical samples,
    φ̂(w) = (1/K) Σ_{k=1}^{K} exp(j·w·x_k),
where {x_1, x_2, ... x_K} are K independent observations among the historical high-speed update data samples, j is the imaginary unit, and w ranges over a predefined real number set {w_1, w_2, ... w_k};
step A2, through the regression y_k = μ + α·t_k + ε_k, with μ = log(2γ) and y_k = log(−log|φ̂(w_k)|²), calculate the feature index α and the dispersion coefficient γ; where ε_k is a predefined error-term coefficient with mean 0, t_k = log|w_k|, and K is the number of historical samples;
step A3, through z_k = δ·w_k + ε_k, calculate the location parameter δ, where z_k = arctan(Im(φ̂(w_k))/Re(φ̂(w_k))) and the ε_k are predefined error-term coefficients with mean 0 that are identically distributed but independent;
step A4, substitute the feature index α, dispersion coefficient γ and location parameter δ obtained in steps A2 and A3 into φ(w) = exp{jδw − γ|w|^α} and apply a Fourier transform to obtain the probability density function f(x) as the fitted estimate of the high-speed update data P_n;
step A5, determine whether the index formed from A and the parameters estimated from the historical high-speed update data samples (equation image not reproduced in the text) exceeds the predefined threshold T_max; this serves as the criterion that the variation value estimated by the high-speed change data estimation and integration program unit exceeds the predefined threshold, where A is the data value to be estimated and integrated in real time and T_max is a predefined detection threshold for the evaluation rate.
The invention has the following beneficial effects: the invention effectively distinguishes and processes data during the digital twin modeling, calculation and simulation process. By distinguishing data according to change rate, the processing decision and method can be selected effectively, improving real-time performance and efficiency: the real-time change rate of each item of data is calculated, and high-speed update data P_n is distinguished from low-speed update data P_r by the predefined change-rate threshold; for high-speed update data P_n, a high-speed change data estimation and fusion program is called to estimate and fuse the data; for low-speed update data P_r, the low-speed data processing and fusion program is called directly to calculate and fuse, and the variation value of P_r is calculated directly, which improves efficiency and reduces system overhead.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a schematic diagram of an interactive system for fusing digital twins and virtual reality.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
The embodiment provides an interactive system fusing a digital twin and a virtual reality, as shown in fig. 1, which includes at least 2 interactive terminals interacting with each other, where the interactive system fusing the digital twin and the virtual reality includes a data acquisition management subsystem, a digital twin model modeling and management subsystem, and an interactive function realization subsystem;
the data acquisition management subsystem is used for acquiring, transmitting and managing data of the interactive terminal and comprises a data acquisition unit, a data transmission unit and a data management unit;
the digital twin model modeling and management subsystem is used for carrying out data twin modeling according to the data of the data acquisition management subsystem and mapping a real physical scene sensed by the interactive terminal into a virtual space scene;
the interactive function realization subsystem is used for analyzing the interactive process and feeding back and controlling interactive data or strategies;
the data management unit executes a data management program comprising the following steps:
step S1, calculating the change rate of each item of data, and distinguishing high-speed update data P_n from low-speed update data P_r according to a predefined change-rate threshold;
step S2, for the high-speed update data P_n, calling a high-speed change data estimation and integration program unit to estimate and integrate the data;
step S3, for the low-speed update data P_r, calling a slow data processing and integration program unit to calculate and integrate, the slow data processing and integration program unit directly calculating the variation value of the low-speed update data P_r;
step S4, if the variation value estimated by the high-speed change data estimation and integration program unit exceeds a predefined threshold, or the variation value calculated by the slow data processing and integration program unit exceeds a predefined threshold, updating and transmitting the corresponding data to the digital twin model modeling and management subsystem through the data transmission unit.
In this embodiment, data are effectively distinguished and processed during the digital twin modeling, calculation and simulation process. By distinguishing data according to change rate, the processing decision and method can be selected effectively, improving real-time performance and efficiency: the real-time change rate of each item of data is calculated, and high-speed update data P_n is distinguished from low-speed update data P_r by the predefined change-rate threshold; for high-speed update data P_n, a high-speed change data estimation and fusion program is called to estimate and fuse the data; for low-speed update data P_r, the low-speed data processing and fusion program is called directly to calculate and fuse, and the variation value of P_r is calculated directly, which improves efficiency and reduces system overhead.
Preferably, the high-speed change data estimation and integration program unit performs data estimation and integration by executing the following steps:
step A1, define the empirical characteristic function of the historical samples,
    φ̂(w) = (1/K) Σ_{k=1}^{K} exp(j·w·x_k),
where {x_1, x_2, ... x_K} are K independent observations among the historical high-speed update data samples, j is the imaginary unit, and w ranges over a predefined real number set {w_1, w_2, ... w_k};
step A2, through the regression y_k = μ + α·t_k + ε_k, with μ = log(2γ) and y_k = log(−log|φ̂(w_k)|²), calculate the feature index α and the dispersion coefficient γ; where ε_k is a predefined error-term coefficient with mean 0, t_k = log|w_k|, and K is the number of historical samples;
step A3, through z_k = δ·w_k + ε_k, calculate the location parameter δ, where z_k = arctan(Im(φ̂(w_k))/Re(φ̂(w_k))) and the ε_k are predefined error-term coefficients with mean 0 that are identically distributed but independent;
step A4, substitute the feature index α, dispersion coefficient γ and location parameter δ obtained in steps A2 and A3 into φ(w) = exp{jδw − γ|w|^α} and apply a Fourier transform to obtain the probability density function f(x) as the fitted estimate of the high-speed update data P_n;
step A5, determine whether the index formed from A and the parameters estimated from the historical high-speed update data samples (equation image not reproduced in the text) exceeds the predefined threshold T_max; this serves as the criterion that the variation value estimated by the high-speed change data estimation and integration program unit exceeds the predefined threshold, where A is the data value to be estimated and integrated in real time and T_max is a predefined detection threshold for the evaluation rate.
This embodiment further improves timeliness by estimating and integrating the high-speed change data.
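Steps A1–A4 amount to a regression-based fit of a stable-law characteristic function φ(w) = exp{j·δ·w − γ·|w|^α}. The sketch below illustrates that fit with standard-library Python; the regression grid, the least-squares helper and the test data are assumptions for illustration, and the patent's A5 decision statistic (an unreproduced equation image) is not implemented.

```python
# Hedged sketch of steps A1-A4: fitting alpha, gamma, delta from the empirical
# characteristic function. Grid choice and helper functions are assumptions.
import cmath, math, random

def ecf(ws, xs):
    """Step A1: empirical characteristic function at each grid point w."""
    K = len(xs)
    return [sum(cmath.exp(1j * w * x) for x in xs) / K for w in ws]

def linfit(ts, ys):
    """Ordinary least squares y = mu + slope*t (helper, not in the patent)."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
            sum((t - tbar) ** 2 for t in ts)
    return ybar - slope * tbar, slope

def fit_stable(xs, ws):
    phis = ecf(ws, xs)
    # Step A2: regress y_k = log(-log|phi(w_k)|^2) on t_k = log|w_k|
    ys = [math.log(-math.log(abs(p) ** 2)) for p in phis]
    ts = [math.log(abs(w)) for w in ws]
    mu, alpha = linfit(ts, ys)
    gamma = math.exp(mu) / 2.0          # from mu = log(2*gamma), as stated
    # Step A3: regress z_k = arctan(Im/Re) = delta*w_k + error (through origin)
    zs = [math.atan2(p.imag, p.real) for p in phis]
    delta = sum(z * w for z, w in zip(zs, ws)) / sum(w * w for w in ws)
    return alpha, gamma, delta

random.seed(0)
xs = [random.gauss(1.0, 0.5) for _ in range(2000)]   # Gaussian data: alpha = 2
alpha, gamma, delta = fit_stable(xs, ws=[1.0, 1.5, 2.0, 2.5])
print(alpha, gamma, delta)   # alpha near 2, delta near the mean 1.0
```

On Gaussian test data the recovered characteristic index is close to 2 and the position parameter close to the mean, which is the expected behavior of this class of regression estimators.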
Preferably, the data acquisition management subsystem includes a data acquisition unit, a data transmission unit and a data management unit that are provided with the same level of redundancy, and the data acquisition management subsystem executes the following steps:
step B1, selecting a plurality of data acquisition units, a plurality of data transmission units and a plurality of data management units to form a real-time data acquisition management subsystem;
step B2, arbitrarily selecting a pair of adjacent front and back units, defining the front unit as the primary unit and the back unit as the secondary unit;
step B3, defining the global efficacy model as H = H1·H2·H3·H4·H5, wherein H1 is the effectiveness, H2 is the processing efficiency, H3 is the system load factor, H4 is the data processing accuracy, and H5 is the system failure rate;
step B4, H4 is predefined, H5 is the system failure rate calculated from historical records, and the remaining factors are calculated according to the following formulas:
[equation image: H1 in terms of the waiting time W and the total time T]
H2 = P·H21 + (1−P)·H21·H22, H3 = (N·H31 + M·H32)/(N+M);
wherein W = P·W1 + (1−P)·(W1+W2), T = P·T1 + (1−P)·(T1+T2), T is the total time of data in the primary unit and the secondary unit, and P is the probability of data entering the secondary unit from the primary unit; the primary unit processing efficiency H21, the secondary unit processing efficiency H22, the primary unit load factor H31 and the secondary unit load factor H32 are given by [equation images]; N is the number of primary units, M is the number of secondary units, R is an integer, PR is obtained from a predefined average data volume of the primary units [equation image], and QR is obtained from a predefined average data volume of the secondary units [equation image]; W1 = L1/λ is the average response time of the primary unit data, W2 = L2/(λ·H21·P) is the secondary unit response time; T1 = 1/μ1 is the average service time of the primary unit data, T2 = 1/μ2 is the average service time of the secondary unit data; μ1 and μ2 are parameters of exponential distributions, and λ is a predefined Poisson parameter;
and step B5, calculating the total efficacy value of the real-time data acquisition management subsystem and judging its magnitude; if the total efficacy value is greater than a predefined threshold, returning to step B1 to reselect and form a new real-time data acquisition management subsystem.
This embodiment evaluates the efficacy of the system in real time and, by adjusting the system combination, avoids faults and functional degradation.
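The multiplicative efficacy model H = H1·H2·H3·H4·H5 of steps B1–B5 can be sketched numerically. Because the per-factor formulas are equation images in the source, the sketch below assumes M/M/1-style queueing quantities (exponential service with rates μ1, μ2, Poisson arrivals with rate λ) purely for illustration; only the combinations H2 and H3 follow the formulas stated in step B4.

```python
# Hedged sketch of the global efficacy model in steps B1-B5. The sub-factor
# formulas H21/H22/H31/H32 are assumed, not the patent's (image) formulas.

def mm1_metrics(lam, mu):
    """Mean queue length L and response time W of an M/M/1 queue (assumed form)."""
    rho = lam / mu                # utilization, must be < 1 for stability
    L = rho / (1.0 - rho)         # mean number in system
    W = L / lam                   # Little's law: average response time
    return L, W

def global_efficacy(lam, mu1, mu2, P, H1, H4, H5, N, M):
    """H = H1*H2*H3*H4*H5 with assumed queueing sub-factors."""
    L1, W1 = mm1_metrics(lam, mu1)        # primary unit
    L2, W2 = mm1_metrics(P * lam, mu2)    # secondary unit sees fraction P
    H21 = (1.0 / mu1) / W1                # service time / response time (assumed)
    H22 = (1.0 / mu2) / W2
    H31, H32 = lam / mu1, P * lam / mu2   # load factors (assumed)
    H2 = P * H21 + (1 - P) * H21 * H22    # as stated in step B4
    H3 = (N * H31 + M * H32) / (N + M)
    return H1 * H2 * H3 * H4 * H5

H = global_efficacy(lam=2.0, mu1=5.0, mu2=4.0, P=0.3,
                    H1=0.95, H4=0.99, H5=0.98, N=3, M=2)
print(round(H, 3))
```

Step B5 would then compare this scalar against the predefined threshold to decide whether to re-form the subsystem from a different unit combination.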
Preferably, the digital twin model modeling and management subsystem comprises a cloud computing unit and an edge computing unit, wherein the edge computing unit is arranged at each interactive terminal, and the cloud computing unit is arranged in a cloud server connected with each interactive terminal in a cloud manner;
the edge computing unit provides a physical entity model, the cloud computing unit constructs a virtual entity model according to physical entity data, and constructs a bidirectional interactive correction channel between the physical entity model and the virtual entity model.
Preferably, the cloud computing unit executes the following steps to interactively fuse the virtual entity models of the interactive terminals:
step R1, selecting one virtual entity model as the target space model, defining the remaining virtual entity models as source space models, selecting one source space model, and executing step R2;
step R2, extracting a background region Ω from the source space model, and acquiring background contour key point information, which comprises the purity values and two-dimensional plane coordinate values of the contour key points in the background region Ω;
step R3, presetting thresholds for the R, G, B components of the pixel points and constructing the source space scene data: setting the R, G, B component values of pixel points not belonging to the background region Ω to 0; if an R, G, B component value of a pixel point belonging to the background region Ω is smaller than the preset threshold, setting that component value to the threshold, otherwise adopting the actual R, G, B component value of the pixel point;
step R4, receiving the source space scene data and the contour key point information, defining the pixel points whose R, G, B values are larger than the threshold as pixel points of the background region, and recovering the source space background region according to the threshold preset in step R3;
step R5, establishing a background purity recovery model, and calculating, based on the purity values of the neighborhood pixel points q of each pixel point p in the background region Ω, the purity value d′p of the pixel point at the same position in the region to be fused of the target space; the background purity recovery model is:
d′p = lim(t→∞) d′p(t);
[equation image: iterative update of d′p(t) from the neighborhood purity values d′q(t−1)]
[equation image: neighborhood weight wpq based on the gray-value difference gp − gq]
wherein gp and gq respectively represent the gray values of the pixel point p and the neighborhood point q in the target space region to be fused, σg is the standard deviation of the Gaussian function, t is a constant coefficient, the summation runs over the eight-neighborhood region of the pixel point p, and d′q(t−1) is the known purity value of the neighborhood point q of the pixel point p obtained from the pixel points in the background region Ω;
step R6, partitioning the background region Ω of the source space model based on the contour key points A and B, and arbitrarily determining a pixel point in the background region Ω; if the target space purity value corresponding to the position of the pixel point is outside the purity value range of the background block region to which the pixel point belongs, executing step R7, otherwise executing step R8;
the purity value range is DAB = {d | dmin ≤ d ≤ dmax}, with minimum purity value dmin = min(dA, dB) and maximum purity value dmax = max(dA, dB);
step R7, if the target space purity value corresponding to the pixel point position in the background region Ω is greater than the maximum value of the purity range of the background block region to which the pixel point belongs, making the pixel point position display the background, otherwise displaying the target space;
step R8, if a pixel point p ∈ Ω|(R1∪R2∪...∪Rqjs) exists, defining that the pixel point p is located within the background block region range of the background region Ω, establishing a geometric block structure model and solving the purity value of the pixel point p; otherwise, executing step R9; qjs is the number of background blocks;
step R9, establishing a neighborhood microstructure model, and solving to obtain the purity value of the pixel point p;
step R10, determining the display result according to the purity relation of the pixel points corresponding to the source space and the target space, obtaining a geometrically consistent space object fusion effect;
and step R11, reselecting a source space model, updating and defining the fused space model of step R1 as the target space model, and returning to execute step R1 until all source space models have been traversed.
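The iterative purity recovery of step R5 can be sketched as follows. Since the patent's update and weight formulas are equation images, a Gaussian gray-value weighting over the eight-neighborhood is assumed here for illustration; the function and parameter names are hypothetical.

```python
# Hedged sketch of the background purity recovery model of step R5, under an
# assumed Gaussian weight w_pq = exp(-(g_p - g_q)^2 / (2*sigma_g^2)).
import math

def recover_purity(gray, purity, sigma_g=10.0, iters=50):
    """Iteratively fill unknown purity values (None) from known neighbors.

    gray   : 2-D list of gray values g_p
    purity : 2-D list of purity values; None marks pixels to recover
    """
    H, W = len(gray), len(gray[0])
    d = [[p if p is not None else 0.0 for p in row] for row in purity]
    unknown = [(i, j) for i in range(H) for j in range(W)
               if purity[i][j] is None]
    for _ in range(iters):                        # d'_p = lim_{t->inf} d'_p(t)
        for i, j in unknown:
            num = den = 0.0
            for di in (-1, 0, 1):                 # eight-neighborhood of p
                for dj in (-1, 0, 1):
                    if di == dj == 0:
                        continue
                    qi, qj = i + di, j + dj
                    if 0 <= qi < H and 0 <= qj < W:
                        # assumed weight from the gray-value difference
                        w = math.exp(-(gray[i][j] - gray[qi][qj]) ** 2
                                     / (2 * sigma_g ** 2))
                        num += w * d[qi][qj]      # uses d'_q(t-1)
                        den += w
            d[i][j] = num / den if den else d[i][j]
    return d

gray = [[10, 10, 10], [10, 12, 10], [10, 10, 10]]
purity = [[1.0, 1.0, 1.0], [1.0, None, 1.0], [1.0, 1.0, 1.0]]
print(recover_purity(gray, purity)[1][1])   # converges toward the neighbors' value
```

The fixed point of this iteration plays the role of d′p in steps R6–R10, where it is compared against the block purity range DAB.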
Specifically, the geometric block structure model includes:
step J1, defining the two-dimensional plane area range corresponding to the background block region composed of the contour key points A and B as RAB = {xs, ys, w, h | w = xe − xs, h = ye − ys}, with starting abscissa xs = min(xA, xB), starting ordinate ys = min(yA, yB), ending abscissa xe = max(xA, xB) and ending ordinate ye = max(yA, yB) of the two-dimensional plane area, and defining the pixel point corresponding to the minimum purity value of the region as pmin;
step J2, calculating the purity value d′p of the pixel point p in the background block region as:
[equation image: formula for d′p]
Specifically, the neighborhood microstructure model is consistent with the background purity recovery model, changing only the factor wpq:
[equation image: modified weight wpq]
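The block geometry of step J1 can be sketched directly from its set definition. Step J2's purity formula is an equation image in the source, so only the construction of RAB and the membership test used by step R8 are illustrated; the function names are hypothetical.

```python
# Hedged sketch of step J1: the bounding rectangle R_AB spanned by two contour
# key points A and B, plus the membership test used when picking the block model.

def block_region(A, B):
    """R_AB = {x_s, y_s, w, h | w = x_e - x_s, h = y_e - y_s}."""
    (xa, ya), (xb, yb) = A, B
    xs, ys = min(xa, xb), min(ya, yb)      # starting corner
    xe, ye = max(xa, xb), max(ya, yb)      # ending corner
    return {"xs": xs, "ys": ys, "w": xe - xs, "h": ye - ys}

def contains(region, p):
    """Membership test p in R_AB, as needed by step R8."""
    return (region["xs"] <= p[0] <= region["xs"] + region["w"] and
            region["ys"] <= p[1] <= region["ys"] + region["h"])

r = block_region(A=(4, 9), B=(1, 2))
print(r)                      # {'xs': 1, 'ys': 2, 'w': 3, 'h': 7}
print(contains(r, (2, 5)))    # True
```

A pixel inside some RAB gets its purity from the geometric block model; one outside every block falls through to the neighborhood microstructure model of step R9.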
the embodiment also provides an interaction method for fusing a digital twin and virtual reality, which comprises the following steps:
step one, the data acquisition management subsystem is used for acquiring, transmitting and managing the data of the interactive terminal, and executes a data management program comprising the following steps:
step S1, calculating the change rate of each data item, and distinguishing high-speed update data Pn from low-speed update data Pr according to a predefined change-rate threshold;
step S2, for the high-speed update data Pn, calling the high-speed change data estimation and integration program unit to estimate and integrate the data;
step S3, for the low-speed update data Pr, calling the slow data processing and integration program unit to calculate and integrate, the slow data processing and integration program unit directly calculating the change value of the low-speed data Pr;
step S4, if the change value estimated by the high-speed change data estimation and integration program unit exceeds a predefined threshold, or the change value calculated by the slow data processing and integration program unit exceeds a predefined threshold, updating and transmitting the corresponding data to the digital twin model modeling and management subsystem through the data transmission unit;
step two, the digital twin model modeling and management subsystem is used for performing data twin modeling according to the data of the data acquisition management subsystem, and mapping the real physical scene sensed by the interactive terminal into a virtual space scene;
step three, fusing the virtual space scenes sensed by the interactive terminals;
step four, the interactive function realization subsystem is used for collecting and analyzing the interaction process, returning to execute step two in real time, analyzing the effect of the interaction process, and feeding back and controlling the interaction data or strategies;
and step five, ending the interaction.
Preferably, the model construction required by the virtual reality interaction is realized by adopting an efficient and accurate twin system fusion algorithm. The third step comprises:
step R1, selecting one virtual entity model as the target space model, defining the remaining virtual entity models as source space models, selecting one source space model, and executing step R2;
step R2, extracting a background region Ω from the source space model, and acquiring background contour key point information, which comprises the purity values and two-dimensional plane coordinate values of the contour key points in the background region Ω;
step R3, presetting thresholds for the R, G, B components of the pixel points and constructing the source space scene data: setting the R, G, B component values of pixel points not belonging to the background region Ω to 0; if an R, G, B component value of a pixel point belonging to the background region Ω is smaller than the preset threshold, setting that component value to the threshold, otherwise adopting the actual R, G, B component value of the pixel point;
step R4, receiving the source space scene data and the contour key point information, defining the pixel points whose R, G, B values are larger than the threshold as pixel points of the background region, and recovering the source space background region according to the threshold preset in step R3;
step R5, establishing a background purity recovery model, and calculating, based on the purity values of the neighborhood pixel points q of each pixel point p in the background region Ω, the purity value d′p of the pixel point at the same position in the region to be fused of the target space; the background purity recovery model is:
d′p = lim(t→∞) d′p(t);
[equation image: iterative update of d′p(t) from the neighborhood purity values d′q(t−1)]
[equation image: neighborhood weight wpq based on the gray-value difference gp − gq]
wherein gp and gq respectively represent the gray values of the pixel point p and the neighborhood point q in the target space region to be fused, σg is the standard deviation of the Gaussian function, t is a constant coefficient, the summation runs over the eight-neighborhood region of the pixel point p, and d′q(t−1) is the known purity value of the neighborhood point q of the pixel point p obtained from the pixel points in the background region Ω;
step R6, partitioning the background region Ω of the source space model based on the contour key points A and B, and arbitrarily determining a pixel point in the background region Ω; if the target space purity value corresponding to the position of the pixel point is outside the purity value range of the background block region to which the pixel point belongs, executing step R7, otherwise executing step R8;
the purity value range is DAB = {d | dmin ≤ d ≤ dmax}, with minimum purity value dmin = min(dA, dB) and maximum purity value dmax = max(dA, dB);
step R7, if the target space purity value corresponding to the pixel point position in the background region Ω is greater than the maximum value of the purity range of the background block region to which the pixel point belongs, making the pixel point position display the background, otherwise displaying the target space;
step R8, if a pixel point p ∈ Ω|(R1∪R2∪...∪Rqjs) exists, defining that the pixel point p is located within the background block region range of the background region Ω, establishing a geometric block structure model and solving the purity value of the pixel point p; otherwise, executing step R9; qjs is the number of background blocks;
step R9, establishing a neighborhood microstructure model, and solving to obtain the purity value of the pixel point p;
step R10, determining the display result according to the purity relation of the pixel points corresponding to the source space and the target space, obtaining a geometrically consistent space object fusion effect;
and step R11, reselecting a source space model, updating and defining the fused space model of step R1 as the target space model, and returning to execute step R1 until all source space models have been traversed.
Preferably, the high-speed change data estimation and integration program unit executes the following steps to perform data estimation and integration:
step A1, defining the sample characteristic function
φ̂(w) = (1/K)·Σ exp(j·w·xi), summed over i = 1, ..., K,
wherein {x1, x2, ..., xK} are K independent observations among the historical high-speed update data samples, j and w are predefined parameters, and w takes values in the real number set {w1, w2, ..., wk};
step A2, calculating the characteristic index α and the dispersion coefficient γ through yk = μ + α·tk + εk, μ = log(2γ); wherein
yk = log(−log|φ̂(wk)|²),
εk is a predefined error term with mean value 0, tk = log|wk|, and K is the number of historical samples;
step A3, calculating the position parameter δ through zk = δ·wk + εk, wherein zk = arctan(Im(φ̂(wk))/Re(φ̂(wk))), and the εk are predefined error terms with mean value 0, identically distributed but mutually independent;
step A4, substituting the characteristic index α, the dispersion coefficient γ and the position parameter δ obtained in steps A2 and A3 into φ(w) = exp{j·δ·w − γ·|w|^α} and performing a Fourier transform to obtain the probability density function f(x) as the fitting estimation of the high-speed update data Pn;
step A5, determining whether
[equation image: decision statistic formed from A and the fitted parameters α, γ, δ]
exceeds the predefined threshold Tmax, as the index of whether the change value estimated by the high-speed change data estimation and integration program unit exceeds the predefined threshold; wherein A is the data value to be estimated and integrated in real time, the fitted parameters are estimated from the historical high-speed update data samples, and Tmax is a predefined detection threshold for the evaluation rate.
In this embodiment, data are effectively distinguished and processed during the digital twin modeling, calculation and simulation process. Distinguishing the processing decision and method according to the change rate improves real-time performance and efficiency: the real-time change rate of each data item is calculated, and high-speed update data Pn and low-speed update data Pr are distinguished according to a predefined change-rate threshold; for the high-speed update data Pn, the high-speed change data estimation and fusion program is called to estimate and fuse the data; for the low-speed update data Pr, the slow data processing and fusion program is called directly to calculate and fuse, and the change value of the low-speed data Pr is calculated directly, which improves efficiency and reduces system overhead.
Although illustrative embodiments of the present invention have been described above to enable those skilled in the art to understand the invention, the invention is not limited to the scope of those embodiments. To those skilled in the art, any change made within the spirit and scope of the invention as defined and determined by the appended claims falls within the protection of the invention.

Claims (10)

1. An interactive system fusing digital twin and virtual reality comprises at least 2 interactive terminals which interact with each other, and is characterized in that: the interactive system fusing the digital twins and the virtual reality comprises a data acquisition management subsystem, a digital twins model modeling and management subsystem and an interactive function realization subsystem;
the data acquisition management subsystem is used for acquiring, transmitting and managing data of the interactive terminal and comprises a data acquisition unit, a data transmission unit and a data management unit;
the digital twin model modeling and management subsystem is used for carrying out data twin modeling according to the data of the data acquisition management subsystem and mapping a real physical scene sensed by the interactive terminal into a virtual space scene;
the interactive function realization subsystem is used for analyzing the interactive process and feeding back and controlling interactive data or strategies;
the data management unit executes a data management program including the steps of:
step S1, calculating the change rate of each data item, and distinguishing high-speed update data Pn from low-speed update data Pr according to a predefined change-rate threshold;
step S2, for the high-speed update data Pn, calling the high-speed change data estimation and integration program unit to estimate and integrate the data;
step S3, for the low-speed update data Pr, calling the slow data processing and integration program unit to calculate and integrate, the slow data processing and integration program unit directly calculating the change value of the low-speed data Pr;
and step S4, if the change value estimated by the high-speed change data estimation and integration program unit exceeds a predefined threshold, or the change value calculated by the slow data processing and integration program unit exceeds a predefined threshold, updating and transmitting the corresponding data to the digital twin model modeling and management subsystem through the data transmission unit.
2. The system of claim 1, wherein the system comprises: the high-speed change data estimation and integration program unit executes the following steps to estimate and integrate data:
step A1, defining the sample characteristic function
φ̂(w) = (1/K)·Σ exp(j·w·xi), summed over i = 1, ..., K,
wherein {x1, x2, ..., xK} are K independent observations among the historical high-speed update data samples, j and w are predefined parameters, and w takes values in the real number set {w1, w2, ..., wk};
step A2, calculating the characteristic index α and the dispersion coefficient γ through yk = μ + α·tk + εk, μ = log(2γ); wherein
yk = log(−log|φ̂(wk)|²),
εk is a predefined error term with mean value 0, tk = log|wk|, and K is the number of historical samples;
step A3, calculating the position parameter δ through zk = δ·wk + εk, wherein zk = arctan(Im(φ̂(wk))/Re(φ̂(wk))), and the εk are predefined error terms with mean value 0, identically distributed but mutually independent;
step A4, substituting the characteristic index α, the dispersion coefficient γ and the position parameter δ obtained in steps A2 and A3 into φ(w) = exp{j·δ·w − γ·|w|^α} and performing a Fourier transform to obtain the probability density function f(x) as the fitting estimation of the high-speed update data Pn;
step A5, determining whether
[equation image: decision statistic formed from A and the fitted parameters α, γ, δ]
exceeds the predefined threshold Tmax, as the index of whether the change value estimated by the high-speed change data estimation and integration program unit exceeds the predefined threshold; wherein A is the data value to be estimated and integrated in real time, the fitted parameters are estimated from the historical high-speed update data samples, and Tmax is a predefined detection threshold for the evaluation rate.
3. The system of claim 1, wherein the system comprises: the data acquisition management subsystem comprises a data acquisition unit, a data transmission unit and a data management unit which are arranged in a peer-level redundancy mode, and the data acquisition management subsystem executes the following steps:
step B1, selecting a plurality of data acquisition units, a plurality of data transmission units and a plurality of data management units to form a real-time data acquisition management subsystem;
step B2, arbitrarily selecting a pair of adjacent front and back units, defining the front unit as the primary unit and the back unit as the secondary unit;
step B3, defining the global efficacy model as H = H1·H2·H3·H4·H5, wherein H1 is the effectiveness, H2 is the processing efficiency, H3 is the system load factor, H4 is the data processing accuracy, and H5 is the system failure rate;
step B4, H4 is predefined, H5 is the system failure rate calculated from historical records, and the remaining factors are calculated according to the following formulas:
[equation image: H1 in terms of the waiting time W and the total time T]
H2 = P·H21 + (1−P)·H21·H22, H3 = (N·H31 + M·H32)/(N+M);
wherein W = P·W1 + (1−P)·(W1+W2), T = P·T1 + (1−P)·(T1+T2), T is the total time of data in the primary unit and the secondary unit, and P is the probability of data entering the secondary unit from the primary unit; the primary unit processing efficiency H21, the secondary unit processing efficiency H22, the primary unit load factor H31 and the secondary unit load factor H32 are given by [equation images]; N is the number of primary units, M is the number of secondary units, R is an integer, PR is obtained from a predefined average data volume of the primary units [equation image], and QR is obtained from a predefined average data volume of the secondary units [equation image]; W1 = L1/λ is the average response time of the primary unit data, W2 = L2/(λ·H21·P) is the secondary unit response time; T1 = 1/μ1 is the average service time of the primary unit data, T2 = 1/μ2 is the average service time of the secondary unit data; μ1 and μ2 are parameters of exponential distributions, and λ is a predefined Poisson parameter;
and step B5, calculating the total efficacy value of the real-time data acquisition management subsystem and judging its magnitude; if the total efficacy value is greater than a predefined threshold, returning to step B1 to reselect and form a new real-time data acquisition management subsystem.
4. The interactive system for fusing digital twins and virtual reality as claimed in any one of claims 1 to 3, wherein: the digital twin model modeling and management subsystem comprises a cloud computing unit and an edge computing unit, wherein the edge computing unit is arranged at each interactive terminal, and the cloud computing unit is arranged in a cloud server connected with each interactive terminal in a cloud manner;
the edge computing unit provides a physical entity model, the cloud computing unit constructs a virtual entity model according to physical entity data, and constructs a bidirectional interactive correction channel between the physical entity model and the virtual entity model.
5. The system of claim 4, wherein: the cloud computing unit executes the following steps to interactively fuse the virtual entity models of the interactive terminals:
step R1, selecting one virtual entity model as the target space model, defining the remaining virtual entity models as source space models, selecting one source space model, and executing step R2;
step R2, extracting a background region Ω from the source space model, and acquiring background contour key point information, which comprises the purity values and two-dimensional plane coordinate values of the contour key points in the background region Ω;
step R3, presetting thresholds for the R, G, B components of the pixel points and constructing the source space scene data: setting the R, G, B component values of pixel points not belonging to the background region Ω to 0; if an R, G, B component value of a pixel point belonging to the background region Ω is smaller than the preset threshold, setting that component value to the threshold, otherwise adopting the actual R, G, B component value of the pixel point;
step R4, receiving the source space scene data and the contour key point information, defining the pixel points whose R, G, B values are larger than the threshold as pixel points of the background region, and recovering the source space background region according to the threshold preset in step R3;
step R5, establishing a background purity recovery model, and calculating, based on the purity values of the neighborhood pixel points q of each pixel point p in the background region Ω, the purity value d′p of the pixel point at the same position in the region to be fused of the target space; the background purity recovery model is:
d′p = lim(t→∞) d′p(t);
[equation image: iterative update of d′p(t) from the neighborhood purity values d′q(t−1)]
[equation image: neighborhood weight wpq based on the gray-value difference gp − gq]
wherein gp and gq respectively represent the gray values of the pixel point p and the neighborhood point q in the target space region to be fused, σg is the standard deviation of the Gaussian function, t is a constant coefficient, the summation runs over the eight-neighborhood region of the pixel point p, and d′q(t−1) is the known purity value of the neighborhood point q of the pixel point p obtained from the pixel points in the background region Ω;
step R6, partitioning the background region Ω of the source space model based on the contour key points A and B, and arbitrarily determining a pixel point in the background region Ω; if the target space purity value corresponding to the position of the pixel point is outside the purity value range of the background block region to which the pixel point belongs, executing step R7, otherwise executing step R8;
the purity value range is DAB = {d | dmin ≤ d ≤ dmax}, with minimum purity value dmin = min(dA, dB) and maximum purity value dmax = max(dA, dB);
step R7, if the target space purity value corresponding to the pixel point position in the background region Ω is greater than the maximum value of the purity range of the background block region to which the pixel point belongs, making the pixel point position display the background, otherwise displaying the target space;
step R8, if a pixel point p ∈ Ω|(R1∪R2∪...∪Rqjs) exists, defining that the pixel point p is located within the background block region range of the background region Ω, establishing a geometric block structure model and solving the purity value of the pixel point p; otherwise, executing step R9; qjs is the number of background blocks;
step R9, establishing a neighborhood microstructure model, and solving to obtain the purity value of the pixel point p;
step R10, determining the display result according to the purity relation of the pixel points corresponding to the source space and the target space, obtaining a geometrically consistent space object fusion effect;
and step R11, reselecting a source space model, updating and defining the fused space model of step R1 as the target space model, and returning to execute step R1 until all source space models have been traversed.
6. The system of claim 5, wherein: the geometric block structural model comprises:
step J1, defining the two-dimensional plane area range corresponding to the background block region composed of the contour key points A and B as RAB = {xs, ys, w, h | w = xe − xs, h = ye − ys}, with starting abscissa xs = min(xA, xB), starting ordinate ys = min(yA, yB), ending abscissa xe = max(xA, xB) and ending ordinate ye = max(yA, yB) of the two-dimensional plane area, and defining the pixel point corresponding to the minimum purity value of the region as pmin;
step J2, calculating the purity value d′p of the pixel point p in the background block region as:
[equation image: formula for d′p]
7. The system of claim 5, wherein: the neighborhood microstructure model is consistent with the background purity recovery model, changing only the factor wpq:
[equation image: modified weight wpq]
8. an interaction method fusing digital twins and virtual reality is characterized in that: the interactive method for fusing the digital twin and the virtual reality is based on the interactive system for fusing the digital twin and the virtual reality as claimed in any one of claims 1 to 7, and the method comprises the following steps:
the data acquisition management subsystem is used for acquiring, transmitting and managing data of the interactive terminal and executing a data management program comprising the following steps:
step S1, calculating the change rate of each item of data, and distinguishing the high-speed update data P_n from the low-speed update data P_r according to a predefined change-rate threshold;
step S2, for the high-speed update data P_n, calling the high-speed change data estimation and integration program unit to perform data estimation and integration;
step S3, for the low-speed update data P_r, calling the slow data processing and integration program unit to calculate and integrate; the slow data processing and integration program unit directly calculates the variation value of the low-speed data P_r;
step S4, if the variation value estimated by the high-speed change data estimation and integration program unit, or the variation value calculated by the slow data processing and integration program unit, exceeds its predefined threshold, updating the corresponding data and transmitting it to the digital twin model modeling and management subsystem through the data transmission unit;
step two, the digital twin model modeling and management subsystem performs digital twin modeling according to the data of the data acquisition management subsystem and maps the real physical scene sensed by the interactive terminal into a virtual space scene;
step three, fusing the virtual space scenes sensed by the interactive terminals;
step four, the interactive function realization subsystem collects and analyzes the interaction process, returns to step two in real time, analyzes the effect of the interaction process, and feeds back and controls the interaction data or strategy;
and step five, finishing the interaction.
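Steps S1-S4 of the data management program above might be sketched as follows. This is a simplified illustration: the threshold values, the dictionary layout, and the way both program units report a variation value are all assumptions, and the stable-distribution estimator of claim 10 is not reproduced here:

```python
RATE_THRESHOLD = 0.5    # predefined change-rate threshold (assumed value)
DELTA_THRESHOLD = 1.0   # predefined variation threshold (assumed value)

def classify(items):
    """Step S1: split items into high-speed (P_n) and low-speed (P_r) data."""
    p_n = {k: v for k, v in items.items() if v["rate"] > RATE_THRESHOLD}
    p_r = {k: v for k, v in items.items() if v["rate"] <= RATE_THRESHOLD}
    return p_n, p_r

def manage(items):
    """Steps S2-S4: keep only data whose (estimated or directly computed)
    variation exceeds the predefined threshold; these would be transmitted."""
    p_n, p_r = classify(items)
    updates = {}
    for k, v in p_n.items():        # step S2: estimated variation value
        if abs(v["estimated_delta"]) > DELTA_THRESHOLD:
            updates[k] = v["value"]
    for k, v in p_r.items():        # step S3: directly computed variation value
        if abs(v["delta"]) > DELTA_THRESHOLD:
            updates[k] = v["value"]
    return updates                  # step S4: data to transmit onward

items = {
    "temperature": {"rate": 0.9, "estimated_delta": 2.3, "value": 21.5},
    "door_state":  {"rate": 0.1, "delta": 0.0, "value": 1.0},
}
```

With this sample input, only "temperature" changes fast enough and strongly enough to be transmitted.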
9. The method of claim 8, wherein step three comprises:
step R1, selecting one virtual entity model as the target space model, defining the remaining virtual entity models as source space models, selecting one source space model, and executing step R2;
step R2, extracting a background region Ω from the source space model, and obtaining background contour key point information, which comprises the purity values and two-dimensional plane coordinate values of the contour key points in the background region Ω;
step R3, presetting a threshold for the R, G, B components of pixel points, and constructing the source space scene data: setting the R, G, B values of pixel points not belonging to the background region Ω to 0; if an R, G, B component value of a pixel point belonging to the background region Ω is smaller than the preset threshold, setting that component value to the threshold, otherwise keeping the actual R, G, B component value of the pixel point;
step R4, receiving the source space scene data and the contour key point information, defining the pixel points whose R, G, B values reach the threshold as pixel points of the background region, and recovering the source space background region according to the threshold preset in step R3;
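A minimal NumPy sketch of the thresholding in steps R3-R4 (the array layout, the function names, and the use of >= to recover the clamped background are assumptions):

```python
import numpy as np

def build_source_scene(img, mask, thresh):
    """Step R3: pixels outside the background region get R,G,B = 0;
    background components below the preset threshold are raised to it.

    img: H x W x 3 array of R,G,B values; mask: H x W boolean region Omega.
    """
    out = np.zeros_like(img)
    out[mask] = np.maximum(img[mask], thresh)
    return out

def recover_background(scene, thresh):
    """Step R4 (receiving side): a pixel belongs to the background region
    when all of its R,G,B components reach the preset threshold."""
    return (scene >= thresh).all(axis=-1)
```

Because step R3 clamps every background component up to the threshold and zeroes everything else, the receiver can recover the background mask from the scene data alone.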
step R5, establishing a background purity recovery model, and calculating, based on the purity values of the neighborhood points q of the pixel point p in the background region Ω, the purity value d′_p of the pixel point in the target space region to be fused that has the same position as the pixel point p in the background region Ω; the background purity recovery model is:
d′_p = lim_{t→∞} d′_p(t);
Figure FDA0003498907840000091
Figure FDA0003498907840000092
wherein g_p and g_q respectively represent the gray values of the pixel point p and the neighborhood point q in the target space region to be fused, σ_g is the standard deviation of the Gaussian function, t is a constant coefficient,
Figure FDA0003498907840000093
is the eight-neighborhood of the pixel point p, and d′_q(t−1) is the known purity value of the neighborhood point q of the pixel point p among the pixel points of the background region Ω;
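The iterative structure of the background purity recovery model (its update rule and the weight w_pq appear only as formula images in the publication) might look as follows; the Gaussian gray-similarity weight and the normalized averaging over the eight-neighborhood are assumptions made for illustration:

```python
import math

def recover_purity(gray, known, sigma_g=10.0, iters=50):
    """Iterate d'_p(t) toward d'_p = lim d'_p(t) for pixels whose purity
    is unknown, from the known purities of eight-neighborhood points.

    gray:  dict (x, y) -> gray value g_p in the target region to be fused
    known: dict (x, y) -> known purity d'_q (pixels of the background region)
    """
    d = dict(known)
    unknown = [p for p in gray if p not in known]
    for p in unknown:
        d[p] = 0.0                                  # d'_p(0)
    for _ in range(iters):
        prev = dict(d)                              # holds d'_q(t-1)
        for (x, y) in unknown:
            acc = norm = 0.0
            for q in [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]:       # eight-neighborhood of p
                if q in prev:
                    # assumed Gaussian gray-similarity weight w_pq
                    w = math.exp(-(gray[(x, y)] - gray.get(q, gray[(x, y)])) ** 2
                                 / (2 * sigma_g ** 2))
                    acc += w * prev[q]
                    norm += w
            if norm:
                d[(x, y)] = acc / norm
    return d
```

In a region of uniform gray value, every weight is 1 and an unknown pixel converges to the average purity of its known neighbors.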
step R6, partitioning the background region Ω of the source space model based on the contour key points A and B; arbitrarily taking a pixel point in the background region Ω, if the target space purity value corresponding to the position of the pixel point is outside the purity value range of the background block region to which the pixel point belongs, executing step R7, otherwise executing step R8;
the purity value range being D_AB = {d | d_min ≤ d ≤ d_max}, with minimum purity value d_min = min(d_A, d_B) and maximum purity value d_max = max(d_A, d_B);
step R7, if the target space purity value corresponding to the position of the pixel point in the background region Ω is larger than the maximum of the purity range of the background block region to which the pixel point belongs, making that pixel position display the background, otherwise displaying the target space;
step R8, if the pixel point p ∈ Ω ∩ (R_1 ∪ R_2 ∪ … ∪ R_qjs), where qjs is the number of background blocks, the pixel point p is defined to lie within a background block region of the background region Ω; establishing the geometric block structure model and solving for the purity value of the pixel point p; if the pixel point p is not within any background block region, executing step R9;
step R9, establishing the neighborhood microstructure model, and solving for the purity value of the pixel point p;
step R10, determining the display result according to the purity relationship between the pixel points corresponding to the source space and the target space, and obtaining a geometrically consistent space object fusion effect;
step R11, reselecting a source space model, defining the fused space model as the updated target space model of step R1, and returning to step R1 until all the source space models have been traversed.
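The per-pixel decision of steps R6-R7 reduces to a range test; a sketch (function and variable names assumed):

```python
def display_choice(d_target, d_A, d_B):
    """Steps R6-R7: given the target space purity at a pixel position and
    the purity values d_A, d_B of the block's contour key points, decide
    what that position displays."""
    d_min, d_max = min(d_A, d_B), max(d_A, d_B)  # purity value range D_AB
    if d_min <= d_target <= d_max:
        return "blend"      # inside the range: handled by steps R8-R10
    return "background" if d_target > d_max else "target"
```

Only pixels whose target purity falls inside D_AB need the block-structure or neighborhood models; the others are resolved immediately.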
10. The method of claim 8, wherein the high-speed change data estimation and integration program unit performs data estimation and integration by executing the following steps:
step A1, defining
Figure FDA0003498907840000101
wherein {x_1, x_2, …, x_K} are K independent sample observations among the historical high-speed update data samples, K and j are predefined parameters, and {w_1, w_2, …, w_K} is a set of real numbers;
step A2, calculating the characteristic exponent α and the dispersion coefficient γ through y_k = μ + α·t_k + ε_k, where μ = log(2γ); wherein
Figure FDA0003498907840000102
ε_k is a predefined error-term coefficient with mean 0, t_k = log|w_k|, and K is the number of historical samples;
step A3, calculating the location parameter δ through z_k = δ·w_k + ε_k, where z_k = arctan(Im(w_k)/Re(w_k)), and the ε_k are predefined error-term coefficients with mean 0, identically distributed and mutually independent;
step A4, substituting the characteristic exponent α and the dispersion coefficient γ obtained in step A2 and the location parameter δ obtained in step A3 into φ(w) = exp{jδw − γ|w|^α}, and performing a Fourier transform to obtain the probability density function f(x), so as to update the fitted estimate of the high-speed update data P_n;
step A5, determining
Figure FDA0003498907840000111
as the index of whether the variation value estimated by the high-speed change data estimation and integration program unit exceeds the predefined threshold T_max; wherein A is the data value to be estimated and integrated in real time,
Figure FDA0003498907840000112
is a parameter estimated from the historical high-speed update data samples, and T_max is the predefined detection threshold for the estimated variation.
CN202210122074.7A 2022-02-09 2022-02-09 Interaction system and method for fusing digital twin and virtual reality Pending CN114740969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210122074.7A CN114740969A (en) 2022-02-09 2022-02-09 Interaction system and method for fusing digital twin and virtual reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210122074.7A CN114740969A (en) 2022-02-09 2022-02-09 Interaction system and method for fusing digital twin and virtual reality

Publications (1)

Publication Number Publication Date
CN114740969A true CN114740969A (en) 2022-07-12

Family

ID=82276034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210122074.7A Pending CN114740969A (en) 2022-02-09 2022-02-09 Interaction system and method for fusing digital twin and virtual reality

Country Status (1)

Country Link
CN (1) CN114740969A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116957361A (en) * 2023-07-27 2023-10-27 中国舰船研究设计中心 Ship task system health state detection method based on virtual-real combination
CN116957361B (en) * 2023-07-27 2024-02-06 中国舰船研究设计中心 Ship task system health state detection method based on virtual-real combination

Similar Documents

Publication Publication Date Title
Zhou et al. Computer vision enabled building digital twin using building information model
CN107274472A (en) A kind of method and apparatus of raising VR play frame rate
CN114740969A (en) Interaction system and method for fusing digital twin and virtual reality
CN114861788A (en) Load abnormity detection method and system based on DBSCAN clustering
CN113012122A (en) Category-level 6D pose and size estimation method and device
CN114266473A (en) Supply chain demand prediction system and method based on data analysis
CN117078048A (en) Digital twinning-based intelligent city resource management method and system
CN109541336B (en) Multidimensional signal detection method for non-invasive load monitoring
CN116522096B (en) Three-dimensional digital twin content intelligent manufacturing method based on motion capture
CN112507789A (en) Construction site safety behavior monitoring working method under block chain network state
CN107452003A (en) A kind of method and device of the image segmentation containing depth information
CN110298490A (en) Time series Combination power load forecasting method and computer readable storage medium based on multiple regression
CN106816871B (en) State similarity analysis method for power system
CN113658274B (en) Automatic individual spacing calculation method for primate population behavior analysis
CN109101778A (en) Wiener process parameter estimation method based on performance degradation data and life data fusion
Casaer et al. Analysing space use patterns by Thiessen polygon and triangulated irregular network interpolation: a non-parametric method for processing telemetric animal fixes
CN110763223B (en) Sliding window based indoor three-dimensional grid map feature point extraction method
CN115114985A (en) Sensor system distributed fusion method based on set theory
CN101339243B (en) Ground cluster object tracking system
CN113312587A (en) Sensor acquisition data missing value processing method based on ARIMA prediction and regression prediction
CN109919828B (en) Method for judging difference between 3D models
CN113312364A (en) Smart cloud service updating method based on block chain and block chain service system
CN110855467A (en) Network comprehensive situation prediction method based on computer vision technology
US20210086352A1 (en) Method, apparatus and system for controlling a robot, and storage medium
CN117094472B (en) BIM-based whole-process engineering consultation integrated management method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination