CN114048537A - Indoor environment state prediction method based on BIM and cross sample learning - Google Patents

Indoor environment state prediction method based on BIM and cross sample learning

Info

Publication number
CN114048537A
CN114048537A CN202111386090.9A
Authority
CN
China
Prior art keywords
node
weight
state
nodes
bim
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111386090.9A
Other languages
Chinese (zh)
Other versions
CN114048537B (en)
Inventor
周小平
王佳
陆一昕
郭强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bim Winner Beijing Technology Co ltd
Original Assignee
Bim Winner Shanghai Technology Co ltd
Foshan Yingjia Smart Space Technology Co ltd
Jiaxing Wuzhen Yingjia Qianzhen Technology Co ltd
Shenzhen Bim Winner Technology Co ltd
Shenzhen Qianhai Yingjia Data Service Co ltd
Yingjia Internet Beijing Smart Technology Co ltd
Bim Winner Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bim Winner Shanghai Technology Co ltd, Foshan Yingjia Smart Space Technology Co ltd, Jiaxing Wuzhen Yingjia Qianzhen Technology Co ltd, Shenzhen Bim Winner Technology Co ltd, Shenzhen Qianhai Yingjia Data Service Co ltd, Yingjia Internet Beijing Smart Technology Co ltd, Bim Winner Beijing Technology Co ltd filed Critical Bim Winner Shanghai Technology Co ltd
Priority to CN202111386090.9A priority Critical patent/CN114048537B/en
Publication of CN114048537A publication Critical patent/CN114048537A/en
Application granted granted Critical
Publication of CN114048537B publication Critical patent/CN114048537B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/10: Geometric CAD
    • G06F30/13: Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/20: Design optimisation, verification or simulation
    • G06F30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00: Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/08: Thermal analysis or thermal optimisation

Abstract

The invention provides an indoor environment state prediction method based on BIM and cross sample learning. A spatial graph model is established by extracting the spatial geometric data of a BIM model, and a cross sample learning scheme is designed that adaptively fuses the node weights (cue factors) and edge weights of known points on the spatial graph model so that they can be associated with the spatial characteristics of the indoor environment state. An ML-IDW algorithm predicts the state of the whole indoor space, simulating the distribution of humidity values over the entire interior of a building, so that full-space state prediction is achieved with a small number of collection nodes. A training model is constructed from measured data, and its accuracy is verified by comparing predictions against actual data. The model can predict the humidity distribution of the whole indoor space in an experimental scene, with short training time and real-time performance.

Description

Indoor environment state prediction method based on BIM and cross sample learning
Technical Field
The invention relates to the field of indoor environment state prediction of BIM and cross sample learning, in particular to an indoor environment state prediction method based on BIM and cross sample learning.
Background
Buildings have become the main space for human activity, and indoor environment problems have gradually become a research hotspot attracting a great deal of study. Personalized in-building services are an important research topic and trend for intelligent buildings and smart cities, and one key prerequisite for such services is holistic perception of the building interior.
Taking indoor humidity as an example: humidity is not only closely related to many health problems but also strongly affects building energy consumption and durability. A large body of research shows that a low-humidity environment causes sensory irritation such as dry eyes, dry skin and a dry upper respiratory tract; to protect human health, the relative humidity inside a building must be kept above roughly 10%-30%. A high-humidity environment, on the other hand, encourages mold growth, harms upper respiratory health and increases the risk of mite allergy; building dampness and mold are associated with a 30%-50% increase in a variety of respiratory and asthma-related health outcomes. Moisture accumulation therefore needs to be prevented, and mold contamination caused by excessive indoor humidity can require very costly remediation while degrading building materials and wasting energy.
However, indoor sensors are usually installed at specific locations in a building, such as on walls near doorways, so their measured values do not reflect the actual values in the occupied working area; controlling the indoor environment from such readings means the interior never actually reaches the set values or production requirements. Indoor environmental parameters are further influenced by human activity, equipment, vegetation, the building's interior layout and the outdoor environment. Moreover, smart sensors are limited in number, fixed in deployment position and limited in sensing range, while indoor environmental factors are complex, so full-space state monitoring cannot be achieved, and real-time prediction of the indoor environment state under the interference of these many factors is quite difficult.
Disclosure of Invention
The invention aims to: in order to solve the problems of the prior art, the invention provides the following technical scheme. An indoor environment state prediction method based on BIM and cross sample learning is provided to improve on the above problems, and it specifically comprises the following steps: S1, establishing a spatial graph model and fusing node weights and edge weights; S11, building the spatial graph model from the building information model; S12, updating the fusion weight in real time with a cross sample training algorithm, the fusion weight being the fusion of the cue factor and the edge weight; S2, predicting the indoor environment state; S21, training the state values of the labeled nodes across samples; S22, back-propagating the gradient of the state value of labeled node rk; S23, predicting the network of unlabeled nodes; S3, setting up the experiment and acquiring the experimental environment and data; S31, error analysis, predicting the state of the whole indoor space; S32, summarizing the experimental results.
As a preferred technical solution of the present application, in S1, BIM model space nodes O = {r1, r2, ..., rm} are extracted at time instants T = {t1, t2, ..., tm}. The space nodes are divided into labeled nodes, where a smart sensor is located, and unlabeled nodes without a smart sensor, and the real-time state value Vm at each unlabeled node position is to be obtained.
As a preferred technical solution of the present application, in S11 the building information model is used to establish the spatial graph model. The IfcAxis2Placement3D component placement information in the BIM model is extracted to determine each component's local coordinate system, X-axis direction vector and Z-axis direction vector, with the Y-axis direction vector obtained as the outer product of the X-axis and Z-axis vectors. A rotation-translation matrix in the local coordinate system is obtained from the component placement information; because placement information is inherited, multiple nested local coordinate systems require the rotation-translation matrices to be compounded, which yields the world coordinates of the component from the BIM model. The spatial graph model is then built from the extracted spatial geometric data, and cross sample learning is designed to adaptively fuse the node weights (cue factors) and edge weights of known points on the spatial graph model.
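The coordinate compounding described above can be sketched in plain Python. Everything here is illustrative rather than taken from the patent: the sketch only assumes that each placement contributes a 4x4 rotation-translation matrix and that inherited placements are compounded from the outermost frame inward; the cross-product order used for the Y axis follows the usual IFC convention (Y = Z x X).

```python
def cross(a, b):
    """Outer (cross) product, used to recover the Y axis from the Z and X axes."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous rotation-translation matrix."""
    return [rotation[0] + [translation[0]],
            rotation[1] + [translation[1]],
            rotation[2] + [translation[2]],
            [0.0, 0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product a @ b; b is the inner (child) placement, applied first."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def to_world(point, placements):
    """Map a component-local point to world coordinates by compounding the
    chain of inherited placements, outermost first."""
    m = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for t in placements:
        m = compose(m, t)
    v = [point[0], point[1], point[2], 1.0]
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(3)]
```

For example, a storey placement translating by (1, 2, 3) followed by a component placement translating by (10, 0, 0) maps the component-local origin to world coordinates (11, 2, 3).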
As a preferred technical solution of the present application, algorithm optimization is performed on the unmarked nodes of the spatial graph model established by the building information model in S11, and the formula is as follows:
[Equation image: the ML-IDW estimate Vm as a weighted combination of the labeled samples V'i with horizontal weights ηi and vertical weights ωi]
Vm is the inferred state value of the unlabeled node, i is the number of a labeled node, ηi is the horizontal weight between the labeled and unlabeled node, ωi is the vertical weight between them, and V'i is the sampled true value of labeled node i. The horizontal and vertical weights are jointly determined by the cue factor and the edge weight of the labeled node.
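A minimal sketch of how such an estimate could be computed. The exact way the horizontal and vertical weights combine is in the patent's equation image, so the multiplicative, normalized form below is an assumption:

```python
def ml_idw_predict(eta, omega, samples):
    """Estimate the unlabeled node's state Vm from labeled samples V'_i.
    eta[i] and omega[i] are the horizontal and vertical fusion weights of
    labeled node i; the multiplicative, normalized combination is assumed."""
    weights = [e * o for e, o in zip(eta, omega)]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, samples)) / total
```

With equal weights the estimate reduces to the mean of the labeled samples, as expected of an inverse-distance-style interpolator.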
As a preferred technical solution of the present application, the S12 cross sample training algorithm updates the fusion weight, the fusion of the cue factor and the edge weight, in real time. The horizontal and vertical distance features between an unlabeled node rk(xk, yk, zk) and a labeled node ri(xi, yi, zi) are:
Hi = |xi − xk| + |yi − yk|,  Si = |zi − zk|,
where k and i are the serial numbers of the nodes, Hi is the horizontal distance between rk and ri, and Si is the vertical distance between rk and ri. A cue factor Ri = (αi, βi, γi, λi) is set for each labeled node, where α and β are the labeled-node weight coefficient and edge weight coefficient in the horizontal direction, and γ and λ are the labeled-node weight coefficient and edge weight coefficient in the vertical direction. The labeled-node weight coefficient and the edge weight coefficient are fused together, and the fusion weight is divided into a horizontal weight and a vertical weight, calculated as follows:
[Equation image: the horizontal fusion weight ηi computed from αi, βi and Hi, and the vertical fusion weight ωi computed from γi, λi and Si]
ηi is the horizontal fusion weight and ωi the vertical fusion weight; αi and γi are the weight coefficients of labeled node i in the horizontal and vertical directions, and βi and λi are its edge weight coefficients in the horizontal and vertical directions.
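The distance features are given explicitly above, but the fusion-weight formula itself is an equation image; the inverse-distance form below (weights shrink with distance, with the cue factor supplying coefficients and exponents) is therefore a hedged guess at its shape, not the published formula:

```python
def distance_features(rk, ri):
    """H_i = |x_i - x_k| + |y_i - y_k| (horizontal), S_i = |z_i - z_k| (vertical)."""
    (xk, yk, zk), (xi, yi, zi) = rk, ri
    return abs(xi - xk) + abs(yi - yk), abs(zi - zk)

def fusion_weights(cue, H, S, eps=1e-9):
    """Assumed inverse-distance fusion weights built from the cue factor
    R_i = (alpha_i, beta_i, gamma_i, lambda_i):
    eta_i = alpha_i / (H + eps)**beta_i, omega_i = gamma_i / (S + eps)**lambda_i."""
    alpha, beta, gamma, lam = cue
    return alpha / (H + eps) ** beta, gamma / (S + eps) ** lam
```

The small eps guards against nodes at identical height or plan position, where a raw inverse distance would divide by zero.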
As a preferred technical solution of the present application, in S21 the state values of the labeled nodes are trained across samples: the higher the correlation between labeled node rk and the other labeled nodes, the closer their state values. Thus, the state value of rk is:
[Equation image: the inferred state value of labeled node rk as a fusion-weighted combination of the other labeled samples V'i]
V'i is the sampled true value of the labeled node with serial number i, and the fusion weights ηi and ωi are computed by the formula above. Labeled node rk is treated as a node to be inferred, and its state value is deduced from the remaining labeled nodes.
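The cross-sample idea (hold out each labeled node, infer it from the others, and compare against its measured value) can be sketched as follows; the inverse-distance weight form inside is an assumption, since the published weight formula is an equation image:

```python
def loo_estimate(k, nodes, values, cues, eps=1e-9):
    """Treat labeled node k as unmarked and infer its state from the other
    labeled nodes; comparing the result with values[k] gives the training error.
    nodes: (x, y, z) positions; cues: per-node (alpha, beta, gamma, lambda)."""
    xk, yk, zk = nodes[k]
    num = den = 0.0
    for i, ((xi, yi, zi), v) in enumerate(zip(nodes, values)):
        if i == k:
            continue
        H = abs(xi - xk) + abs(yi - yk)   # horizontal distance H_i
        S = abs(zi - zk)                  # vertical distance S_i
        a, b, g, l = cues[i]              # cue factor R_i
        w = (a / (H + eps) ** b) * (g / (S + eps) ** l)  # assumed weight form
        num += w * v
        den += w
    return num / den
```

For two equidistant neighbors with equal cue factors, the held-out node's estimate is simply the mean of their values.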
As a preferred technical scheme of the application, in S22 the state value of labeled node rk is back-propagated by gradient; the difference between the inferred and true state values is measured by the mean squared error loss:
[Equation image: mean squared error loss between the inferred state value Vk and the true value V'k]
The gradient formula is:
[Equation images: the partial derivatives of the loss with respect to the cue factor components αi, βi, γi and λi]
V'k is the sampled true value of the labeled node with serial number k; ηi and ωi are obtained from the previous formula, ηi being the horizontal fusion weight and ωi the vertical fusion weight; αi and γi are the weight coefficients of labeled node i in the horizontal and vertical directions; k and i are node serial numbers; Hi is the horizontal distance between rk and ri and Si the vertical distance. Each change of the gradient triggers an update of the cue factor, a new state value is inferred with ML-IDW, and iteration continues until the error converges:
[Equation image: the convergence criterion on the error between the predicted value Vk and the true value V'k]
where Vk and Vk' represent the predicted and true values, respectively, for the node numbered k.
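A runnable sketch of this training loop: the patent derives analytic gradients of the loss with respect to the cue factors (the equation images above), so a finite-difference gradient stands in for them here, and the inverse-distance weight form is likewise an assumption. Only the overall structure (leave-one-out error, gradient step on the cue factors, iterate until the error change converges) mirrors the text:

```python
def _loo(k, nodes, values, cues, eps=1e-9):
    # Leave-one-out ML-IDW estimate of labeled node k (assumed weight form).
    xk, yk, zk = nodes[k]
    num = den = 0.0
    for i, ((xi, yi, zi), v) in enumerate(zip(nodes, values)):
        if i == k:
            continue
        H, S = abs(xi - xk) + abs(yi - yk), abs(zi - zk)
        a, b, g, l = cues[i]
        w = (a / (H + eps) ** b) * (g / (S + eps) ** l)
        num, den = num + w * v, den + w
    return num / den

def train_cues(nodes, values, cues, lr=0.01, h=1e-4, tol=1e-8, max_iter=100):
    """Gradient descent on the mean squared leave-one-out error, updating
    the cue factors until the change in error falls below tol."""
    def loss(c):
        return sum((_loo(k, nodes, values, c) - values[k]) ** 2
                   for k in range(len(nodes))) / len(nodes)
    prev = loss(cues)
    for _ in range(max_iter):
        # Finite-difference gradient in place of the patent's analytic one.
        grads = []
        for i in range(len(cues)):
            row = []
            for j in range(4):
                bumped = [list(c) for c in cues]
                bumped[i][j] += h
                row.append((loss(bumped) - prev) / h)
            grads.append(row)
        cues = [[cues[i][j] - lr * grads[i][j] for j in range(4)]
                for i in range(len(cues))]
        cur = loss(cues)
        converged = abs(prev - cur) < tol
        prev = cur
        if converged:
            break
    return cues, prev
```

Each gradient step corresponds to one "change of the gradient triggering an update of the cue factor" in the text, followed by re-inference of the state values.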
As a preferred technical solution of the present application, in step S23, when predicting the network of unlabeled nodes, the cue factor is obtained from the correlation between unlabeled node rm and the state value of each labeled node, by the following formula:
[Equation image: the predicted state value of unlabeled node rm as a fusion-weighted combination of the labeled samples V'i]
ηi and ωi are calculated by the previous formula, ηi being the horizontal fusion weight and ωi the vertical fusion weight, and V'i is the sampled true value of the labeled node with serial number i.
As a preferable technical scheme of the application, in S3 the dimensions of the experimental environment are measured and data are acquired: indoor humidity parameters are monitored by indoor temperature-and-humidity sensors, which use a dedicated digital-module acquisition technique for temperature and humidity sensing.
As a preferable embodiment of the present invention, the S31 error analysis predicts the state of the entire indoor space and compares the interpolated data with the measured data.
The RMSE reflects the accuracy of the measurements:
RMSE = sqrt( (1/N) Σi (Vi − V'i)² )
The MAE reflects the actual magnitude of the prediction error:
MAE = (1/N) Σi |Vi − V'i|
The RE reflects the confidence of the prediction:
RE = (1/N) Σi |Vi − V'i| / V'i
V'i is the measured indoor humidity value, Vi is the predicted indoor humidity value, and N is the number of control groups.
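Under the variable definitions just given, the three metrics can be written directly. The RMSE and MAE below are the standard definitions; the mean relative error is an assumption insofar as the published RE formula is an equation image:

```python
import math

def rmse(pred, meas):
    """Root mean square error: reflects the accuracy of the predictions."""
    n = len(pred)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, meas)) / n)

def mae(pred, meas):
    """Mean absolute error: the actual magnitude of the prediction error."""
    return sum(abs(p - m) for p, m in zip(pred, meas)) / len(pred)

def relative_error(pred, meas):
    """Mean relative error: reflects the confidence of the prediction."""
    return sum(abs(p - m) / m for p, m in zip(pred, meas)) / len(pred)
```

Here `pred` holds the interpolated humidity values Vi and `meas` the sensor readings V'i for the N control points.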
Compared with the prior art, the invention has the beneficial effects that:
in the scheme of the application:
1. A spatial graph model is built by extracting the spatial geometric data and the IfcAxis2Placement3D component placement information from the BIM model to determine each component's local coordinate system, X-axis direction vector and Z-axis direction vector, with the Y-axis direction vector obtained as the outer product of the X-axis and Z-axis vectors. A rotation-translation matrix in the local coordinate system is obtained from the placement information; since placement information is inherited, multiple local coordinate systems require the rotation-translation matrices to be compounded. The world coordinates of the components obtained from the BIM model can be associated with the spatial characteristics of the indoor environment state to represent the implicit relations between indoor states in different areas;
2. With the designed ML-IDW cross sample training algorithm, the cue factor can be obtained from the correlation between the state value of a given labeled node and the state values of the remaining labeled nodes:
[Equation image: the inferred state value Vm as a fusion-weighted combination of the labeled samples V'i]
Vm is the inferred state value of an unlabeled node, i is the number of a labeled node, ηi is the horizontal weight and ωi the vertical weight between the labeled and unlabeled node, and V'i is the sampled true value of labeled node i. The horizontal and vertical weights are jointly determined by the cue factor and the edge weight of the labeled node; the coefficient factors of the labeled nodes are learned on the spatial graph model, so the states of the unlabeled nodes can be predicted;
3. The ML-IDW algorithm predicts the state of the whole indoor space, simulating the distribution of humidity values over the building's entire interior, so full-space state prediction is achieved with a small number of acquisition nodes. A training model is constructed from the measured data, and its accuracy is verified by comparison with actual data. The model can predict the humidity distribution of the whole indoor space in an experimental scene, with short training time and real-time performance.
Description of the drawings:
fig. 1 is a diagram of ML-IDW prediction results under 4 marked nodes and 8 unmarked nodes in an experiment of the indoor environment state prediction method based on BIM and cross sample learning provided by the present application;
fig. 2 is a diagram of ML-IDW prediction results under 5 labeled nodes and 7 unlabeled nodes in an experiment of the indoor environment state prediction method based on BIM and cross sample learning provided by the present application;
fig. 3 is a diagram of ML-IDW prediction results under 6 marked nodes and 6 unmarked nodes in an experiment of the indoor environment state prediction method based on BIM and cross sample learning provided by the present application;
fig. 4 is a diagram of ML-IDW prediction results under 7 labeled nodes and 5 unlabeled nodes in an experiment of the indoor environment state prediction method based on BIM and cross sample learning provided by the present application;
fig. 5 is an IDW diagram of an indoor environment state prediction method based on BIM and cross sample learning provided in the present application;
fig. 6 is a schematic diagram of ML-IDW of an indoor environment state prediction method based on BIM and cross sample learning provided by the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are specific implementations of the invention and are only some, not all, of its embodiments.
Therefore, the following detailed description of the embodiments is not intended to limit the scope of the invention as claimed, but is merely representative of some embodiments. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
It should be noted that the embodiments of the present invention and the features and technical solutions thereof may be combined with each other without conflict.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Referring to fig. 1 to 6, in an embodiment, a method for predicting an indoor environment state based on BIM and cross sample learning includes the steps of: S1, building a spatial graph model and fusing node weights and edge weights; S11, building the spatial graph model from the building information model; S12, updating the fusion weight, the fusion of the cue factor and the edge weight, in real time with a cross sample training algorithm; S2, predicting the indoor environment state; S21, training the state values of the labeled nodes across samples; S22, back-propagating the gradient of the state value of labeled node rk; S23, predicting the network of unlabeled nodes; S3, setting up the experiment and acquiring the experimental environment and data; S31, error analysis, predicting the state of the whole indoor space; and S32, summarizing the experimental results.
In addition to the above, in a preferred embodiment, S1 extracts BIM model space nodes O = {r1, r2, ..., rm} at time instants T = {t1, t2, ..., tm}; the space nodes are divided into labeled nodes with a smart sensor and unlabeled nodes without one, and the real-time state values Vm at the unlabeled node positions are obtained.
As a preferred embodiment, based on the above, in S11 the building information model establishes the spatial graph model: the IfcAxis2Placement3D component placement information in the BIM model is extracted to determine each component's local coordinate system, X-axis direction vector and Z-axis direction vector, with the Y-axis direction vector obtained as the outer product of the X-axis and Z-axis vectors. A rotation-translation matrix in the local coordinate system is obtained from the placement information; since placement information is inherited, multiple local coordinate systems require the rotation-translation matrices to be compounded, yielding the world coordinates of the components from the BIM model.
In addition to the above, algorithm optimization is further performed for the unlabeled nodes of the spatial graph model established in S11, with the formula:
[Equation image: the ML-IDW estimate Vm as a weighted combination of the labeled samples V'i with horizontal weights ηi and vertical weights ωi]
Vm is the inferred state value of the unlabeled node, i is the number of a labeled node, ηi is the horizontal weight and ωi the vertical weight between the labeled and unlabeled node, and V'i is the sampled true value of labeled node i; the horizontal and vertical weights are jointly determined by the cue factor and the edge weight of the labeled node.
In a preferred embodiment, based on the above, the S12 cross sample training algorithm updates the fusion weight, the fusion of the cue factor and the edge weight, in real time; the horizontal and vertical distance features between an unlabeled node rk(xk, yk, zk) and a labeled node ri(xi, yi, zi) can be represented as Hi = |xi − xk| + |yi − yk|, Si = |zi − zk|,
where k and i are the serial numbers of the nodes, Hi is the horizontal distance between rk and ri, and Si is the vertical distance between rk and ri. A cue factor Ri = (αi, βi, γi, λi) is set for each labeled node; α and β are the labeled-node weight coefficient and edge weight coefficient in the horizontal direction, and γ and λ those in the vertical direction. The coefficients are fused together into a fusion weight, divided into a horizontal weight and a vertical weight, calculated as:
[Equation image: the horizontal fusion weight ηi computed from αi, βi and Hi, and the vertical fusion weight ωi computed from γi, λi and Si]
ηi is the horizontal fusion weight and ωi the vertical fusion weight; αi and γi are the weight coefficients of labeled node i in the horizontal and vertical directions, and βi and λi are its edge weight coefficients in the horizontal and vertical directions.
In a preferred embodiment, in addition to the above, in step S21 the state values of the labeled nodes are trained across samples: the higher the correlation between labeled node rk and the other labeled nodes, the closer their state values. Thus, the state value of rk is:
[Equation image: the inferred state value of labeled node rk as a fusion-weighted combination of the other labeled samples V'i]
V'i is the sampled true value of the labeled node with serial number i, and the fusion weights ηi and ωi are computed by the formula above; labeled node rk is treated as a node to be inferred, and its state value is deduced from the labeled nodes.
As a preferred embodiment, based on the above, in S22 the state value of labeled node rk is back-propagated by gradient; the difference between the inferred and true state values is measured by the mean squared error loss:
[Equation image: mean squared error loss between the inferred state value Vk and the true value V'k]
The gradient formula is:
[Equation images: the partial derivatives of the loss with respect to the cue factor components αi, βi, γi and λi]
V'k is the sampled true value of the labeled node with serial number k; ηi and ωi are obtained from the previous formula, ηi being the horizontal fusion weight and ωi the vertical fusion weight; αi and γi are the weight coefficients of labeled node i in the horizontal and vertical directions; k and i are node serial numbers; Hi is the horizontal distance between rk and ri and Si the vertical distance. Each change of the gradient triggers an update of the cue factor, a new state value is inferred with ML-IDW, and iteration continues until the error converges:
[Equation image: the convergence criterion on the error between the predicted value Vk and the true value V'k]
where Vk and Vk' represent the predicted and true values, respectively, for the node numbered k.
As a preferred embodiment, based on the above, in S23 the network of unlabeled nodes is predicted; the cue factor can be obtained from the correlation between unlabeled node rm and the state value of each labeled node, by the formula:
[Equation image: the predicted state value of unlabeled node rm as a fusion-weighted combination of the labeled samples V'i]
ηi and ωi are calculated by the previous formula, ηi being the horizontal fusion weight and ωi the vertical fusion weight, and V'i is the sampled true value of the labeled node with serial number i.
In a preferred embodiment, in addition to the above, in S3 the dimensions of the experimental environment are measured and data are acquired; indoor humidity parameters are monitored by a temperature-and-humidity sensor installed indoors, which uses a dedicated digital-module acquisition technique for temperature and humidity sensing.
In a preferred embodiment, in addition to the above, the S31 error analysis predicts the state of the entire indoor space and compares the interpolated data with the measured data:
the accuracy formula for RMSE reflective measurements is:
RMSE = sqrt( (1/N) Σi (Vi − V'i)² )
the actual condition formula of the MAE reflecting the error of the predicted value is as follows:
MAE = (1/N) Σi |Vi − V'i|
the confidence formula that RE reflects the prediction is:
RE = (1/N) Σi |Vi − V'i| / V'i
V'i is the measured indoor humidity value, Vi is the predicted indoor humidity value, and N is the number of control groups. The working principle is as follows. In use, S1 establishes the spatial graph model and fuses the node weights and edge weights. In S11, the spatial graph model is built from the building information model: BIM model space nodes O = {r1, r2, ..., rm} are extracted at time instants T = {t1, t2, ..., tm} and divided into labeled nodes with a smart sensor and unlabeled nodes without one, the goal being the real-time state values Vm at the unlabeled node positions. The IfcAxis2Placement3D component placement information in the BIM model is extracted to determine each component's local coordinate system, X-axis direction vector and Z-axis direction vector, with the Y-axis direction vector obtained as the outer product of the X-axis and Z-axis vectors; a rotation-translation matrix in the local coordinate system is obtained from the placement information, and since placement information is inherited, multiple local coordinate systems require the rotation-translation matrices to be compounded, yielding the world coordinates of the components from the BIM model. Algorithm optimization is then performed for the unlabeled nodes of the spatial graph model established in S11, with the formula:
[Equation image: the ML-IDW estimate Vm as a weighted combination of the labeled samples V'i with horizontal weights ηi and vertical weights ωi]
Vm is the inferred state value of an unlabeled node, i is the number of a labeled node, ηi is the horizontal weight and ωi the vertical weight between the labeled and unlabeled node, and V'i is the sampled true value of labeled node i; the horizontal and vertical weights are jointly determined by the cue factor and the edge weight of the labeled node. In S12, the cross sample training algorithm updates the fusion weight, the fusion of the cue factor and the edge weight, in real time. The horizontal and vertical distance features between an unlabeled node rk(xk, yk, zk) and a labeled node ri(xi, yi, zi) can be represented as Hi = |xi − xk| + |yi − yk|, Si = |zi − zk|, where k and i are node serial numbers, Hi is the horizontal distance between rk and ri and Si the vertical distance. A cue factor Ri = (αi, βi, γi, λi) is set for each labeled node; α and β are the labeled-node weight coefficient and edge weight coefficient in the horizontal direction, and γ and λ those in the vertical direction. The labeled-node weight coefficient and the edge weight coefficient are fused, and the fusion weight is divided into a horizontal weight and a vertical weight, calculated as:
[Equation image: the horizontal fusion weight ηi computed from αi, βi and Hi, and the vertical fusion weight ωi computed from γi, λi and Si]
ηi is the horizontal fusion weight and ωi the vertical fusion weight; αi and γi are the weight coefficients of labeled node i in the horizontal and vertical directions, and βi and λi its edge weight coefficients in the horizontal and vertical directions. In S2, the indoor environment state is predicted. In S21, the state values of the labeled nodes are trained across samples: the higher the correlation between labeled node rk and the other labeled nodes, the closer their state values. Thus, the state value of rk is:
[Equation image: the inferred state value of labeled node rk as a fusion-weighted combination of the other labeled samples V'i]
V'i is the sampled true value of the labeled node with serial number i, and the fusion weights ηi and ωi are computed by the formula above; labeled node rk is treated as a node to be inferred, and its state value is deduced from the labeled nodes. In S22, the state value of labeled node rk is back-propagated by gradient; the difference between the two state values is measured by the mean squared error loss:
(Equation image not reproduced: mean-square-error loss between the inferred state values Vk and the sampled values V'k.)
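The mean-square-error measure itself is standard; a minimal sketch over the labeled nodes (the function name `mse_loss` is illustrative):

```python
def mse_loss(predicted, sampled):
    """Mean squared error between inferred state values Vk and the
    sampled truths V'k over the labeled nodes."""
    n = len(predicted)
    return sum((v - vt) ** 2 for v, vt in zip(predicted, sampled)) / n
```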
The gradient formula is
(Equation images not reproduced: partial derivatives of the loss with respect to the cue-factor components αi, βi, γi and λi.)
Each change in the gradient triggers an update of the cue factors; the update generates a new ML-IDW-based inference of the state values, and the process iterates until the error converges:
(Equation image not reproduced: convergence criterion on the error between the predicted value Vk and the true value V'k.)
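The iterate-until-convergence scheme of S22 can be sketched as below. Because the analytic gradient formulas appear here only as equation images, this sketch substitutes forward-difference numerical gradients; `train_cue_factors`, the learning rate and the tolerance are illustrative assumptions.

```python
def train_cue_factors(cues, loss, lr=0.01, tol=1e-6, max_iter=500):
    """Iteratively update the cue factors by gradient descent until the
    error converges; forward-difference gradients stand in for the
    patent's analytic gradient formulas."""
    h = 1e-5
    prev = loss(cues)
    for _ in range(max_iter):
        # numerical gradient of the loss with respect to each cue factor
        grad = [(loss(cues[:j] + [cues[j] + h] + cues[j + 1:]) - prev) / h
                for j in range(len(cues))]
        cues = [c - lr * g for c, g in zip(cues, grad)]
        cur = loss(cues)
        if abs(prev - cur) < tol:  # the error has converged
            break
        prev = cur
    return cues
```

In the patent's setting, `loss` would be the mean-square error between the cross-sample inferences at the labeled nodes and their sampled truths.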
S23, predicting the unlabeled-node network: the cue factors are obtained from the correlation between the unlabeled node rm and the state value of each labeled node:
(Equation image not reproduced: predicted state value Vm of unlabeled node rm as a fusion-weighted combination of the labeled nodes' sampled values V'i.)
S3, setting up the experiment and acquiring the experimental environment and data: the dimensions of the experimental environment are measured and indoor humidity is monitored with indoor temperature-and-humidity sensors, which use a dedicated digital-module acquisition technique for temperature and humidity sensing. S31, error analysis for predicting the state of the whole indoor space: the difference between the interpolated data and the measured data is calculated. The RMSE reflects the accuracy of the measurements:
(Equation image not reproduced: RMSE of the predicted versus measured humidity values over the N control groups.)
The MAE reflects the actual magnitude of the prediction error:
(Equation image not reproduced: MAE of the predicted versus measured humidity values.)
The RE reflects the confidence of the prediction:
(Equation image not reproduced: relative error RE of the predicted versus measured humidity values.)
V'i is the measured indoor humidity value, Vi is the predicted indoor humidity value, and N is the number of control groups. S32, summarizing the experimental results.
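For illustration, the three error measures can be computed as below. The exact formulas appear above only as equation images, so these are the standard definitions, with RE taken as the mean relative error (an assumption):

```python
from math import sqrt

def rmse(pred, true):
    """Root mean square error: accuracy of the predictions."""
    n = len(pred)
    return sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / n)

def mae(pred, true):
    """Mean absolute error: typical size of the prediction error."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def relative_error(pred, true):
    """Mean relative error: prediction error as a fraction of the truth."""
    return sum(abs(p - t) / t for p, t in zip(pred, true)) / len(pred)
```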
The above embodiments are intended only to illustrate the invention, not to limit the technical solutions it describes. Although this specification has described the invention in detail with reference to the above embodiments, the invention is not limited to them, and any modification or equivalent replacement of the invention is intended to fall within the scope of this disclosure and the appended claims.

Claims (10)

1. A BIM and cross-sample-learning-based indoor environment state prediction method, characterized by comprising the following steps: S1, establishing a space graph model and fusing node weights and edge weights; S11, building the space graph model from the building information model; S12, updating the fusion weight in real time with a cross-sample training algorithm, the fusion weight being the fusion of a cue factor and an edge weight; S2, predicting the indoor environment state; S21, training the state values of the labeled nodes across samples; S22, back-propagating the gradient of the state value of the labeled node rk; S23, predicting the unlabeled-node network; S3, setting up an experiment and acquiring the experimental environment and data; S31, error analysis, predicting the state of the whole indoor space; S32, summarizing the experimental results.
2. The indoor environment state prediction method based on BIM and cross-sample learning of claim 1, wherein S1 extracts the BIM-model spatial nodes O = {r1, r2, …, rm} at time intervals T = {t1, t2, …, tm}, the spatial nodes being divided into labeled nodes (coordinate nodes with smart sensors) and unlabeled nodes (coordinate nodes without smart sensors), and predicts the real-time state values Vm at the unlabeled node positions.
3. The method as claimed in claim 1, wherein the building information model in S11 builds the space graph model as follows: the IfcAxis2Placement3D component position information in the BIM model is extracted to determine the local coordinate system and the X-axis and Z-axis direction vectors of each component, the Y-axis vector being obtained as the cross product of the X-axis and Z-axis direction vectors; the rotation-and-translation matrix of the local coordinate system is obtained from the component position information; because component position information is inherited, the rotation-and-translation matrices of the nested local coordinate systems must be composed, and the world coordinates of the component are then obtained from the BIM model.
4. The indoor environment state prediction method based on BIM and cross-sample learning of claim 1, wherein the unlabeled nodes of the space graph model established from the building information model in S11 are algorithmically optimized with the following formula:
(Equation image not reproduced: inferred state value Vm of an unlabeled node as a fusion-weighted combination of the labeled nodes' sampled values V'i.)
Vm is the inferred state value of the unlabeled node, i is the number of a labeled node, ηi is the horizontal weight between the labeled node and the unlabeled node, ωi is the vertical weight between them, and V'i is the sampled true value of the labeled node numbered i; the horizontal and vertical weights are jointly determined by the cue factor and the edge weight of the labeled node.
5. The indoor environment state prediction method based on BIM and cross-sample learning of claim 1, wherein the S12 cross-sample training algorithm updates the fusion weight in real time, the fusion weight being the fusion of the cue factor and the edge weight, and the horizontal and vertical distance features between an unlabeled node rk(xk, yk, zk) and a labeled node ri(xi, yi, zi) are expressed as:
Hi = |xi - xk| + |yi - yk|, Si = |zi - zk|,
wherein k and i are node numbers, Hi is the horizontal distance between rk and ri, and Si is the vertical distance between rk and ri; a cue factor Ri(αi, βi, γi, λi) is set for each labeled node, α and β being the labeled-node weight coefficient and the edge-weight coefficient in the horizontal direction, and γ and λ being the labeled-node weight coefficient and the edge-weight coefficient in the vertical direction; the labeled-node weight coefficient and the edge-weight coefficient are fused, the fusion weight being split into a horizontal weight and a vertical weight, calculated as:
(Equation image not reproduced: horizontal fusion weight ηi and vertical fusion weight ωi in terms of αi, βi, Hi and γi, λi, Si.)
ηi is the horizontal fusion weight, ωi is the vertical fusion weight, αi and γi are the weight coefficients of the labeled node numbered i in the horizontal and vertical directions respectively, and βi and λi are its edge-weight coefficients in the horizontal and vertical directions respectively.
6. The method of claim 1, wherein S21 trains the state values of the labeled nodes across samples: the higher the correlation between the labeled node rk and the other labeled nodes, the closer their state values; thus, the state value of rk is:
(Equation image not reproduced: state value Vk of labeled node rk as a fusion-weighted combination of the other labeled nodes' sampled values V'i.)
V'i is the sampled true value of the labeled node numbered i, and ηi and ωi are calculated by the fusion-weight formulas; the labeled node rk is treated as a node to be inferred, and its state value is inferred from the other labeled nodes.
7. The BIM and cross-sample-learning-based indoor environment state prediction method as claimed in claim 1, wherein S22 back-propagates the gradient of the state value of the labeled node rk, the difference between the inferred and sampled state values being measured by the mean-square-error loss, formulated as:
(Equation image not reproduced: mean-square-error loss between Vk and V'k.)
The gradient formula is:
(Equation images not reproduced: partial derivatives of the loss with respect to the cue-factor components αi, βi, γi and λi.)
V'k is the sampled true value of the labeled node numbered k; ηi and ωi are given by the preceding formulas, ηi being the horizontal fusion weight and ωi the vertical fusion weight; αi and γi are the weight coefficients of the labeled node numbered i in the horizontal and vertical directions respectively; k and i are node numbers; H is the horizontal distance between rk and ri, and S is the vertical distance between rk and ri; each change in the gradient triggers an update of the cue factors, a new state-value inference is generated based on ML-IDW, and the process iterates until the error converges, the convergence formula being:
(Equation image not reproduced: convergence criterion on the error between Vk and V'k.)
wherein Vk and V'k represent the predicted and true values, respectively, of the node numbered k.
8. The method of claim 1, wherein in S23 the unlabeled-node network is predicted, the cue factor being formulated from the correlation between the unlabeled node rm and the state value of each labeled node as follows:
(Equation image not reproduced: predicted state value Vm of unlabeled node rm as a fusion-weighted combination of the labeled nodes' sampled values V'i.)
wherein ηi and ωi are calculated by the preceding formulas, ηi being the horizontal fusion weight and ωi the vertical fusion weight, and V'i is the sampled true value of the labeled node numbered i.
9. The indoor environment state prediction method based on BIM and cross-sample learning of claim 1, wherein in S3 the dimensions of the experimental environment are measured and data are acquired, indoor temperature-and-humidity sensors monitor the indoor humidity parameters, and the sensors use a digital-module acquisition technique for temperature and humidity sensing.
10. The method of claim 1, wherein the error analysis of S31 predicts the state of the whole indoor space and calculates the difference between the interpolated data and the measured data: the RMSE, reflecting the accuracy of the measurements, is:
(Equation image not reproduced: RMSE of the predicted versus measured values over the N control groups.)
the MAE, reflecting the actual magnitude of the prediction error, is:
(Equation image not reproduced: MAE of the predicted versus measured values.)
and the RE, reflecting the confidence of the prediction, is:
(Equation image not reproduced: relative error RE of the predicted versus measured values.)
V'i is the measured indoor humidity value, Vi is the predicted indoor humidity value, and N is the number of control groups.
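For illustration, the local-to-world coordinate composition described in claim 3 can be sketched as below, assuming right-handed frames with Y = Z × X and row-major 4×4 homogeneous matrices; `placement_matrix` and `to_world` are hypothetical names, and the parsing of IfcAxis2Placement3D itself is not shown.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def placement_matrix(origin, z_axis, x_axis):
    """4x4 matrix of a local placement: columns are the X, Y, Z basis
    vectors and the origin; Y completes a right-handed frame (assumed
    convention) from the stored Z and X vectors, as in claim 3."""
    y_axis = cross(z_axis, x_axis)
    return [
        [x_axis[0], y_axis[0], z_axis[0], origin[0]],
        [x_axis[1], y_axis[1], z_axis[1], origin[1]],
        [x_axis[2], y_axis[2], z_axis[2], origin[2]],
        [0.0, 0.0, 0.0, 1.0],
    ]

def matmul4(a, b):
    """Product of two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def to_world(point, placements):
    """Compose nested local placements (outermost first, reflecting the
    inherited component position information) and map a local point to
    world coordinates."""
    m = placements[0]
    for p in placements[1:]:
        m = matmul4(m, p)
    v = (point[0], point[1], point[2], 1.0)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))
```

A component placed in a rotated local frame nested inside a translated parent frame is thereby mapped to a single world coordinate, which is what the space graph model's nodes require.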
CN202111386090.9A 2021-11-22 2021-11-22 Indoor environment state prediction method based on BIM and cross sample learning Active CN114048537B (en)


Publications (2)

Publication Number Publication Date
CN114048537A true CN114048537A (en) 2022-02-15
CN114048537B CN114048537B (en) 2022-11-25


