US20210383236A1 - Sensor Fusion Quality Of Data Determination - Google Patents
- Publication number
- US20210383236A1 (application US17/336,640)
- Authority
- US
- United States
- Prior art keywords
- values
- neuron
- test
- neural network
- neurons
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0631—Resource planning, allocation, distributing or scheduling for enterprises or organisations
- G06Q10/06313—Resource planning in a project environment
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/62—Control or safety arrangements characterised by the type of control or by internal processing, e.g. using fuzzy logic, adaptive control or estimation of values
- F24F11/63—Electronic processing
- F24F11/64—Electronic processing using pre-stored data
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/62—Control or safety arrangements characterised by the type of control or by internal processing, e.g. using fuzzy logic, adaptive control or estimation of values
- F24F11/63—Electronic processing
- F24F11/65—Electronic processing for selecting an operating mode
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
- G05B13/027—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/18—Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/30007—Arrangements for executing specific machine instructions to perform operations on data operands
- G06F9/30036—Instructions to perform operations on packed data, e.g. vector, tile or matrix operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/067—Enterprise or organisation modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/06—Electricity, gas or water supply
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/16—Real estate
- G06Q50/163—Property management
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2120/00—Control inputs relating to users or occupants
- F24F2120/10—Occupancy
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2120/00—Control inputs relating to users or occupants
- F24F2120/20—Feedback from users
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2140/00—Control inputs relating to system states
- F24F2140/50—Load
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2614—HVAC, heating, ventillation, climate control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/06—Power analysis or power optimisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/08—Thermal analysis or thermal optimisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Definitions
- the present disclosure relates to neural network methods for describing system topologies. More specifically, the present disclosure relates to determining unknown values in a model neuron, determining where in the model those neuron values were derived, and determining the quality of data within the model representation compared to a physical location.
- Data fusion is the combining of disparate data sets to pull off the seeming magic trick of getting more information out than was put in. More specifically, it entails combining data from different sources and analyzing the result such that the different data sets and data views allow one to understand what is being observed more fully than any single data set allows.
- a method for computing neuron accuracy implemented by one or more computers comprising: running a neural network with test neurons and a target neuron, using known sensor values at the test neurons for a cost function, to produce modeled test neuron values and a modeled value of the target neuron; comparing the modeled test values to the known sensor values to determine the quality of the test neuron values; calculating the connection strength of each test value relative to the target neuron; and calculating the accuracy of the target neuron using the quality of the test neuron values and the connection strengths between the target neuron and the test neurons.
- running the neural network comprises using state time series values as input into the neural network for a running period.
- the state time series values are weather values affecting a controlled space.
- the cost function compares the known sensor values to the modeled test values.
- calculating connection strength comprises using automatically differentiated vector gradients.
- calculating accuracy of the target neuron comprises matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons.
- running the neural network comprises using machine learning techniques to determine connection strengths between the target neuron and the test neurons, wherein determining the connection strengths comprises using automatic differentiation to backpropagate from the target neuron to the test neurons.
- the neural network is a heterogeneous neural network.
- At least one test neuron has an accuracy and an associated sensor, wherein the test neuron accuracy relates to the accuracy of the associated sensor.
- the neural network has internal values, and the method further comprises warming up the neural network using at least a portion of initial state time series values to modify the neural network internal values.
- the neural network is warmed up by pre-running the neural network using successively larger portions of an input waveform until a goal state is reached.
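- The warm-up described above can be pictured as pre-running the model on progressively larger prefixes of the input waveform until its internal state lands near a goal. In this minimal sketch the `run_model` smoother standing in for the neural network, the goal value, and the tolerance are all invented for illustration; the patent does not prescribe this arithmetic.

```python
# Hedged sketch of warming up by pre-running on successively larger
# portions of an input waveform until a goal state is reached.

def run_model(state, inputs, alpha=0.5):
    # Stand-in for the neural network: exponential smoothing of the inputs
    # into a single internal-state value.
    for x in inputs:
        state = (1 - alpha) * state + alpha * x
    return state

def warm_up(inputs, goal, tol, initial_state=0.0):
    state = initial_state
    for n in range(1, len(inputs) + 1):
        state = run_model(initial_state, inputs[:n])  # re-run a larger prefix
        if abs(state - goal) < tol:                   # goal state reached
            return state, n
    return state, len(inputs)

series = [21.0] * 10                 # steady input waveform (e.g., degrees C)
state, prefix_used = warm_up(series, goal=21.0, tol=0.5)
```

Each pass re-runs the model from the same initial state on a longer prefix, so the internal value converges toward the goal before the network is used for inference.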
- the neural network models a controlled system, and wherein the controlled system comprises a controlled building system, a process control system, an HVAC system, an energy system, or an irrigation system.
- a system for computing neuron accuracy comprising: a processor; a memory in operational communication with the processor; a neural network which resides at least partially in the memory, the neural network comprising test neurons with test values and at least one target neuron with a target neuron value; a neural network optimizer that optimizes the neural network using known sensor values and test values for a cost function to produce a solved neural network with modeled test values; a determiner that determines the quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual values; a machine learner that uses machine learning techniques to calculate connection strengths between the test neurons and the at least one target neuron; and a function calculator that calculates the accuracy of the at least one target neuron value using the quality of the test neuron values and the connection strengths between the at least one target neuron and the test neurons.
- the function calculator calculates accuracy by matrix multiplying the quality of the test neuron values by the connection strengths between the target neuron and the test neurons.
- At least one corresponding actual value comprises a sensor state value.
- the sensor state value is derived from a sensor in a controlled space.
- an initializer which uses state time series values as input into the neural network for a running period.
- At least one of the machine learning techniques uses automatic differentiation to calculate connection strengths.
- a computer-readable storage medium configured with data and instructions, which upon execution by a processor perform a method for computing neuron accuracy, the method comprising: initializing values for at least some test neurons in a neural network, the test neurons representing corresponding actual values; specifying a target neuron in the neural network; optimizing the neural network using the actual values, producing a solved neural network with a target neuron value and test neuron values; using machine learning techniques to determine connection strengths between the target neuron and the test neurons; determining the quality of the test neuron values by comparing the test neuron values in the solved neural network to the corresponding actual values; and calculating the accuracy of the target neuron using the quality of the test neuron values and the connection strengths between the target neuron and the test neurons.
- the corresponding actual values are sensor values that correspond to test neuron locations.
- FIG. 1 depicts a computing system in accordance with one or more embodiments.
- FIG. 2 depicts a distributed computing system in accordance with one or more embodiments.
- FIG. 2A depicts an exemplary system configured to determine quality of data using sensor fusion in accordance with one or more embodiments.
- FIG. 3 depicts an exemplary system configured to determine quality of data using sensor fusion in accordance with one or more embodiments.
- FIG. 4 is a functional block diagram that illustrates an exemplary compute function with which described embodiments can be implemented.
- FIG. 5 is a diagram showing an exemplary sensor fusion and quality of data neural network system in conjunction with which described embodiments can be implemented.
- FIG. 6 is a diagram showing an exemplary neural network sensor fusion and quality of data system with computed neurons in conjunction with which described embodiments can be implemented.
- FIG. 7 is a diagram showing an exemplary neural network sensor fusion and quality of data system with component vector propagation in conjunction with which described embodiments can be implemented.
- FIG. 8 is a diagram showing an exemplary neural network sensor fusion and quality of data system with fused data computation in conjunction with which described embodiments can be implemented.
- FIG. 9 is a table showing an exemplary quality of data computation method in conjunction with which described embodiments can be implemented.
- FIG. 10 is a block diagram showing types of neural networks with which described embodiments can be implemented.
- FIG. 11 is a diagram showing data streams with which described embodiments can be implemented.
- FIG. 12 is a diagram showing exemplary time series data with which described embodiments can be implemented.
- Disclosed below are representative embodiments of methods, computer-readable media, and systems having particular applicability to systems and methods for building neural networks that describe physical structures. Described embodiments implement one or more of the described technologies.
- Embodiments in accordance with the present embodiments may be implemented as an apparatus, method, or computer program product. Accordingly, the present embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects. Furthermore, the present embodiments may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
- a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device.
- Computer program code for carrying out operations of the present embodiments may be written in any combination of one or more programming languages.
- Embodiments may be implemented in edge computing environments where the computing is done within a network which, in some implementations, may not be connected to the outside internet, although the edge computing environment may be connected to an internal intranet. In these implementations the environment is much safer and much easier to secure against ransomware attacks and the like. This internal network may be wired, wireless, or a combination of both. Embodiments may also be implemented in cloud computing environments.
- a cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service (“SaaS”), Platform as a Service (“PaaS”), Infrastructure as a Service (“IaaS”), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations may be implemented by general or special purpose hardware-based systems that perform the specified functions or acts, or combinations of general and special purpose hardware and computer instructions.
- These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variation thereof, are intended to cover a non-exclusive inclusion.
- a process, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, article, or apparatus.
- any examples or illustrations given herein are not to be regarded in any way as restrictions on, limits to, or express definitions of any term or terms with which they are utilized. Instead, these examples or illustrations are to be regarded as being described with respect to one particular embodiment and as being illustrative only. Those of ordinary skill in the art will appreciate that any term or terms with which these examples or illustrations are utilized will encompass other embodiments which may or may not be given therewith or elsewhere in the specification and all such embodiments are intended to be included within the scope of that term or terms. Language designating such non-limiting examples and illustrations includes, but is not limited to: “for example,” “for instance,” “e.g.,” and “in one embodiment.”
- Deep physics networks are structured similarly, but not identically, to neural networks. Unlike the homogeneous activation functions of neural nets, each neuron comprises unique physical characteristics representing functions in a thermodynamic system. Once the network is configured, known sensor values are fed into their corresponding neurons. Once the network is trained, any location in the thermodynamic system can be introspected to extract fused data. The process provides powerful generalized data fusion, data synthesis, and quality assessment through inference, even where no sensors exist, for any thermodynamic system. The same mechanism enables model optimization, and the resulting time series can then be used for real-time sequence generation and fault detection.
- a neuron model system comprises heterogeneous neural networks with activation functions that comprise neurons representing individual material layers of a building and various values of those layers, such as their resistance and capacitance. These neurons are formed into parallel, branchless neural network strings that propagate heat (or other state values) through them.
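- A branchless string of material-layer neurons can be sketched as below. The three layers, their resistance and capacitance values, the time step, and the simple explicit RC update are all assumed for illustration; a real model's per-layer physics would differ.

```python
# Sketch: propagate temperature through a branchless string of neurons,
# each carrying the R and C of one assumed material layer of a wall.

class MaterialNeuron:
    def __init__(self, name, resistance, capacitance, temp=20.0):
        self.name = name
        self.r = resistance      # thermal resistance of this layer
        self.c = capacitance     # thermal capacitance of this layer
        self.temp = temp         # layer temperature (the neuron's state)

    def step(self, upstream_temp, dt=60.0):
        # Explicit RC update: heat flows in proportion to the temperature
        # difference, scaled by the layer's time constant R*C.
        self.temp += (upstream_temp - self.temp) * dt / (self.r * self.c)
        return self.temp

# One wall string, outside to inside: siding -> insulation -> drywall.
wall = [
    MaterialNeuron("siding", resistance=1.0, capacitance=500.0),
    MaterialNeuron("insulation", resistance=20.0, capacitance=100.0),
    MaterialNeuron("drywall", resistance=2.0, capacitance=800.0),
]

outdoor = 35.0                        # hot outdoor air driving the string
for _ in range(100):                  # propagate state along the string
    t = outdoor
    for neuron in wall:
        t = neuron.step(t)

indoor_surface = wall[-1].temp        # introspected value at the last neuron
```

The inner drywall layer warms more slowly than the outer siding because the insulation layer's large time constant throttles the heat propagating along the string, which is exactly the kind of interior value the model can introspect where no sensor exists.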
- FIG. 1 at 100 shows an exemplary embodiment overview that can be used to create and discretize such a neuron model system.
- This neuron model system may be described as a thermodynamic model.
- a missing sensor value can be determined, within an error range.
- Existing sensors can be checked to determine how accurate they are, within an error range. Areas that do not have specific sensors can have their sensor values determined within an error range.
- FIG. 1 illustrates a generalized example of a suitable computing environment 100 in which described embodiments may be implemented.
- the computing environment 100 is not intended to suggest any limitation as to scope of use or functionality of the disclosure, as the present disclosure may be implemented in diverse general-purpose or special-purpose computing environments.
- the core processing is indicated by the core processing unit 130.
- the computing environment 100 includes at least one central processing unit 110 and memory 120.
- the central processing unit 110 executes computer-executable instructions and may be a real or a virtual processor. It may also comprise a vector processor 112, which allows same-length neuron strings to be processed rapidly. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power, and as such the vector processor 112, GPU 115, and CPU can run simultaneously.
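- Why same-length strings matter for a vector processor can be illustrated in plain Python: strings of equal length can be advanced in lockstep, one position at a time, which is the data layout a vector unit (or GPU) turns into single wide instructions. The strings, weights, and inputs below are arbitrary example values, not values from the disclosure.

```python
# Sketch of lockstep processing of same-length neuron strings. Each loop
# over `position` touches one "column" across all strings, mimicking one
# vector operation; the per-neuron math (a multiply) is deliberately trivial.

strings = [                      # three neuron strings of equal length
    [0.2, 0.4, 0.6],
    [0.1, 0.5, 0.9],
    [0.3, 0.3, 0.3],
]
inputs = [1.0, 2.0, 3.0]         # one input value per string

def lockstep_pass(strings, inputs):
    states = list(inputs)
    for position in range(len(strings[0])):
        column = [s[position] for s in strings]           # gather one lane
        states = [x * w for x, w in zip(states, column)]  # one "vector" op
    return states

outputs = lockstep_pass(strings, inputs)
```

Because every string has the same length, no string ever stalls waiting for another, so the whole batch finishes in one pass of uniform operations.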
- the memory 120 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two.
- the memory 120 stores software 185 implementing the described methods and systems of sensor fusion quality of data determination.
- a computing environment may have additional features.
- the computing environment 100 includes storage 140, one or more input devices 150, one or more output devices 155, one or more network connections (e.g., wired, wireless, etc.) 160, as well as other communication connections 170.
- An interconnection mechanism such as a bus, controller, or network interconnects the components of the computing environment 100.
- operating system software provides an operating environment for other software executing in the computing environment 100, and coordinates activities of the components of the computing environment 100.
- the computing system may also be distributed, running portions of the software 185 on different CPUs.
- the storage 140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, flash drives, or any other medium which can be used to store information and which can be accessed within the computing environment 100.
- the storage 140 stores instructions for the software, such as software 185 to implement methods of sensor fusion utilizing neural networks.
- the input device(s) 150 may be a device that allows a user or another device to communicate with the computing environment 100, such as a touch input device (e.g., a keyboard, video camera, microphone, mouse, pen, or trackball), a scanning device, a touchscreen, or another device that provides input to the computing environment 100.
- the input device(s) 150 may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment.
- the output device(s) 155 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 100.
- the communication connection(s) 170 enable communication over a communication medium to another computing entity.
- the communication medium conveys information such as computer-executable instructions, compressed graphics information, or other data in a modulated data signal.
- Communication connections 170 may comprise input devices 150 , output devices 155 , and input/output devices that allow a client device to communicate with another device over network 160 .
- a communication device may include one or more wireless transceivers for performing wireless communication and/or one or more communication ports for performing wired communication. These connections may include network connections, which may be a wired or wireless network such as the Internet, an intranet, a LAN, a WAN, a cellular network or another type of network. It will be understood that network 160 may be a combination of multiple different kinds of wired or wireless networks.
- the network 160 may be a distributed network, with multiple computers, which might be building controllers, acting in tandem.
- a communication connection 170 may be a portable communications device such as a wireless handheld device, a cell phone device, and so on.
- Computer-readable media are any available non-transient tangible media that can be accessed within a computing environment.
- computer-readable media include memory 120 , storage 140 , communication media, and combinations of any of the above.
- Computer-readable storage media 165 , which may be used to store computer-readable media, comprise instructions 175 and data 180 .
- Data Sources may be computing devices, such as general hardware platform servers configured to receive and transmit information over the communications connections 170 .
- the computing environment 100 may be an electrical controller that is directly connected to various resources, such as HVAC resources, and which has CPU 110 , a GPU 115 , memory 120 , input devices 150 , communication connections 170 , and/or other features shown in the computing environment 100 .
- the computing environment 100 may be a series of distributed computers. These distributed computers may comprise a series of connected electrical controllers.
- data produced from any of the disclosed methods can be created, updated, or stored on tangible computer-readable media (e.g., one or more CDs, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives)) using a variety of different data structures or formats.
- Such data can be created or updated at a local computer or over a network (e.g., by a server computer), or stored and accessed in a cloud computing environment.
- FIG. 2 depicts a distributed computing system 200 with which embodiments disclosed herein may be implemented.
- Two or more computerized controllers 205 may incorporate all or part of a computing environment 100 , 210 . These computerized controllers 205 may be connected 215 to each other using wired or wireless connections.
- the controllers may be within a controlled space 220 .
- a controlled space 220 may be a space that has a resource, sensor, or other equipment that can modify or determine one or more states of the space, such as a sensor (to determine space state), a heater or an air conditioner (to modify temperature), a speaker (to modify noise), locks, lights, etc.
- a controlled space may be divided into zones, which might have a sensor, or no sensor.
- Controlled spaces might be, e.g., an automated building, a process control system, an HVAC system, an energy system, an irrigation system, a building irrigation system, etc.
- Computerized controllers 205 may comprise a distributed system that can run without using connections (such as internet connections) outside of the computing system 200 itself. This allows the system to run with low latency, and with other benefits of edge computing systems. The system may also run without access to an outside internet connection, which may make the system much less vulnerable to outside security threats.
- FIG. 2A depicts a brief overview of an exemplary room-sensor-neural network system 200 A that can be used to perform sensor fusion.
- the system 200 A is not intended to suggest any limitation as to scope of use or functionality of the disclosure, as the present disclosure may be implemented in many different systems.
- there are three rooms (Room 1 205 A, Room 2 210 A, and Room 3 215 A).
- Two of the rooms (Room 1 205 A and Room 2 210 A) have state sensors (sensor 1 220 A and sensor 2 225 A), while Room 3 215 A does not.
- the neural network will fuse the sensor values from sensor 1 220 A and sensor 2 225 A to determine a probable sensor value for Room 3 215 A.
- the neural network 230 A among other neurons, has a neuron 235 A that represents sensor 1 220 A (and may be considered associated with sensor 1 in that the Sensor 1 220 A value is represented in the neural network by neuron 235 A) and a neuron 240 A that represents sensor 2 225 A. These may be called test neurons.
- the neural network 230 A also has a neuron 245 A that represents the sensor value for the nonexistent sensor in Room 3 215 A. This may be called a target neuron.
- the values for sensor 1 220 A and sensor 2 225 A may be collected for a period of time.
- the neural network 230 A may then be solved for the collected values of sensor 1 220 A and sensor 2 225 A at neurons 235 A and 240 A (the test neurons).
- the value in neuron 245 A (the target neuron) may be considered the first pass at a state value in Room 3 215 A.
- the value in the neuron 245 A would be a temperature value.
- the solved known state values are then compared to the actual sensor values to determine how good the test values are.
- the degree each test neuron was used to determine the value of the target neuron is determined.
- a combination of the test values, how close the test values are to the actual values, and the percent of the test values used to determine the target value are used to determine a final, fused, test value.
- FIG. 3 depicts an exemplary system 300 for fusing sensor data.
- the system may include at least one processor 310 residing within a controller 307 , which may be part of a computerized controller system 200 .
- the controller 307 may be in a controlled space 305 .
- Memory 312 may comprise a neural network 315 .
- the neural network may reside partially in memory.
- the neural network may thermodynamically model a controlled space, e.g., 220 . It may represent the controlled space 220 as a single space, or may break the controlled space up into different zones, which thermodynamically affect each other.
- the neural network 315 may comprise target neurons 320 and at least one test neuron 325 that may represent individual material layers of a physical space and how they change state, e.g., their resistance, capacitance, and/or other values that describe how state flows through the section of the controlled space 220 that is being modeled.
- other neural structures are used.
- structure models other than neural networks are used.
- There may also be a target neuron value 330 and a test neuron value 335 that represent state stored in the target neuron 320 and/or test neuron 325 .
- An initializer 340 may be included that initializes a neural network. Such an initializer is described in patent application Ser. No. 17/308,294, filed May 5, 2021, and incorporated by reference in its entirety.
- a neural network optimizer 345 may be included that optimizes the neural network using known sensor values and test values for a cost function to produce modeled test values.
- a determiner 355 determines quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual values, which may be known sensor values.
- a machine learner 360 uses machine learning techniques to calculate connection strengths between the target neurons and the at least one test neuron.
- a function calculator 365 calculates accuracy of the target neuron using quality of the test neuron values, and connection strengths between the target neuron and at least one test neuron. Known sensor values may be used in the function calculator to calculate the accuracy of the test values.
- FIG. 4 depicts an exemplary method 400 for initializing neural networks.
- the operations of method 400 presented below are intended to be illustrative. In some embodiments, method 400 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 400 are illustrated in FIG. 4 and described below is not intended to be limiting.
- method 400 may be implemented in one or more processing devices, such as shown with reference to FIGS. 1 and 2 .
- the one or more processing devices may include one or more devices executing some or all of the operations of method 400 in response to instructions stored electronically on an electronic storage medium.
- the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 400 .
- the neural network may reside partially in memory.
- the neural network may thermodynamically model a controlled space, e.g., 220 . It may represent the controlled space 220 as a single space, or may break the controlled space up into different zones, which thermodynamically affect each other.
- the neural network 315 may comprise neurons 320 that represent individual material layers of a physical space and how they change state, e.g., their resistance, capacitance, and/or other values that describe how state flows through the section of the controlled space 220 that is being modeled.
- neurons 320 (which may represent material layers) are formed into parallel and branchless neural network strings that propagate heat (and/or other state values) through them.
- other neural structures are used.
- structure models other than neural networks are used. More information on neural networks can be found with reference to patent app. Ser. No. 17/143,796, filed on Jan. 7, 2021, and hereby incorporated by reference in its entirety.
- values are initialized for at least some test neurons in a neural network representing corresponding actual values 305 .
- This may entail gathering known sensor data.
- This sensor data (e.g., from sensor 1 220 A and sensor 2 225 A) may be from locations in a controlled space (e.g., Room 1 205 A and Room 2 210 A). More information is given with reference to FIG. 11 .
- a target neuron is specified. This may be a neuron that represents a building zone that does not have a sensor in it, whose value we are looking for, as shown in FIG. 2 A at 215 A.
- the target neuron is the neuron 245 A in the neural network 230 A that represents the location (e.g., Room 3 215 A) where information is not known.
- a neural network is run using known sensor values at test neurons for a cost function to produce modeled test neuron values and a modeled target neuron value.
- This comprises running the neural net and propagating the test neuron state (e.g., temperature, humidity, etc.) to neurons in the network.
- the neural network cost function measures the difference between the known sensor data and the simulated sensor data at the test neuron locations.
- the target neuron and the test neurons (those neurons that represent zones of the building being modeled that have actual sensors) all have simulated values.
- Even though it is called a cost function here, the term is intended to be synonymous with “error function” and “loss function.”
- the cost function produces a cost, which may be a value, a vector, or something else, that shows the difference between the target neuron values and the sensor values.
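The cost function described above can be illustrated with a short sketch. This is not the patent's implementation; it assumes a simple mean-squared-error form, and the function and variable names are chosen for illustration only. The sample values are the modeled and known readings discussed with reference to FIGS. 5 and 6.

```python
# Hypothetical sketch of the cost ("error"/"loss") function: the mean
# squared difference between known sensor readings and the simulated
# values at the test-neuron locations.

def cost(known_sensor_values, simulated_test_values):
    """Return a scalar cost: mean squared difference between known
    sensor readings and modeled test-neuron values."""
    if len(known_sensor_values) != len(simulated_test_values):
        raise ValueError("one simulated value is expected per sensor")
    squared_errors = [
        (known - simulated) ** 2
        for known, simulated in zip(known_sensor_values, simulated_test_values)
    ]
    return sum(squared_errors) / len(squared_errors)

# Known readings vs. modeled values (figures from FIGS. 5 and 6):
print(cost([21.84, 25.25, 30.45, 19.57], [21.0, 25.0, 29.0, 19.0]))  # ~0.799
```

A vector-valued cost (one error per test neuron) would work equally well here; the scalar mean is simply the most common convention.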
- FIG. 5 discloses a simple thermodynamic model with neurons 500 that are trained for data fusion.
- Network neurons 530 (checked) have been connected to test neurons that represent building sensors (with values 21.0 505 , 25.0 510 , 29.0 515 , and 19.0 520 ).
- the target neuron 525 has been given the value 26.0 by the neural network model.
- modeled test values are compared to known sensor values, to determine quality of test neuron values.
- the Quality of Data is assessed for each connected virtual node relative to each sensor that is connected to that node. For example, if a corresponding deep physics node has a computed error of 0.9%, then its QoD is assigned as 99.1%, i.e., 1 minus the computed error.
- the known sensor values are 21.84 (for neuron 505 ), 25.25 (for neuron 510 ), 30.45 (for neuron 515 ), and 19.57 (for neuron 520 ).
- Neuron 505 has a value that is 96% accurate, as shown at 605 , when compared to the actual sensor value.
- Neuron 510 has an accuracy of 99% 610
- neuron 515 has an accuracy of 95%, as shown at 615 , and neuron 520 has an accuracy of 97%, as shown at 620 .
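The accuracy figures above can be reproduced with a small sketch. The patent does not spell out the exact error formula, so this sketch assumes QoD is 1 minus the relative error between a modeled test-neuron value and its known sensor value, expressed as a percentage; that assumption happens to match the 96/99/95/97 figures from FIG. 6.

```python
# A sketch, assuming quality of data (QoD) = 100 * (1 - relative error),
# where relative error is |modeled - known| / known. The value pairing
# follows FIGS. 5 and 6; all names are illustrative.

def quality_of_data(modeled, known):
    """Return QoD as a percentage: 100 * (1 - relative error)."""
    relative_error = abs(modeled - known) / known
    return 100.0 * (1.0 - relative_error)

modeled = [21.0, 25.0, 29.0, 19.0]      # neuron values from FIG. 5
known = [21.84, 25.25, 30.45, 19.57]    # known sensor values

for m, k in zip(modeled, known):
    print(round(quality_of_data(m, k)))  # prints 96, 99, 95, 97
```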
- connection strengths of each test value relative to the target neuron are calculated; that is, how much of each test value was used to determine the target value. This may be calculated by using the component vector gradient from the test nodes to the target node, by using the weights (e.g., the cumulative weights) from the test nodes to the target node, etc. These connection strengths may also be determined by backpropagating from the target neuron to the test neurons.
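One of the options above, cumulative weights, can be sketched as follows. This is only an illustration of the idea: the weight chains are hypothetical, and a real implementation would more likely read gradients out of an automatic-differentiation framework rather than multiply weights by hand.

```python
# An illustrative sketch of estimating connection strengths from
# cumulative weights: multiply the weights along the path from each
# test neuron to the target neuron, then normalize so the strengths
# sum to 1. All weight values here are hypothetical.

def connection_strengths(path_weights):
    """path_weights: one list of path weights per test neuron.
    Returns connection strengths normalized to sum to 1."""
    raw = []
    for weights in path_weights:
        product = 1.0
        for w in weights:
            product *= w
        raw.append(abs(product))
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical weight chains from four test neurons to the target:
strengths = connection_strengths([[0.9, 0.5], [0.6, 0.4], [0.8, 0.7], [0.5, 0.3]])
print([round(s, 3) for s in strengths])  # -> [0.321, 0.171, 0.4, 0.107]
```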
- FIG. 7 at 700 shows a neural network with component contribution 705 of the sensors relative to the target data fusion node 710 . The computed errors are subtracted from 100% to give the Quality of Data (QoD) values.
- the fused node 710 QoD value may be computed by matrix multiplication of the list of sensor-originating nodes' QoD values by each of these component weights. This allows the target neuron to calculate what percentage of its data values were derived from the test sensors.
- FIG. 8 at 800 discloses a neural net matrix with the computed value for the quality of data of the test node. Those percentages after subtraction from 100% are 95%, 96%, 99%, and 97%.
- accuracy of the target neuron is calculated using the quality of the test neuron values, and connection strengths between the target neuron and the test neurons. Once the connection strengths are known, the target neuron accuracy can be determined by matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons. Here, matrix multiplication works just like the dot product.
- FIG. 9 at 900 discloses an example of such a method to determine target neuron accuracy.
- the connection strengths are listed in the table as vector components 910 .
- the accuracy of each test neuron 905 is matrix multiplied by the connection strength 910 of the test neurons, giving a result 915 for each test neuron, which are summed together to give the final value of 96.3, which is the target neuron accuracy, as seen in FIG. 8 at 805 .
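The dot-product computation described above can be sketched briefly. The QoD values are those from FIG. 6; the connection strengths below are hypothetical stand-ins (the actual component weights of FIG. 9 are not reproduced here), so the result differs slightly from the 96.3 in the figure.

```python
# A sketch of the target-neuron accuracy calculation: each test
# neuron's QoD is multiplied by that neuron's connection strength and
# the products are summed (a dot product). Strengths are hypothetical.

def target_accuracy(qod_values, strengths):
    """Dot product of test-neuron QoD values and connection strengths."""
    return sum(q * s for q, s in zip(qod_values, strengths))

qod = [96.0, 99.0, 95.0, 97.0]     # test-neuron accuracies (percent), per FIG. 6
strengths = [0.2, 0.2, 0.4, 0.2]   # hypothetical component weights summing to 1

print(round(target_accuracy(qod, strengths), 1))  # -> 96.4 with these weights
```

Because the strengths sum to 1, the result is a weighted average of the test-neuron accuracies, which is why the fused accuracy always lands between the best and worst test-neuron QoD.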
- FIG. 10 is a block diagram 1000 showing types of neural networks 1005 with which described embodiments can be implemented.
- a heterogenous neural network 1010 may be used as the neural network described herein.
- the neural network 1005 may be composed of neurons that model thermodynamic characteristics of a controlled space 220 , 305 using state transfer nodes with physics equations in activation functions that determine how state transfers between and/or through the various structures and pieces of structures (e.g., windows, floors, ceilings, air) and values that specify parameters for specific structures; e.g., an inner wall with no insulation will behave differently than an outer wall with considerable insulation.
- State enters one or more neurons and then propagates throughout the neuron structure.
- the neurons are branchless and parallel, allowing fast processing on vector processing machines. Automatic differentiation may be used to calculate backward propagation through the neural network.
- Heterogenous neural networks are described in U.S. Utility patent application Ser. No. 17/143,796, filed on Jan. 7, 2021, and hereby incorporated by reference in its entirety
- FIG. 11 is a diagram 1100 showing collecting sensor data and running a neural network with which described embodiments can be implemented.
- a controlled space 1120 may have a sensor in it 1125
- the controlled space may be subject to weather (or other state) 1105 for a time period t(n) to t(0) 1115 , which may be saved as state time series values (temperature over time, for example).
- the sensor 1125 may have data 1130 gathered for the same time period 1115 .
- the state curve 1110 may be used as input into a neural network 315 .
- a sensor value collected during this time period 1115 (such as the last value at time t(0)) may be used as the target neuron value 330 .
- FIG. 12 at 1200 discloses a diagram that illustrates a time series that can be used to warm up the neural network by pre-running the neural network using successively larger portions of an input wave form until a goal state is reached.
- the idea of warming up a neural network is, generally, that the neural network has neurons that represent various bits of a controlled space, such as walls and rooms, or bigger or smaller bits. These neurons may start out with values that do not represent actual values in a structure, such as all the temperature values (or values that are used to derive temperatures) being at 0. This does not represent the actual values in the controlled space 1120 that is being modeled. To bring the neural network up to a reasonable state value, state values may be propagated forward through the network.
- neural network 315 may represent some controlled space 1120 .
- This controlled space 1120 may have a sensor, e.g., 1125 , that records state of the space 1120 .
- State that affects the space 1120 , such as a weather value 1105 , may be used as input. A weather value may be a state value that can be derived from the weather affecting a controlled space, such as temperature, humidity, wind speed, cloudiness, dew point, etc.
- the neural network may be run with this state data 1115 as input to give the neural network interior values reasonable starting values before the neural network is run to determine sensor fusion values to train the neural network.
- When a variable in the neural network representing the controlled space 1120 with the sensor 1125 matches the sensor data (or hits a threshold, comes within a certain value of a threshold value, etc.), the neural network may be considered to be warmed up.
- a threshold may be the magnitude or intensity that must be exceeded for a certain reaction, phenomenon, result, or condition to occur or be manifested, as it is commonly defined.
- exemplary time series data is shown, with the timesteps running from t(n) 1235 to t(0) 1205 .
- a set of the time series data may be chosen (e.g., from k(index) to 0).
- the time series data may be divided into x sections, each section with some number of timesteps. In some embodiments, each section may have the same number, e.g., k, timesteps 1210 .
- the data runs from a value within the time series to the last value taken, t(0) 1205 . In some embodiments, the data may have a different ending point, or run in a different direction.
- the first time a neural network is run, the time series data may be run from k 1220 to 0 1210 . If a goal state is not reached, the second time the neural network is run, it may be run from k(2) 1225 to 0 1215 , up to k(x) 1230 . In some embodiments, there may be a variable number of timesteps per section.
- the chosen time series data is propagated through the neural network 315 . This may be done using a neural network optimizer 345 or through a different method. Then, the value of a neuron variable may be determined. It then may be determined if the goal state has been reached.
- the goal state may comprise the neuron variable value reaching a threshold value or similar, an internal neuron value reaching a state (such as temperature) within some range of a sensor 1125 in a controlled space, an index value being greater than x, reaching the limit of the time series data, e.g., 1230 , reaching a neural network running time limit, or reaching an error state.
- If the goal state has been reached, in some embodiments, the program stops and the neural network may be considered trained. If the goal state has not been reached, then another set of time series data may be chosen (e.g., k(index+1) 1220 ), and the process continues.
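The warm-up loop above can be sketched in a few lines. This is an illustration under assumed names, not the patent's implementation: the network is pre-run on successively larger trailing portions of the input time series (k, 2k, 3k, … timesteps, each window ending at t(0)) until a monitored neuron value comes within a threshold of the sensor reading, or the data runs out.

```python
# A sketch of the warm-up procedure, with hypothetical names.
# run_network(window) stands in for propagating a window of time-series
# state data through the neural network and reading back the value of
# the neuron that corresponds to the sensor location.

def warm_up(run_network, series, sensor_value, k, threshold):
    """Pre-run the network on successively larger trailing windows of
    `series` until the monitored value is within `threshold` of the
    sensor reading. Returns (warmed_up, last_value)."""
    value = None
    x = 1
    while x * k <= len(series):
        window = series[-(x * k):]           # last x*k timesteps, ending at t(0)
        value = run_network(window)
        if abs(value - sensor_value) <= threshold:
            return True, value               # goal state reached: warmed up
        x += 1
    return False, value                      # ran out of data before the goal

# Toy stand-in model: the monitored value is just the mean of the window.
series = [10, 12, 14, 16, 18, 20, 22, 24]
warmed, value = warm_up(lambda w: sum(w) / len(w), series,
                        sensor_value=20.0, k=2, threshold=1.5)
print(warmed, value)  # -> True 21.0
```

Other goal states from the text (a running-time limit, an index exceeding x, an error state) would slot into the same loop as additional exit conditions.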
- some embodiments include a configured computer-readable storage medium 165 .
- Medium 165 may include disks (magnetic, optical, or otherwise), RAM, EEPROMS or other ROMs, and/or other configurable memory, including computer-readable media (not directed to a manufactured transient phenomenon, such as an electrical, optical, or acoustical signal).
- the storage medium which is configured may be a removable storage medium 165 such as a CD, DVD, or flash memory.
- a general-purpose memory (which may be primary, such as RAM, ROM, CMOS, or flash; or may be secondary, such as a CD, a hard drive, an optical disk, or a removable flash drive), can be configured into an embodiment using the computing environment 100 , the computerized controller 205 , or the controller 307 , or any combination of the above, in the form of data 180 and instructions 175 , read from a source, such as a removable medium output device 155 , to form a configured medium with data and instructions which upon execution by a processor perform a method for computing neuron accuracy.
- the configured medium 165 is capable of causing a computer system to perform actions as related herein.
- Some embodiments provide or utilize a computer-readable storage medium 165 configured with software 185 which upon execution by at least a central processing unit 110 performs methods and systems described herein.
Abstract
An unknown state value, represented by a neuron value in a neural network, is determined in one embodiment by using the difference between known values and model output at equivalent model locations. The accuracy of model-produced values is determined by comparing them to the known values. How much the known model-produced locations were used to determine the unknown state value is also determined. These amounts and the accuracy of the model-produced values are used to determine the accuracy of the model-produced value of the unknown state value.
Description
- The present application hereby incorporates by reference the entirety of, and claims priority to, U.S. provisional patent application Ser. No. 62/704,976, filed Jun. 5, 2020.
- The present disclosure relates to neural network methods for describing system topologies. More specifically the present disclosure relates to determining unknown values in a model neuron, determining where in the model those neuron values were derived from, and determining the quality of data within the model representation compared to a physical location.
- Data fusion is combining disparate data sets to pull the magic trick of seeming to get more information out than was put in. More specifically, it entails combining data from different sources and analyzing it such that the different data sets and data views allow one to more fully understand what is being observed than any single data set allows.
- Building models often have sparse data sets; there are only so many sensors in a building, and large spaces generally only measure temperature (or other state values) near walls. This leads to buildings whose heating and cooling are very difficult to control, as large portions of the building do not have simple ways of measuring state values, and if they are not measured, it is very difficult to alter them. Who has not been in an office where the thermostat is next door? No matter the temperature of your office, since it is not measured with a sensor, you are at the mercy of the person in the next office over, and their desired temperature setting.
- This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description section. This summary does not identify required or essential features of the claimed subject matter.
- In embodiments, a method for computing neuron accuracy implemented by one or more computers is disclosed, comprising: running a neural network with test neurons and a target neuron using known sensor values at test neurons for a cost function to produce modeled test neuron values and a modeled value of the target neuron; comparing modeled test values to known sensor values, to determine quality of test neuron values; calculating connection strengths of each test value relative to the target neuron; and calculating accuracy of the target neuron using: quality of the test neuron values, and connection strengths between the target neuron and the test neurons.
- In embodiments, running the neural network comprises using state time series values as input into the neural network for a running period.
- In embodiments, the state time series values are weather values affecting a controlled space.
- In embodiments, the cost function compares the known sensor values to the modeled test values.
- In embodiments, calculating connection strength comprises using automatic differentiated vector gradients.
- In embodiments, calculating accuracy of the target neuron comprises matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons.
- In embodiments, using machine learning techniques to determine connection strengths between the target neuron and the test neurons comprises using automatic differentiation to backpropagate from the target neuron to the test neurons.
- In embodiments, the neural network is a heterogenous neural network.
- In embodiments, at least one test neuron has an accuracy and an associated sensor, and where the test neuron accuracy relates to accuracy of the associated sensor.
- In embodiments, the neural network has internal values, and further comprising warming up the neural network using at least a portion of an initial state time series values to modify the neural network internal values.
- In embodiments, the neural network is warmed up by pre-running the neural network using successively larger portions of an input wave form until a goal state is reached.
- In embodiments, the neural network models a controlled system, and wherein the controlled system comprises a controlled building system, a process control system, an HVAC system, an energy system, or an irrigation system.
- In embodiments, a system for computing neuron accuracy is disclosed, comprising: a processor; a memory in operational communication with the processor; a neural network which resides at least partially in the memory, the neural network comprising test neurons with test values and at least one target neuron with a target neuron value; a neural network optimizer that optimizes the neural network using known sensor values and test values for a cost function to produce a solved neural network with modeled test values; a determiner that determines quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual values; a machine learner that uses machine learning techniques to calculate connection strengths between the test neurons and the at least one target neuron; and a function calculator that calculates accuracy of the at least one target neuron value using: quality of the test neuron values, and connection strengths between the test neurons and the at least one target neuron.
- In embodiments, the function calculator comprises matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons.
- In embodiments, at least one corresponding actual value comprises a sensor state value.
- In embodiments, the sensor state value is derived from a sensor in a controlled space.
- In embodiments, an initializer is disclosed, which uses state time series values as input into the neural network for a running period.
- In embodiments, at least one of the machine learning techniques uses automatic differentiation to calculate connection strengths.
- In embodiments, a computer-readable storage medium configured with data and instructions is disclosed, which upon execution by a processor perform a method for computing neuron accuracy, the method comprising: initializing values for at least some test neurons in a neural network, the test neurons representing corresponding actual values; specifying a target neuron in the neural network; optimizing the neural network using the actual values producing a solved neural network with a target neuron value and test neuron values; using machine learning techniques to determine connection strengths between the target neuron and the test neurons; determining quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual neuron values; and calculating accuracy of the target neuron using: quality of the test neuron values, and connection strengths between the target neuron and the at least one test neuron.
- In embodiments, the corresponding actual values are sensor values that correspond to test neuron locations.
- Additional features and advantages will become apparent from the following detailed description of illustrated embodiments, which proceeds with reference to accompanying drawings.
- Non-limiting and non-exhaustive embodiments of the present embodiments are described with reference to the following FIGURES, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
FIG. 1 depicts a computing system in accordance with one or more embodiments.
- FIG. 2 depicts a distributed computing system in accordance with one or more embodiments.
- FIG. 2A depicts an exemplary system configured to determine quality of data using sensor fusion in accordance with one or more embodiments.
- FIG. 3 depicts an exemplary system configured to determine quality of data using sensor fusion in accordance with one or more embodiments.
- FIG. 4 is a functional block diagram that illustrates an exemplary compute function with which described embodiments can be implemented.
- FIG. 5 is a diagram showing an exemplary sensor fusion and quality of data neural network system in conjunction with which described embodiments can be implemented.
- FIG. 6 is a diagram showing an exemplary neural network sensor fusion and quality of data system with computed neurons in conjunction with which described embodiments can be implemented.
- FIG. 7 is a diagram showing an exemplary neural network sensor fusion and quality of data system with component vector propagation in conjunction with which described embodiments can be implemented.
- FIG. 8 is a diagram showing an exemplary neural network sensor fusion and quality of data system with fused data computation in conjunction with which described embodiments can be implemented.
- FIG. 9 is a table showing an exemplary quality of data computation method in conjunction with which described embodiments can be implemented.
- FIG. 10 is a block diagram showing types of neural networks with which described embodiments can be implemented.
- FIG. 11 is a diagram showing data streams with which described embodiments can be implemented.
- FIG. 12 is a diagram showing exemplary time series data with which described embodiments can be implemented.
- Corresponding reference characters indicate corresponding components throughout the several views of the drawings. Skilled artisans will appreciate that elements in the FIGURES are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments.
- Disclosed below are representative embodiments of methods, computer-readable media, and systems having particular applicability to systems and methods for building neural networks that describe physical structures. Described embodiments implement one or more of the described technologies.
- In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present embodiments. It will be apparent, however, to one having ordinary skill in the art that the specific detail need not be employed to practice the present embodiments. In other instances, well-known materials or methods have not been described in detail in order to avoid obscuring the present embodiments.
- Reference throughout this specification to “one embodiment”, “an embodiment”, “one example”, or “an example” means that a particular feature, structure or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present embodiments. Thus, appearances of the phrases “in one embodiment”, “in an embodiment”, “one example” or “an example” in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures or characteristics may be combined in any suitable combinations and/or sub-combinations in one or more embodiments or examples.
- Embodiments in accordance with the present embodiments may be implemented as an apparatus, method, or computer program product. Accordingly, the present embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects. Furthermore, the present embodiments may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
- Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. Computer program code for carrying out operations of the present embodiments may be written in any combination of one or more programming languages.
- Embodiments may be implemented in edge computing environments where the computing is done within a network which, in some implementations, may not be connected to an outside internet, although the edge computing environment may be connected with an internal internet. In these implementations the space is much safer and much easier to secure from ransomware attacks and the like. This internet may be wired, wireless, or a combination of both. Embodiments may also be implemented in cloud computing environments. A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service (“SaaS”), Platform as a Service (“PaaS”), Infrastructure as a Service (“IaaS”)), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).
- The flowchart and block diagrams in the flow diagrams illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, may be implemented by general or special purpose hardware-based systems that perform the specified functions or acts, or combinations of general and special purpose hardware and computer instructions. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, article, or apparatus.
- Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
- Additionally, any examples or illustrations given herein are not to be regarded in any way as restrictions on, limits to, or express definitions of any term or terms with which they are utilized. Instead, these examples or illustrations are to be regarded as being described with respect to one particular embodiment and as being illustrative only. Those of ordinary skill in the art will appreciate that any term or terms with which these examples or illustrations are utilized will encompass other embodiments which may or may not be given therewith or elsewhere in the specification and all such embodiments are intended to be included within the scope of that term or terms. Language designating such non-limiting examples and illustrations includes, but is not limited to: “for example,” “for instance,” “e.g.,” and “in one embodiment.”
- Various alternatives to the implementations described herein are possible. For example, embodiments described with reference to flowchart diagrams can be altered, such as, for example, by changing the ordering of stages shown in the flowcharts, or by repeating or omitting certain stages.
- Deep physics networks are structured similarly, but not identically, to neural networks. Unlike the homogeneous activation functions of neural nets, each neuron comprises unique physical characteristics representing functions in a thermodynamic system. Once configured, known sensor values are fed into their corresponding neurons in the network. Once the network is trained, any location in the thermodynamic system can be introspected to extract fused data. The process provides powerful generalized data fusion, data synthesis, and quality assessment through inference even where no sensors exist—for any thermodynamic system. The same mechanism enables model optimization, and the time series can then be used for real-time sequence generation and fault detection.
- In an exemplary environment, a neuron model system comprises heterogeneous neural networks with activation functions comprising neurons that represent individual material layers of a building and various values, such as their resistance and capacitance. These neurons are formed into parallel and branchless neural network strings that propagate heat (or other state values) through them.
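The parallel, branchless strings described above can be pictured as a simple resistor-capacitor ladder. The sketch below is illustrative only: the specification describes per-layer resistance and capacitance values but gives no update equations, so the explicit timestep rule here, and every name in it, is an assumption rather than the patented method.

```python
def propagate_string(temps, resistances, capacitances, t_outside, t_inside, dt):
    # One explicit timestep of heat flow through a branchless string of
    # "neurons", each a material layer with thermal resistance R and
    # capacitance C. A standard RC-network step, used only to illustrate
    # the layer-neuron idea.
    boundary = [t_outside] + temps + [t_inside]
    new_temps = []
    for i, (r, c) in enumerate(zip(resistances, capacitances)):
        left, here, right = boundary[i], boundary[i + 1], boundary[i + 2]
        flow = (left - here) / r + (right - here) / r  # net heat flow in
        new_temps.append(here + dt * flow / c)
    return new_temps

# Toy three-layer wall between 0.0 (outside) and 20.0 (inside):
temps = [10.0, 10.0, 10.0]
for _ in range(100):
    temps = propagate_string(temps, [1.0] * 3, [1.0] * 3, 0.0, 20.0, 0.1)
print([round(t, 1) for t in temps])  # settles to the linear profile [5.0, 10.0, 15.0]
```

With uniform layers the string relaxes to a linear temperature profile between the two boundaries, which is the steady state one would expect of heat conduction through a homogeneous wall.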
FIG. 1 at 100 shows an exemplary embodiment overview that can be used to create and discretize such a neuron model system. This neuron model system may be described as a thermodynamic model. In a building or in a model of a building, a missing sensor value can be determined, within an error range. Existing sensors can be checked to determine how accurate they are, within an error range. Areas that do not have specific sensors can have their sensor values determined within an error range. This makes modifying state much easier, as changes in the system can be checked, and it can thus be determined whether such changes had the desired effect. This can also be instrumental during commissioning, as a much more thorough state of a controlled space can be determined. Other benefits will be obvious to those of skill in the art. -
FIG. 1 illustrates a generalized example of a suitable computing environment 100 in which described embodiments may be implemented. The computing environment 100 is not intended to suggest any limitation as to scope of use or functionality of the disclosure, as the present disclosure may be implemented in diverse general-purpose or special-purpose computing environments. - With reference to
FIG. 1, the core processing is indicated by the core processing unit 130. The computing environment 100 includes at least one central processing unit 110 and memory 120. The central processing unit 110 executes computer-executable instructions and may be a real or a virtual processor. It may also comprise a vector processor 112, which allows same-length neuron strings to be processed rapidly. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power, and as such the vector processor 112, GPU 115, and CPU can be running simultaneously. The memory 120 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 120 stores software 185 implementing the described methods and systems of sensor fusion quality of data determination. - A computing environment may have additional features. For example, the
computing environment 100 includes storage 140, one or more input devices 150, one or more output devices 155, one or more network connections (e.g., wired, wireless, etc.) 160, as well as other communication connections 170. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 100, and coordinates activities of the components of the computing environment 100. The computing system may also be distributed, running portions of the software 185 on different CPUs. - The
storage 140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, flash drives, or any other medium which can be used to store information and which can be accessed within the computing environment 100. The storage 140 stores instructions for the software, such as software 185 to implement methods of sensor fusion utilizing neural networks. - The input device(s) 150 may be a device that allows a user or another device to communicate with the
computing environment 100, such as a touch input device (e.g., a keyboard, video camera, microphone, mouse, pen, or trackball), a scanning device, a touchscreen, or another device that provides input to the computing environment 100. For audio, the input device(s) 150 may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s) 155 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 100. - The communication connection(s) 170 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed graphics information, or other data in a modulated data signal.
Communication connections 170 may comprise input devices 150, output devices 155, and input/output devices that allow a client device to communicate with another device over network 160. A communication device may include one or more wireless transceivers for performing wireless communication and/or one or more communication ports for performing wired communication. These connections may include network connections, which may be a wired or wireless network such as the Internet, an intranet, a LAN, a WAN, a cellular network, or another type of network. It will be understood that network 160 may be a combination of multiple different kinds of wired or wireless networks. The network 160 may be a distributed network, with multiple computers, which might be building controllers, acting in tandem. A computing connection 170 may be a portable communications device such as a wireless handheld device, a cell phone device, and so on. - Computer-readable media are any available non-transient tangible media that can be accessed within a computing environment. By way of example, and not limitation, with the
computing environment 100, computer-readable media include memory 120, storage 140, communication media, and combinations of any of the above. Computer-readable storage media 165, which may be used to store computer-readable media, comprise instructions 175 and data 180. Data sources may be computing devices, such as general hardware platform servers configured to receive and transmit information over the communications connections 170. The computing environment 100 may be an electrical controller that is directly connected to various resources, such as HVAC resources, and which has CPU 110, a GPU 115, memory 120, input devices 150, communication connections 170, and/or other features shown in the computing environment 100. The computing environment 100 may be a series of distributed computers. These distributed computers may comprise a series of connected electrical controllers. - Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially can be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods, apparatus, and systems can be used in conjunction with other methods, apparatus, and systems. Additionally, the description sometimes uses terms like “determine,” “build,” and “identify” to describe the disclosed technology. These terms are high-level abstractions of the actual operations that are performed. The actual operations that correspond to these terms will vary depending on the particular implementation and are readily discernible by one of ordinary skill in the art.
- Further, data produced from any of the disclosed methods can be created, updated, or stored on tangible computer-readable media (e.g., one or more CDs, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives)) using a variety of different data structures or formats. Such data can be created or updated at a local computer or over a network (e.g., by a server computer), or stored and accessed in a cloud computing environment.
-
FIG. 2 depicts a distributed computing system 200 with which embodiments disclosed herein may be implemented. Two or more computerized controllers 205 may incorporate all or part of a computing environment 100, 210. These computerized controllers 205 may be connected 215 to each other using wired or wireless connections. The controllers may be within a controlled space 220. A controlled space 220 may be a space that has a resource, sensor, or other equipment that can modify or determine one or more states of the space, such as a sensor (to determine space state), a heater or an air conditioner (to modify temperature), a speaker (to modify noise), locks, lights, etc. A controlled space may be divided into zones, each of which might have a sensor or no sensor. Controlled spaces might be, e.g., an automated building, a process control system, an HVAC system, an energy system, an irrigation system, a building-irrigation system, etc. Computerized controllers 205 may comprise a distributed system that can run without using connections (such as internet connections) outside of the computing system 200 itself. This allows the system to run with low latency, and with other benefits of edge computing systems. The system may also run without access to an outside internet connection, which may make the system much less vulnerable to outside security threats. -
FIG. 2A depicts a brief overview of an exemplary room-sensor-neural-network system 200A that can be used to perform sensor fusion. The system 200A is not intended to suggest any limitation as to scope of use or functionality of the disclosure, as the present disclosure may be implemented in many different systems. In this simplified environment, there are three rooms (Room 1 205A, Room 2 210A, and Room 3 215A). Two of the rooms (Room 1 205A and Room 2 210A) have state sensors (sensor 1 220A and sensor 2 225A), while Room 3 215A does not. The neural network will fuse the sensor values from sensor 1 220A and sensor 2 225A to determine a probable sensor value for Room 3 215A. The neural network 230A, among other neurons, has a neuron 235A that represents sensor 1 220A (and may be considered associated with sensor 1 in that the sensor 1 220A value is represented in the neural network by neuron 235A) and a neuron 240A that represents sensor 2 225A. These may be called test neurons. The neural network 230A also has a neuron 245A that represents the sensor value for the nonexistent sensor in Room 3 215A. This may be called a target neuron. The values for sensor 1 220A and sensor 2 225A may be collected for a period of time. The neural network 230A may then be solved for the collected values of sensor 1 220A and sensor 2 225A at the test neurons. The resulting value of neuron 245A (the target neuron) may be considered the first pass at a state value in Room 3 215A. For example, if the known sensors were temperature sensors, the value in the neuron 245A would be a temperature value. The solved known state values are then compared to the actual sensor values to determine how good the test values are. The degree to which each test neuron was used to determine the value of the target neuron is determined. Then, a combination of the test values, how close the test values are to the actual values, and the percent of the test values used to determine the target value are used to determine a final, fused, test value. -
FIG. 3 depicts an exemplary system 300 for fusing sensor data. The system may include at least one processor 310 residing within a controller 307, which may be part of a computerized controller system 200. The controller 307 may be in a controlled space 305. Memory 312 may comprise a neural network 315. In some embodiments, the neural network may reside partially in memory. In some embodiments, the neural network may thermodynamically model a controlled space, e.g., 220. This neural network may thermodynamically represent the controlled space in some way. It may represent the controlled space 220 as a single space, or may break the controlled space up into different zones, which thermodynamically affect each other. The neural network 315 may comprise target neurons 320 and at least one test neuron 325 that may represent individual material layers of a physical space and how they change state, e.g., their resistance, capacitance, and/or other values that describe how state flows through the section of the controlled space 220 that is being modeled. In some embodiments, other neural structures are used. In some embodiments, structure models other than neural networks are used. There may also be a target neuron value 330 and a test neuron value 335 that represent some state stored in the target 330 and/or test neuron 335. - An
initializer 340 may be included that initializes a neural network. Such an initializer is described in patent application Ser. No. 17/308,294, filed May 5, 2021, and incorporated by reference in its entirety. A neural network optimizer 345 may be included that optimizes the neural network using known sensor values and test values for a cost function to produce modeled test values. A determiner 355 determines quality of the test neuron values by comparing test neuron values in the solved trained neural network to corresponding actual values, which may be known sensor values. A machine learner 360 uses machine learning techniques to calculate connection strengths between the target neurons and the at least one test neuron. A function calculator 365 calculates accuracy of the target neuron using quality of the test neuron values and connection strengths between the target neuron and the at least one test neuron. Known sensor values may be used in the function calculator to calculate the accuracy of the test values. -
FIG. 4 depicts an exemplary method 400 for initializing neural networks. The operations of method 400 presented below are intended to be illustrative. In some embodiments, method 400 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 400 are illustrated in FIG. 4 and described below is not intended to be limiting. - In some embodiments,
method 400 may be implemented in one or more processing devices, such as shown with reference to FIGS. 1 and 2. The one or more processing devices may include one or more devices executing some or all of the operations of method 400 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 400. - In some embodiments, the neural network may reside partially in memory. In some embodiments, the neural network may thermodynamically model a controlled space, e.g., 220. It may represent the controlled
space 220 as a single space, or may break the controlled space up into different zones, which thermodynamically affect each other. The neural network 315 may comprise neurons 320 that represent individual material layers of a physical space and how they change state, e.g., their resistance, capacitance, and/or other values that describe how state flows through the section of the controlled space 220 that is being modeled. In some neural networks 315, neurons 320 (which may represent material layers) are formed into parallel and branchless neural network strings that propagate heat (and/or other state values) through them. In some embodiments, other neural structures are used. In some embodiments, structure models other than neural networks are used. More information on neural networks can be found with reference to patent app. Ser. No. 17/143,796, filed on Jan. 7, 2021, and hereby incorporated by reference in its entirety. - At
operation 405, values are initialized for at least some test neurons in a neural network representing corresponding actual values 305. This may entail gathering known sensor data. This sensor data (e.g., sensor 1 220A and sensor 2 225A) may be from locations in a controlled space (e.g., Room 1 205A and Room 2 210A). More information is given with reference to FIG. 11. - At
operation 410, a target neuron is specified. This may be a neuron that represents a building zone that does not have a sensor in it, whose value we are looking for, as shown in FIG. 2A at 215A. Here, the target neuron is the neuron 245A in the neural network 230A that represents the location (e.g., Room 3 215A) where information is not known. - At
operation 415, a neural network is run using known sensor values at test neurons for a cost function to produce modeled test neuron values and a modeled target neuron value. This comprises running the neural net and propagating the test neuron state (e.g., temperature, humidity, etc.) to neurons in the network. The neural network cost function measures the difference between the known sensor data and the simulated sensor data at the test neuron locations. After the model has run to a state with sufficient accuracy (e.g., the cost function is at a desired value indicating that the neural network is producing results where the test neurons are within a certain value of the known sensor values), the target neuron and the test neurons (those neurons that represent zones of the building being modeled with actual sensors) all have simulated values. Even though it is called a cost function here, it is intended to be synonymous with “error function” and “loss function.” The cost function produces a cost, which may be a value, a vector, or something else, that shows the difference between the target neuron values and the sensor values. -
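The cost function of operation 415 can be sketched briefly. The specification requires only that the cost measure the difference between known and simulated sensor data at the test neuron locations; mean squared error is one common choice, so the function name and error metric here are assumptions.

```python
def cost(modeled_test_values, known_sensor_values):
    # Mean squared error between simulated test neuron values and the
    # known sensor readings at the same locations.
    diffs = [(m - k) ** 2 for m, k in zip(modeled_test_values, known_sensor_values)]
    return sum(diffs) / len(diffs)

# Modeled values from FIG. 5 against the assumed sensor readings of FIG. 6:
modeled = [21.0, 25.0, 29.0, 19.0]
actual = [21.84, 25.25, 30.45, 19.57]
print(cost(modeled, actual))
```

Optimization proceeds until this value falls below the desired threshold, at which point the test neurons are "within a certain value" of the known sensor readings as the text describes.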
FIG. 5 discloses a simple thermodynamic model with neurons 500 that are trained for data fusion. Network neurons 530 (checked) have been connected to test neurons that represent building sensors (with values 21.0 505, 25.0 510, 29.0 515, and 19.0 520). The target neuron 525 has been given the value 26.0 by the neural network model. - At
operation 420, modeled test values are compared to known sensor values to determine quality of test neuron values. Once the network has been trained, producing fused data, the Quality of Data is assessed for each connected virtual node relative to each sensor that is connected to that node. The result is as follows: if a corresponding deep physics node has a computed error of 0.9%, then its QoD is assigned as 99.1%, that is, 1 minus the computed error. To describe in more detail, and with reference to FIG. 5, let us assume the known sensor values are 21.84 (for neuron 505), 25.25 (for neuron 510), 30.45 (for neuron 515), and 19.57 (for neuron 520). FIG. 6 at 600 discloses a computation of the quality of the simulated data, by percent, relative to the neuron's sensor points. Neuron value 505 is 96% accurate 605, when compared to the actual sensor that it is being compared to. Neuron 510 has an accuracy of 99% 610, neuron 515 of 95% as shown at 615, and neuron 520 has an accuracy of 97% as shown at 620. - At
operation 425, connection strengths of each test value relative to the target neuron are calculated; that is, how much of the test value was used to determine the target value. This may be calculated by using the component vector gradient from the test nodes to the target node, by using the weights (e.g., the cumulative weights) from the test nodes to the target node, etc. These connection strengths may also be determined by backpropagating from the target neuron to the test neurons. FIG. 7 at 700 shows a neural network with component contribution 705 of the sensors relative to the target data fusion node 710. The connection strengths found are subtracted from 100% to give the Quality of Data (QoD) values. Those of skill in the art will be aware of other methods as well. The fused node 710 QoD value may be computed by matrix multiplication of the list of sensor-originating nodes' QoD values by each of these component weights. This allows the target neuron to calculate which percentage of its data values were derived from the test sensors. FIG. 8 at 800 discloses a neural net matrix with the computed value for the quality of data of the test node. Those percentages after subtraction from 100% are 95%, 96%, 99%, and 97%. - At
operation 430, accuracy of the target neuron is calculated using the quality of the test neuron values and connection strengths between the target neuron and the test neurons. Once the connection strengths are known, the target neuron accuracy can be determined by matrix multiplying the quality of test neuron values by the connection strengths between the target neuron and the test neurons. Here, the matrix multiplication works just like the dot product. -
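The per-sensor quality assessment of operation 420 above can be sketched as QoD = 1 minus the computed error. The specification states only that a 0.9% error yields a 99.1% QoD; taking the error to be the relative error against the sensor reading is an assumption made for this illustration.

```python
def quality_of_data(modeled, actual):
    # Per-sensor Quality of Data: 1 minus the computed error, with the
    # error taken (as an assumption) to be the relative error against
    # the actual sensor reading.
    return [1.0 - abs(m - a) / abs(a) for m, a in zip(modeled, actual)]

modeled = [21.0, 25.0, 29.0, 19.0]     # simulated test neuron values (FIG. 5)
actual = [21.84, 25.25, 30.45, 19.57]  # assumed sensor readings (FIG. 6)
print([round(q, 2) for q in quality_of_data(modeled, actual)])
# → [0.96, 0.99, 0.95, 0.97], matching the 96%, 99%, 95%, and 97% of FIG. 6
```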
FIG. 9 at 900 discloses an example of such a method to determine target neuron accuracy. The connection strengths are listed in the table as vector components 910. The accuracy of each test neuron 905 is matrix multiplied by the connection strength 910 of the test neurons, giving a result 915 for each test neuron; these are summed together to give the final value of 96.3, which is the target neuron accuracy, as seen in FIG. 8 at 805. -
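The fusion step of FIG. 9 reduces to a dot product. The connection strengths below are hypothetical (the table's actual vector components are not reproduced in this text); they were chosen so that the example lands on the 96.3 figure shown at 805.

```python
def target_accuracy(test_qod, connection_strengths):
    # Matrix-multiply (here, a dot product) the test neuron QoD values
    # by their connection strengths to get the target neuron accuracy.
    return sum(q * w for q, w in zip(test_qod, connection_strengths))

qod = [0.95, 0.96, 0.99, 0.97]        # test neuron accuracies from FIG. 9
strengths = [0.45, 0.20, 0.20, 0.15]  # hypothetical vector components; sum to 1
print(round(target_accuracy(qod, strengths) * 100, 1))  # → 96.3
```

Because the strengths sum to 1, the fused accuracy is simply a weighted average of the per-sensor QoD values, which is why it falls between the smallest and largest of them.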
FIG. 10 is a block diagram 1000 showing types of neural networks 1005 with which described embodiments can be implemented. In some implementations a heterogeneous neural network 1010 may be used as the neural network described herein. The neural network 1005 may be composed of neurons that model thermodynamic characteristics of a controlled space. -
FIG. 11 is a diagram 1100 showing collecting sensor data and running a neural network with which described embodiments can be implemented. A controlled space 1120 may have a sensor 1125 in it. The controlled space may be subject to weather (or other state) 1105 for a time period t(n) to t(0) 1115, which may be saved as state time series values (temperature, for example, over time). The sensor 1125 may have data 1130 gathered for the same time period. The state curve 1110 may be used as input into a neural network 315. A sensor value collected during this time (such as the last value at time t(0)) may be used as the target neuron value 330. -
FIG. 12 at 1200 discloses a diagram that illustrates a time series that can be used to warm up the neural network by pre-running the neural network using successively larger portions of an input waveform until a goal state is reached. The idea of warming up a neural network is, generally, that the neural network has neurons that represent various bits of a controlled space, such as walls and rooms, or bigger or smaller bits. These neurons may start out with values that do not represent actual values in a structure, such as all the temperature values (or values that are used to derive temperatures) being at 0. This does not represent the actual values in the controlled space 1120 that is being modeled. To bring the neural network up to a reasonable state value, state values may be propagated forward through the network. - In more detail,
neural network 315 may represent some controlled space 1120. This controlled space 1120 may have a sensor, e.g., 1125, that records state of the space 1120. State that affects the space 1120, such as a weather value 1105, may be gathered (e.g., from t(n) to t(0) 1115), producing a state curve, during the same time that data is being collected from a sensor 1125. A weather value may be a state value that can be derived from the weather affecting a controlled space, such as temperature, humidity, wind speed, cloudiness, dew point, etc. The neural network may be run with this state data 1115 as input to give the neural network interior values reasonable starting values before the neural network is run to determine sensor fusion values to train the neural network. When a variable in the neural network representing the controlled space 1120 with the sensor 1125 matches the sensor data (or hits a threshold, or comes within a certain value of a threshold value, etc.), at t(0), the neural network may be considered to be warmed up. A threshold may be the magnitude or intensity that must be exceeded for a certain reaction, phenomenon, result, or condition to occur or be manifested, as it is commonly defined. - At 1200, exemplary time series data is shown, with the timesteps running from t(n) 1235 to t(0) 1205. Initially, a set of the time series data may be chosen (e.g., from k(index) to 0). The time series data may be divided into x sections, each section with some number of timesteps. In some embodiments, each section may have the same number, e.g., k,
timesteps 1210. In some embodiments, the data runs from a value within the time series to the last value taken, t(0) 1205. In some embodiments, the data may have a different ending point, or may run in a different direction. The first time a neural network is run, the time series data may be run from k 1220 to 0 1210. If a goal state is not reached, the second time the neural network is run, it may be run from k(2) 1225 to 0 1215, and so on, up to k(x) 1230. In some embodiments, there may be a variable number of timesteps per section. - The chosen time series data is propagated through the
neural network 315. This may be done using a neural network optimizer 345 or through a different method. Then, the value of a neuron variable may be determined. It then may be determined whether the goal state has been reached. The goal state may comprise the neuron variable value reaching a threshold value or similar, an internal neuron value reaching a state (such as temperature) within some range of a sensor 1125 in a controlled space, an index value being greater than x, reaching the limit of the time series data, e.g., 1230, reaching a neural network running time limit, or reaching an error state. - If the goal state has been reached, in some embodiments, the program stops and the neural network may be considered trained. If the goal state has not been reached, then another set of time series data may be chosen (e.g., k(index+1) 1220), and the process continues.
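The warm-up loop described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the `run_network` callable, the `tolerance` goal test, and the toy stand-in "network" are all hypothetical placeholders for the neural network 315, the optimizer 345, and the sensor comparison.

```python
import numpy as np

def warm_up(run_network, state_series, sensor_value, k, x, tolerance=0.5):
    """Pre-run a network on successively larger tails of a state time series
    until the modeled neuron value comes within `tolerance` of a sensor
    reading (the goal state), or x sections have been used (the limit).

    run_network:  callable taking a 1-D array of state values (oldest first)
                  and returning the modeled value of the watched neuron.
    state_series: state values ordered from t(n) to t(0) (oldest to newest).
    k:            timesteps per section; x: maximum number of sections.
    """
    for index in range(1, x + 1):
        tail = state_series[-index * k:]      # k(index) timesteps up to t(0)
        modeled = run_network(tail)
        if abs(modeled - sensor_value) <= tolerance:
            return modeled, index             # goal state reached
    return modeled, x                         # limit of the time series data

# Toy stand-in "network": the mean of the state values it has seen so far.
series = np.linspace(10.0, 20.0, 100)         # e.g., outdoor temperatures
value, sections = warm_up(lambda s: float(np.mean(s)), series,
                          sensor_value=19.0, k=10, x=10)
```

Each pass re-runs the network from an earlier starting point (k, then k(2), and so on), matching the successively larger input portions of FIG. 12.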
- With reference to
FIGS. 1, 2 and 3, some embodiments include a configured computer-readable storage medium 165. Medium 165 may include disks (magnetic, optical, or otherwise), RAM, EEPROMs or other ROMs, and/or other configurable memory, including computer-readable media (not directed to a manufactured transient phenomenon, such as an electrical, optical, or acoustical signal). The storage medium which is configured may be a removable storage medium 165 such as a CD, DVD, or flash memory. A general-purpose memory (which may be primary, such as RAM, ROM, CMOS, or flash; or may be secondary, such as a CD, a hard drive, an optical disk, or a removable flash drive) can be configured into an embodiment using the computing environment 100, the computerized controller 205, or the controller 307, or any combination of the above, in the form of data 180 and instructions 175, read from a source, such as a removable medium output device 155, to form a configured medium with data and instructions which upon execution by a processor perform a method for computing neuron accuracy. The configured medium 165 is capable of causing a computer system to perform actions as related herein. - Some embodiments provide or utilize a computer-readable storage medium 165 configured with software 185 which upon execution by at least a
central processing unit 110 performs the methods and implements the systems described herein. - In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims.
Claims (20)
1. A method for computing neuron accuracy implemented by one or more computers comprising:
running a neural network with test neurons and a target neuron using known sensor values at test neurons for a cost function to produce modeled test neuron values and a modeled value of the target neuron;
comparing modeled test values to known sensor values, to determine quality of test neuron values;
calculating connection strengths of each test value relative to the target neuron; and
calculating accuracy of the target neuron using:
quality of the test neuron values, and
connection strengths between the target neuron and the test neurons.
2. The method of claim 1 , wherein running the neural network comprises using state time series values as input into the neural network for a running period.
3. The method of claim 2 , wherein the state time series values are weather values affecting a controlled space.
4. The method of claim 3 , wherein the cost function compares the known sensor values to the modeled test values.
5. The method of claim 4 , wherein calculating connection strength comprises using automatic differentiated vector gradients.
6. The method of claim 5 , wherein calculating accuracy of the target neuron comprises matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons.
7. The method of claim 1 , wherein determining connection strengths between the target neuron and the test neurons comprises using machine learning techniques with automatic differentiation to backpropagate from the target neuron to the test neurons.
8. The method of claim 1 , wherein the neural network is a heterogeneous neural network.
9. The method of claim 1 , wherein at least one test neuron has an accuracy and an associated sensor, and where the test neuron accuracy relates to accuracy of the associated sensor.
10. The method of claim 1 , wherein the neural network has internal values, and further comprising warming up the neural network using at least a portion of an initial state time series values to modify the neural network internal values.
11. The method of claim 10 , further comprising warming up the the neural network by pre-running the neural network using successively larger portions of an input wave form until a goal state is reached.
12. The method of claim 1 , wherein the neural network models a controlled system, and wherein the controlled system comprises a controlled building system, a process control system, an HVAC system, an energy system, or an irrigation system.
13. A system for computing neuron accuracy comprising: a processor; a memory in operational communication with the processor;
a neural network which resides at least partially in the memory, the neural network comprising test neurons with test values and at least one target neuron with a target neuron value;
a neural network optimizer that optimizes the neural network using known sensor values and test values for a cost function to produce a solved neural network with modeled test values;
a determiner that determines quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual values;
a machine learner that uses machine learning techniques to calculate connection strengths between the test neurons and the at least one target neuron; and
a function calculator that calculates accuracy of the at least one target neuron value using:
quality of the test neuron values, and
connection strengths between the test neurons and the at least one target neuron.
14. The system of claim 13 , wherein the function calculator comprises matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons.
15. The system of claim 13 , wherein at least one corresponding actual value comprises a sensor state value.
16. The system of claim 15 , wherein the sensor state value is derived from a sensor in a controlled space.
17. The system of claim 16 , further comprising an initializer, which uses state time series values as input into the neural network for a running period.
18. The system of claim 17 , wherein at least one of the machine learning techniques uses automatic differentiation to calculate connection strengths.
19. A computer-readable storage medium configured with data and instructions which upon execution by a processor perform a method for computing neuron accuracy, the method comprising: initializing values for at least some test neurons in a neural network, the test neurons representing corresponding actual values;
specifying a target neuron in the neural network;
optimizing the neural network using the actual values producing a solved neural network with a target neuron value and test neuron values;
using machine learning techniques to determine connection strengths between the target neuron and the test neurons;
determining quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual neuron values; and
calculating accuracy of the target neuron using:
quality of the test neuron values, and
connection strengths between the target neuron and the test neurons.
20. The computer-readable storage medium of claim 19 , wherein the corresponding actual values are sensor values that correspond to test neuron locations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/336,640 US20210383236A1 (en) | 2020-06-05 | 2021-06-02 | Sensor Fusion Quality Of Data Determination |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202062704976P | 2020-06-05 | 2020-06-05 | |
US17/336,640 US20210383236A1 (en) | 2020-06-05 | 2021-06-02 | Sensor Fusion Quality Of Data Determination |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210383236A1 true US20210383236A1 (en) | 2021-12-09 |
Family
ID=78817218
Family Applications (10)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/009,713 Pending US20210383200A1 (en) | 2020-06-05 | 2020-09-01 | Neural Network Methods for Defining System Topology |
US17/177,391 Pending US20210381712A1 (en) | 2020-06-05 | 2021-02-17 | Determining demand curves from comfort curves |
US17/177,285 Pending US20210383235A1 (en) | 2020-06-05 | 2021-02-17 | Neural networks with subdomain training |
US17/193,179 Active 2041-05-05 US11861502B2 (en) | 2020-06-05 | 2021-03-05 | Control sequence generation system and methods |
US17/208,036 Pending US20210383041A1 (en) | 2020-06-05 | 2021-03-22 | In-situ thermodynamic model training |
US17/228,119 Active 2041-11-11 US11915142B2 (en) | 2020-06-05 | 2021-04-12 | Creating equipment control sequences from constraint data |
US17/308,294 Pending US20210383219A1 (en) | 2020-06-05 | 2021-05-05 | Neural Network Initialization |
US17/336,779 Abandoned US20210381711A1 (en) | 2020-06-05 | 2021-06-02 | Traveling Comfort Information |
US17/336,640 Pending US20210383236A1 (en) | 2020-06-05 | 2021-06-02 | Sensor Fusion Quality Of Data Determination |
US18/467,627 Pending US20240005168A1 (en) | 2020-06-05 | 2023-09-14 | Control sequence generation system and methods |
Country Status (1)
Country | Link |
---|---|
US (10) | US20210383200A1 (en) |
Cited By (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220376944A1 (en) | 2019-12-31 | 2022-11-24 | Johnson Controls Tyco IP Holdings LLP | Building data platform with graph based capabilities |
US11699903B2 (en) | 2017-06-07 | 2023-07-11 | Johnson Controls Tyco IP Holdings LLP | Building energy optimization system with economic load demand response (ELDR) optimization and ELDR user interfaces |
US11704311B2 (en) | 2021-11-24 | 2023-07-18 | Johnson Controls Tyco IP Holdings LLP | Building data platform with a distributed digital twin |
US11709965B2 (en) | 2017-09-27 | 2023-07-25 | Johnson Controls Technology Company | Building system with smart entity personal identifying information (PII) masking |
US11714930B2 (en) | 2021-11-29 | 2023-08-01 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin based inferences and predictions for a graphical building model |
US11726632B2 (en) | 2017-07-27 | 2023-08-15 | Johnson Controls Technology Company | Building management system with global rule library and crowdsourcing framework |
US11727738B2 (en) | 2017-11-22 | 2023-08-15 | Johnson Controls Tyco IP Holdings LLP | Building campus with integrated smart environment |
US11735021B2 (en) | 2017-09-27 | 2023-08-22 | Johnson Controls Tyco IP Holdings LLP | Building risk analysis system with risk decay |
US11733663B2 (en) | 2017-07-21 | 2023-08-22 | Johnson Controls Tyco IP Holdings LLP | Building management system with dynamic work order generation with adaptive diagnostic task details |
US11741165B2 (en) | 2020-09-30 | 2023-08-29 | Johnson Controls Tyco IP Holdings LLP | Building management system with semantic model integration |
US11754982B2 (en) | 2012-08-27 | 2023-09-12 | Johnson Controls Tyco IP Holdings LLP | Syntax translation from first syntax to second syntax based on string analysis |
US11755604B2 (en) | 2017-02-10 | 2023-09-12 | Johnson Controls Technology Company | Building management system with declarative views of timeseries data |
US11762343B2 (en) | 2019-01-28 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Building management system with hybrid edge-cloud processing |
US11762362B2 (en) | 2017-03-24 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Building management system with dynamic channel communication |
US11763266B2 (en) | 2019-01-18 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Smart parking lot system |
US11764991B2 (en) | 2017-02-10 | 2023-09-19 | Johnson Controls Technology Company | Building management system with identity management |
US11762886B2 (en) | 2017-02-10 | 2023-09-19 | Johnson Controls Technology Company | Building system with entity graph commands |
US11762351B2 (en) | 2017-11-15 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Building management system with point virtualization for online meters |
US11762356B2 (en) | 2017-09-27 | 2023-09-19 | Johnson Controls Technology Company | Building management system with integration of data into smart entities |
US11761653B2 (en) | 2017-05-10 | 2023-09-19 | Johnson Controls Tyco IP Holdings LLP | Building management system with a distributed blockchain database |
US11769066B2 (en) | 2021-11-17 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin triggers and actions |
US11768004B2 (en) | 2016-03-31 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | HVAC device registration in a distributed building management system |
US11768826B2 (en) | 2017-09-27 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Web services for creation and maintenance of smart entities for connected devices |
US11770020B2 (en) | 2016-01-22 | 2023-09-26 | Johnson Controls Technology Company | Building system with timeseries synchronization |
US11774930B2 (en) | 2017-02-10 | 2023-10-03 | Johnson Controls Technology Company | Building system with digital twin based agent processing |
US11774922B2 (en) | 2017-06-15 | 2023-10-03 | Johnson Controls Technology Company | Building management system with artificial intelligence for unified agent based control of building subsystems |
US11778030B2 (en) | 2017-02-10 | 2023-10-03 | Johnson Controls Technology Company | Building smart entity system with agent based communication and control |
US11774920B2 (en) | 2016-05-04 | 2023-10-03 | Johnson Controls Technology Company | Building system with user presentation composition based on building context |
US11782407B2 (en) | 2017-11-15 | 2023-10-10 | Johnson Controls Tyco IP Holdings LLP | Building management system with optimized processing of building system data |
US11792039B2 (en) | 2017-02-10 | 2023-10-17 | Johnson Controls Technology Company | Building management system with space graphs including software components |
US11796974B2 (en) | 2021-11-16 | 2023-10-24 | Johnson Controls Tyco IP Holdings LLP | Building data platform with schema extensibility for properties and tags of a digital twin |
US11874809B2 (en) | 2020-06-08 | 2024-01-16 | Johnson Controls Tyco IP Holdings LLP | Building system with naming schema encoding entity type and entity relationships |
US11874635B2 (en) | 2015-10-21 | 2024-01-16 | Johnson Controls Technology Company | Building automation system with integrated building information model |
US11880677B2 (en) | 2020-04-06 | 2024-01-23 | Johnson Controls Tyco IP Holdings LLP | Building system with digital network twin |
US11894944B2 (en) | 2019-12-31 | 2024-02-06 | Johnson Controls Tyco IP Holdings LLP | Building data platform with an enrichment loop |
US11892180B2 (en) | 2017-01-06 | 2024-02-06 | Johnson Controls Tyco IP Holdings LLP | HVAC system with automated device pairing |
US11902375B2 (en) | 2020-10-30 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Systems and methods of configuring a building management system |
US11900287B2 (en) | 2017-05-25 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Model predictive maintenance system with budgetary constraints |
US11899723B2 (en) | 2021-06-22 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Building data platform with context based twin function processing |
US11921481B2 (en) | 2021-03-17 | 2024-03-05 | Johnson Controls Tyco IP Holdings LLP | Systems and methods for determining equipment energy waste |
US11920810B2 (en) | 2017-07-17 | 2024-03-05 | Johnson Controls Technology Company | Systems and methods for agent based building simulation for optimal control |
US11927925B2 (en) | 2018-11-19 | 2024-03-12 | Johnson Controls Tyco IP Holdings LLP | Building system with a time correlated reliability data stream |
US11934966B2 (en) | 2021-11-17 | 2024-03-19 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin inferences |
US11941238B2 (en) | 2018-10-30 | 2024-03-26 | Johnson Controls Technology Company | Systems and methods for entity visualization and management with an entity node editor |
US11947785B2 (en) | 2016-01-22 | 2024-04-02 | Johnson Controls Technology Company | Building system with a building graph |
US11954478B2 (en) | 2017-04-21 | 2024-04-09 | Tyco Fire & Security Gmbh | Building management system with cloud management of gateway configurations |
US11954154B2 (en) | 2020-09-30 | 2024-04-09 | Johnson Controls Tyco IP Holdings LLP | Building management system with semantic model integration |
US11954713B2 (en) | 2018-03-13 | 2024-04-09 | Johnson Controls Tyco IP Holdings LLP | Variable refrigerant flow system with electricity consumption apportionment |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11553618B2 (en) | 2020-08-26 | 2023-01-10 | PassiveLogic, Inc. | Methods and systems of building automation state load and user preference via network systems activity |
US11644212B2 (en) * | 2020-11-12 | 2023-05-09 | International Business Machines Corporation | Monitoring and optimizing HVAC system |
US20230214555A1 (en) * | 2021-12-30 | 2023-07-06 | PassiveLogic, Inc. | Simulation Training |
Family Cites Families (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0480654B1 (en) * | 1990-10-10 | 1998-03-04 | Honeywell Inc. | Process system identification |
US5224648A (en) | 1992-03-27 | 1993-07-06 | American Standard Inc. | Two-way wireless HVAC system and thermostat |
JPH07200512A (en) | 1993-09-13 | 1995-08-04 | Ezel Inc | 1optimization problems solving device |
US6119125A (en) | 1998-04-03 | 2000-09-12 | Johnson Controls Technology Company | Software components for a building automation system based on a standard object superclass |
IL134943A0 (en) * | 2000-03-08 | 2001-05-20 | Better T V Technologies Ltd | Method for personalizing information and services from various media sources |
EP1354446B1 (en) | 2001-01-12 | 2006-04-12 | Novar Marketing Inc. | Small building automation control system |
US7756804B2 (en) * | 2002-05-10 | 2010-07-13 | Oracle International Corporation | Automated model building and evaluation for data mining system |
US6967565B2 (en) | 2003-06-27 | 2005-11-22 | Hx Lifespace, Inc. | Building automation system |
US7447664B2 (en) | 2003-08-28 | 2008-11-04 | Boeing Co | Neural network predictive control cost function designer |
US7620613B1 (en) * | 2006-07-28 | 2009-11-17 | Hewlett-Packard Development Company, L.P. | Thermal management of data centers |
US20080082183A1 (en) | 2006-09-29 | 2008-04-03 | Johnson Controls Technology Company | Building automation system with automated component selection for minimum energy consumption |
US20080277486A1 (en) | 2007-05-09 | 2008-11-13 | Johnson Controls Technology Company | HVAC control system and method |
US20100025483A1 (en) | 2008-07-31 | 2010-02-04 | Michael Hoeynck | Sensor-Based Occupancy and Behavior Prediction Method for Intelligently Controlling Energy Consumption Within a Building |
US9020647B2 (en) | 2009-03-27 | 2015-04-28 | Siemens Industry, Inc. | System and method for climate control set-point optimization based on individual comfort |
US9258201B2 (en) | 2010-02-23 | 2016-02-09 | Trane International Inc. | Active device management for use in a building automation system |
US8626700B1 (en) * | 2010-04-30 | 2014-01-07 | The Intellisis Corporation | Context aware device execution for simulating neural networks in compute unified device architecture |
US9664400B2 (en) | 2011-11-17 | 2017-05-30 | Trustees Of Boston University | Automated technique of measuring room air change rates in HVAC system |
US9557750B2 (en) | 2012-05-15 | 2017-01-31 | Daikin Applied Americas Inc. | Cloud based building automation systems |
US9791872B2 (en) | 2013-03-14 | 2017-10-17 | Pelco, Inc. | Method and apparatus for an energy saving heating, ventilation, and air conditioning (HVAC) control system |
US9910449B2 (en) * | 2013-04-19 | 2018-03-06 | Google Llc | Generating and implementing thermodynamic models of a structure |
US9298197B2 (en) | 2013-04-19 | 2016-03-29 | Google Inc. | Automated adjustment of an HVAC schedule for resource conservation |
US10222277B2 (en) * | 2013-12-08 | 2019-03-05 | Google Llc | Methods and systems for generating virtual smart-meter data |
US9857238B2 (en) | 2014-04-18 | 2018-01-02 | Google Inc. | Thermodynamic model generation and implementation using observed HVAC and/or enclosure characteristics |
US9092741B1 (en) | 2014-04-21 | 2015-07-28 | Amber Flux Private Limited | Cognitive platform and method for energy management for enterprises |
US9869484B2 (en) * | 2015-01-14 | 2018-01-16 | Google Inc. | Predictively controlling an environmental control system |
US10094586B2 (en) | 2015-04-20 | 2018-10-09 | Green Power Labs Inc. | Predictive building control system and method for optimizing energy use and thermal comfort for a building or network of buildings |
US9798336B2 (en) | 2015-04-23 | 2017-10-24 | Johnson Controls Technology Company | Building management system with linked thermodynamic models for HVAC equipment |
KR102042077B1 (en) | 2016-09-26 | 2019-11-07 | 주식회사 엘지화학 | Intelligent fuel cell system |
US10013644B2 (en) | 2016-11-08 | 2018-07-03 | International Business Machines Corporation | Statistical max pooling with deep learning |
WO2018106969A1 (en) * | 2016-12-09 | 2018-06-14 | Hsu Fu Chang | Three-dimensional neural network array |
US10571143B2 (en) | 2017-01-17 | 2020-02-25 | International Business Machines Corporation | Regulating environmental conditions within an event venue |
US10247438B2 (en) | 2017-03-20 | 2019-04-02 | International Business Machines Corporation | Cognitive climate control based on individual thermal-comfort-related data |
US11371739B2 (en) * | 2017-04-25 | 2022-06-28 | Johnson Controls Technology Company | Predictive building control system with neural network based comfort prediction |
US11209184B2 (en) | 2018-01-12 | 2021-12-28 | Johnson Controls Tyco IP Holdings LLP | Control system for central energy facility with distributed energy storage |
US10140544B1 (en) | 2018-04-02 | 2018-11-27 | 12 Sigma Technologies | Enhanced convolutional neural network for image segmentation |
KR102212663B1 (en) * | 2018-05-22 | 2021-02-05 | 주식회사 석영시스템즈 | An apparatus for hvac system input power control based on target temperature and method thereof |
US10845815B2 (en) | 2018-07-27 | 2020-11-24 | GM Global Technology Operations LLC | Systems, methods and controllers for an autonomous vehicle that implement autonomous driver agents and driving policy learners for generating and improving policies based on collective driving experiences of the autonomous driver agents |
KR102198817B1 (en) * | 2018-09-12 | 2021-01-05 | 주식회사 석영시스템즈 | A method for creating demand response determination model for hvac system and a method for demand response |
US10896679B1 (en) * | 2019-03-26 | 2021-01-19 | Amazon Technologies, Inc. | Ambient device state content display |
US20210182660A1 (en) | 2019-12-16 | 2021-06-17 | Soundhound, Inc. | Distributed training of neural network models |
US11525596B2 (en) * | 2019-12-23 | 2022-12-13 | Johnson Controls Tyco IP Holdings LLP | Methods and systems for training HVAC control using simulated and real experience data |
US11573540B2 (en) * | 2019-12-23 | 2023-02-07 | Johnson Controls Tyco IP Holdings LLP | Methods and systems for training HVAC control using surrogate model |
US11824680B2 (en) | 2019-12-31 | 2023-11-21 | Johnson Controls Tyco IP Holdings LLP | Building data platform with a tenant entitlement model |
US11968059B2 (en) | 2019-12-31 | 2024-04-23 | Johnson Controls Tyco IP Holdings LLP | Building data platform with graph based capabilities |
US11777758B2 (en) | 2019-12-31 | 2023-10-03 | Johnson Controls Tyco IP Holdings LLP | Building data platform with external twin synchronization |
US11777759B2 (en) | 2019-12-31 | 2023-10-03 | Johnson Controls Tyco IP Holdings LLP | Building data platform with graph based permissions |
US11770269B2 (en) | 2019-12-31 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Building data platform with event enrichment with contextual information |
US20220376944A1 (en) | 2019-12-31 | 2022-11-24 | Johnson Controls Tyco IP Holdings LLP | Building data platform with graph based capabilities |
US11880677B2 (en) | 2020-04-06 | 2024-01-23 | Johnson Controls Tyco IP Holdings LLP | Building system with digital network twin |
US11874809B2 (en) | 2020-06-08 | 2024-01-16 | Johnson Controls Tyco IP Holdings LLP | Building system with naming schema encoding entity type and entity relationships |
US11741165B2 (en) | 2020-09-30 | 2023-08-29 | Johnson Controls Tyco IP Holdings LLP | Building management system with semantic model integration |
US11954154B2 (en) | 2020-09-30 | 2024-04-09 | Johnson Controls Tyco IP Holdings LLP | Building management system with semantic model integration |
US11902375B2 (en) | 2020-10-30 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Systems and methods of configuring a building management system |
US11921481B2 (en) | 2021-03-17 | 2024-03-05 | Johnson Controls Tyco IP Holdings LLP | Systems and methods for determining equipment energy waste |
US11899723B2 (en) | 2021-06-22 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Building data platform with context based twin function processing |
US11796974B2 (en) | 2021-11-16 | 2023-10-24 | Johnson Controls Tyco IP Holdings LLP | Building data platform with schema extensibility for properties and tags of a digital twin |
US11934966B2 (en) | 2021-11-17 | 2024-03-19 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin inferences |
US11769066B2 (en) | 2021-11-17 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin triggers and actions |
US11704311B2 (en) | 2021-11-24 | 2023-07-18 | Johnson Controls Tyco IP Holdings LLP | Building data platform with a distributed digital twin |
US11714930B2 (en) | 2021-11-29 | 2023-08-01 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin based inferences and predictions for a graphical building model |
Also Published As
Publication number | Publication date |
---|---|
US20210381712A1 (en) | 2021-12-09 |
US20210383200A1 (en) | 2021-12-09 |
US20210383235A1 (en) | 2021-12-09 |
US20210382445A1 (en) | 2021-12-09 |
US20210383041A1 (en) | 2021-12-09 |
US20210383042A1 (en) | 2021-12-09 |
US11915142B2 (en) | 2024-02-27 |
US11861502B2 (en) | 2024-01-02 |
US20210381711A1 (en) | 2021-12-09 |
US20240005168A1 (en) | 2024-01-04 |
US20210383219A1 (en) | 2021-12-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210383236A1 (en) | | Sensor Fusion Quality Of Data Determination |
US20230214555A1 (en) | | Simulation Training |
CN109684714B (en) | | Building design method based on machine learning and BIM technology |
CN106326346A (en) | | Text classification method and terminal device |
Zhou et al. | | Comparison of different deep neural network architectures for isothermal indoor airflow prediction |
CN112270122A (en) | | Inversion evaluation method for fire source parameters of building fire |
Bagheri et al. | | Modeling of epistemic uncertainty in reliability analysis of structures using a robust genetic algorithm |
TWI761834B (en) | | Intelligent method for testing sensed data and system thereof |
Zhuang et al. | | Active-learning-based nonintrusive model order reduction |
JP2023544028A (en) | | Automatic generation of machine learning models from computational simulation data |
JP6895334B2 (en) | | Operation rule extraction device, operation rule extraction system and operation rule extraction method |
Ling et al. | | Predicting the temperature dynamics of scaled model and real-world IoT-enabled smart homes |
CN107622301A (en) | | Method for forecasting the number of vacant parking spaces in a parking lot |
Wang et al. | | Concrete compression test data estimation based on a wavelet neural network model |
CN114139937A (en) | | Indoor thermal comfort data generation method, system, equipment and medium |
Zhang et al. | | Intelligent fire location detection approach for extrawide immersed tunnels |
Wang et al. | | Neural network and PSO-based structural approximation analysis for blade of wind turbine |
US20230252205A1 (en) | | Simulation Warmup |
Fernandez et al. | | Parameter identification of a Round-Robin test box model using a deterministic and probabilistic methodology |
CN113254435B (en) | | Data enhancement method and system |
US20240095427A1 (en) | | Apparatus and method of synthetic data generation using environmental models |
CN112560362B (en) | | Aging diagnosis method and system for the water supply pipelines of old residential communities |
CN113722975B (en) | | Network model training method, room temperature prediction method, device, equipment and medium |
KR102076419B1 (en) | | Apparatus and method for estimating occupancy using IoT information |
Zhang et al. | | Intelligent HVAC System Control Method Based on Digital Twin |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: PASSIVELOGIC, INC., UTAH. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: HARVEY, TROY AARON; FILLINGIM, JEREMY DAVID; REEL/FRAME: 056414/0155. Effective date: 20210602 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |