EP4168903A1 - Spatial and context aware software applications using digital enclosures bound to physical spaces - Google Patents

Spatial and context aware software applications using digital enclosures bound to physical spaces

Info

Publication number
EP4168903A1
EP4168903A1 (application EP21850733.3A)
Authority
EP
European Patent Office
Prior art keywords
digital
enclosures
enclosure
physical
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21850733.3A
Other languages
German (de)
French (fr)
Inventor
Shaojie LIU
Xiao HAN
David Ku
Qi Lu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pine Field Holding Inc
Original Assignee
Pine Field Holding Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pine Field Holding Inc filed Critical Pine Field Holding Inc
Publication of EP4168903A1 publication Critical patent/EP4168903A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition

Definitions

  • This disclosure relates generally to systems and application development environments to enable, support and accelerate the development and execution of software applications that can perform tasks and services using the context of a physical environment.
  • Spatial applications are challenging to develop because significant effort is required to accurately model the physical environment and the context within it.
  • Spatial applications require a much richer and more expressive model of the spatial and semantic context in which they operate.
  • a common approach to developing context-aware applications is to create customized solutions that are tailored to a particular physical environment and application context. These custom solutions are designed to work for a specific set of sensors and hardware devices, configured and deployed in a particular way for a specific physical environment, and run a customized set of AI models and software modules over a particular topology of computing resources.
  • One downside of this approach is that the resulting application is brittle and hard to evolve, for example when the selection of hardware devices, AI models or application logic changes.
  • Another downside is that since the modelled context is internal to each solution and restricted to a narrow slice of the physical environment, interoperability across multiple custom solutions operating within the same physical environment is difficult due to the lack of shared and orchestrated context.
  • a system in one aspect, includes a plurality of software-defined digital enclosures (digital containers) that are bound to corresponding physical spaces.
  • a digital enclosure includes an enclosure context that captures the spatial, temporal and semantic states of the corresponding physical space and the dynamics thereof. The binding to the physical space updates the enclosure context based on sensor data captured by sensors that monitor the physical space.
  • the system also includes an application programming interface (API) that provides programmatic access to the digital enclosures to a plurality of software applications. The software applications access the digital enclosures via the API and utilize the enclosure contexts captured in the digital enclosures.
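As a concrete illustration, access through such an API might look like the following sketch. All names here (`EnclosureAPI`, `DigitalEnclosure`, `get_enclosure`) are hypothetical, since the disclosure does not fix a concrete interface.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class EnclosureContext:
    """Spatial, temporal and semantic state of a physical space."""
    entities: dict[str, dict[str, Any]] = field(default_factory=dict)

@dataclass
class DigitalEnclosure:
    enclosure_id: str
    context: EnclosureContext = field(default_factory=EnclosureContext)

class EnclosureAPI:
    """Hypothetical programmatic gateway to a set of digital enclosures."""
    def __init__(self) -> None:
        self._enclosures: dict[str, DigitalEnclosure] = {}

    def register(self, enclosure: DigitalEnclosure) -> None:
        self._enclosures[enclosure.enclosure_id] = enclosure

    def get_enclosure(self, enclosure_id: str) -> DigitalEnclosure:
        return self._enclosures[enclosure_id]

# A spatial application reads the enclosure context through the API.
api = EnclosureAPI()
ward = DigitalEnclosure("hospital-ward-3")
ward.context.entities["bed-12"] = {"type": "thing", "occupied": True}
api.register(ward)

occupied = api.get_enclosure("hospital-ward-3").context.entities["bed-12"]["occupied"]
```

The key point is that the application never models the physical space itself; it only queries context that the enclosure maintains.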
  • FIG. 1 shows an overall system view, with components organized in a layered architecture.
  • FIGS. 2 and 3 show the architecture and components of the Enclosure Operating System of FIG. 1.
  • FIG. 4 shows a Spatial Object Model (SOM) .
  • FIG. 5 shows examples of the Spatial Object Model to illustrate the structure of entities, entity classes, components, attributes and relations in the model.
  • FIG. 6 shows the architecture and components of a SOM Extension Framework.
  • FIG. 7 shows the architecture and components of a Spatial Application Programming Layer.
  • the underlying technology typically must first capture the who, what, when and where within a physical environment over time and then model these to develop the higher-level contexts. This involves in-depth knowledge of the spatial structure and temporal dynamics of the people, places, things, activities and relationships within the physical environment, and also the common semantics that are specific to the domain of the physical environment through which to make sense of the roles, intentions, tasks and outcomes based on the spatial and temporal context. For example, a person resting has different semantics and is interpreted differently in a factory setting versus a hospital setting. In order for a context-aware software application to operate at this level, the fact that a person in the physical environment is resting must be captured from the physical environment, but the semantics of that event for the specific domain of interest (factory versus hospital) must also be known.
  • Semantics typically describe what happens in a given physical context. For example, in a hospital setting, the people inside take on different roles: some are doctors, some are nurses, patients, family members, visitors, etc. There is equipment that performs various functions pertinent to the hospital setting, such as MRI machines, blood pressure monitors, wearables, carts and nursing stations. There are also events, actions and activities that take place inside hospitals, such as registering a new patient, a doctor's daily rounds, ICU trauma procedures, taking blood pressure, checking on a patient, or performing surgery.
  • a set of semantics that define what is happening inside a given physical environment may be overlaid in a way that captures the semantics of the dynamics of the physical environment.
  • domain semantics: hospitals, factories, schools, etc.
  • semantics may be supported in the following ways.
  • these semantics may be captured and represented in a spatial object model (as described below) , by extending and building upon the foundational abstractions of people, places, things and relations.
  • programmatic binding from low-level data and knowledge to these higher-level, semantic representations may be enabled through extensible software modules that plug into digital enclosures.
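Such a binding from low-level observations to domain semantics can be sketched minimally as a lookup keyed by domain, echoing the factory-versus-hospital example given earlier. The table contents and function name below are illustrative only.

```python
# Hypothetical semantic-binding module: a raw spatial/temporal event
# ("person resting") is interpreted differently per domain.
DOMAIN_SEMANTICS = {
    "factory": {"person_resting": "possible safety incident or unauthorized break"},
    "hospital": {"person_resting": "expected patient behavior"},
}

def interpret(event: str, domain: str) -> str:
    """Map a low-level event to its domain-specific meaning."""
    return DOMAIN_SEMANTICS[domain].get(event, "unclassified")

factory_meaning = interpret("person_resting", "factory")
hospital_meaning = interpret("person_resting", "hospital")
```

In a real system the mapping would be supplied by extensible software modules plugged into the digital enclosure rather than a static dictionary.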
  • a more systematic approach to developing context-aware applications for physical environments by refactoring the modeling of context from individual applications to form a shared model of the spatial, temporal and semantic context for a physical environment, can dramatically accelerate the pace of innovation and development for context-aware spatial applications.
  • the context for a physical space is represented by a digitized model of the spatial, temporal and semantic state of the physical space and its dynamics, inclusive of the physical objects (e.g., people, places, things) , attributes, actions and events within.
  • This context may form the core of a container, referred to as a digital enclosure, which is then a basic building block upon which context-aware software applications may be built. Because the context is contained in a digital enclosure, it may also be referred to as an enclosure context.
  • the digital enclosures are software-defined. As a result, real-world behavior or situations can be semantically defined and programmatically modelled and driven through software, which drives computation and potentially also hardware (controls, IoT devices, actuators) .
  • the digital enclosures are also bound to the corresponding physical spaces, meaning that the enclosure contexts are updated based on sensor data captured by sensors that monitor the physical spaces.
  • the updates may be real-time, so that the enclosure contexts provide a current description of the corresponding physical space.
  • the binding process may involve creating a digital model of the physical space (e.g., via 3D scanning or CAD mapping) , deploying sensors/IoT devices in the physical space, and connecting their digital inputs/outputs to the digital enclosure.
  • a digital enclosure may represent a real-time, connected digital replica of the real world within that space, against which developers can programmatically interact.
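The binding can be pictured as a feedback loop in which each sensor reading refreshes the digital replica. The class and method names below are illustrative assumptions, not from the disclosure.

```python
import time

class BoundEnclosure:
    """Sketch of a digital enclosure bound to a physical space: incoming
    sensor readings update the enclosure context so the digital replica
    stays current. All names are hypothetical."""
    def __init__(self) -> None:
        self.context: dict[str, dict] = {}

    def on_sensor_data(self, entity_id: str, reading: dict) -> None:
        # Merge the new reading into the entity's state and timestamp it.
        state = self.context.setdefault(entity_id, {})
        state.update(reading)
        state["last_updated"] = time.time()

enclosure = BoundEnclosure()
# Two readings from a (simulated) temperature sensor monitoring the space;
# the context always reflects the most recent one.
enclosure.on_sensor_data("room-temp", {"celsius": 21.5})
enclosure.on_sensor_data("room-temp", {"celsius": 22.0})
```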
  • the contexts within a digital enclosure may be represented using a Spatial Object Model (SOM) .
  • the SOM may be created through a digitization platform (e.g., digitizers or digitization modules described below) using spatially calibrated hardware devices, AI models and software modules deployed into the physical environment and/or accessible by the digital enclosure.
  • the SOM may be extended with domain-specific knowledge and semantics that model the roles, tasks, activities and outcomes that are appropriate for the physical environment.
  • Spatial applications then build on the SOM to deliver context-aware services and experiences to multiple application target endpoints, from mobile-based end-user experiences over apps, augmented reality, or voice assistants, to devices and actuators within the physical environment, and to simulation environments used by developers.
  • An API provides programmatic access to a set of data and service artifacts associated with the SOM and the digital enclosure.
  • the spatial applications are portable across different physical environments, using the SOM to abstract the underlying digitization implementation for each physical environment.
  • Portable spatial applications may also be published to an app store, which can be discovered and readily deployed by operators to systematically elevate the value of their physical environments.
  • FIG. 1 describes the architecture of an embodiment for a novel system and application development environment to enable efficient development of spatial and context-aware software applications for physical environments.
  • An Enclosure Operating System (EOS) is deployed to a physical environment to systematically digitize and model the spatial and semantic context 120 of the people, places, things, activities and relationships within the environment over time, using the Spatial Object Model (SOM) as the representation of the context of the physical environment.
  • the context 120 is contained in a digital enclosure 110, which includes additional components to support the development (and possibly enrichment) of the context.
  • the SOM representation can be extended and annotated with domain-specific semantics and knowledge (SOM Extensions 130) through the SOM Extension Framework. Spatial applications can then build upon the SOM representation to create and deliver spatial and context-aware tasks in situ to the enclosures and the entities therein.
  • System 1 is the Enclosure Operating System (EOS) which, through calibrated sensors and models, creates a representation of the context of the people, places, things, activities and relationships within a corresponding physical space over time.
  • the EOS is where descriptors bound to the state of various entities or relations are mapped to an ontology, which defines the domain-specific real-world artifacts, actions and relations in a common way (e.g., through the Semantic Web or the Resource Description Framework) , and which can then be used to create richer, more semantically meaningful analysis, services or optimizations.
  • the context 120 captures the context of the corresponding physical space: the physical objects (people, places, things) in the physical space; attributes of the physical objects; actions, activities and behaviors of the physical objects and events occurring in the physical space.
  • This context 120 forms the core of a container, referred to as a digital enclosure 110.
  • the digital enclosure is a basic building block upon which context-aware software applications may be built.
  • a digital enclosure 110 may include additional components: references to resources accessible by/to the digital enclosure, connectors to transfer data between digital enclosures, relations that describe the relations between the physical spaces corresponding to different digital enclosures, services/applications that are internal to the digital enclosure, and roles or permissions that define access to the digital enclosure and its components.
  • Examples of resources include the following: sensors that capture data about the physical space; controls that effect changes in the physical space; digitizers or digitization modules that form a stack from sensors to the digital enclosure; other services, analysis or other components that are external to the digital enclosure; and data sources that are exterior to the physical space (i.e., not sensors) .
  • the term "artifacts" is used to refer to any components of a digital enclosure, including the context and its components.
  • a digital enclosure is a container that holds a set of artifacts. It is programmable in the sense that artifacts within the enclosure may be programmatically added, deleted, or otherwise modified; its content and configuration can be programmatically changed. Digital enclosures may also be extensible, meaning that their artifacts can be logically extended to new types and artifacts. This allows building up a repository of building-block artifacts, which can be used to describe the rich real world of people, places and things. If these extensions may be done programmatically, then the digital enclosures are programmatically extensible.
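The programmable and extensible properties above can be sketched with a few operations; every name in this snippet is an illustrative assumption.

```python
class Enclosure:
    """Sketch: artifacts in an enclosure can be programmatically added,
    removed, and the set of artifact types extended at runtime."""
    def __init__(self) -> None:
        self.artifacts: dict[str, object] = {}
        self.artifact_types: set[str] = {"entity", "connector", "resource"}

    def add_artifact(self, name: str, artifact: object) -> None:
        self.artifacts[name] = artifact

    def remove_artifact(self, name: str) -> None:
        del self.artifacts[name]

    def register_type(self, type_name: str) -> None:
        # Programmatic extensibility: new artifact types without redeploying.
        self.artifact_types.add(type_name)

enc = Enclosure()
enc.add_artifact("door-1", {"type": "entity"})
enc.register_type("digitizer")   # extend the type repository
enc.remove_artifact("door-1")    # modify content programmatically
```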
  • Connectors allow digital enclosures to share data.
  • Examples of connectors include projections and tunneling.
  • One enclosure (or some portion within) may be projected into another enclosure, such that all the context/artifacts of the projected enclosure are accessible to the target enclosure. This supports the creation of layered enclosures, where each enclosure builds upon another enclosure, extending it with some new information, artifacts and behaviors. Projections can be unidirectional or bidirectional.
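A unidirectional projection can be sketched as exposing one enclosure's artifacts inside another under a namespaced, read-only view; the function name and keys below are hypothetical.

```python
from types import MappingProxyType

def project(source: dict, target: dict, prefix: str) -> None:
    """Unidirectional projection: make the source enclosure's artifacts
    visible inside the target enclosure under a namespaced key. The
    read-only proxy keeps the target from mutating the source (sketch)."""
    target[prefix] = MappingProxyType(source)

lobby = {"kiosk-1": {"type": "ithing"}}
building = {}
project(lobby, building, "lobby")

# The target (layered) enclosure can now read the projected context.
kiosk_type = building["lobby"]["kiosk-1"]["type"]
```

A bidirectional projection would simply apply the same operation in both directions.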
  • the Enclosure Operating System is a system for maintaining, managing and using digital enclosures, as further described in FIGS. 2 and 3.
  • the SOM 120 is a particular way of organizing the objects, attributes, events and behaviors that enables access and manipulation, as well as analytics and extensions.
  • SOM 120 may be thought of as an ontologically organized knowledge graph that describes the spatial, temporal and semantic state and dynamics within the digital enclosure and corresponding physical space.
  • SOM is a digital representation of a slice of space x time x semantics, described in terms of entities, events, components and relations.
  • the SOM 120 is analogous to the Document Object Model (DOM) , but instead of representing digitized Web page content, the SOM 120 represents the who, what, when and where of the entities and dynamics inside a physical environment (the context of the physical environment) .
  • the DOM is an application programming interface for HTML and XML documents.
  • the DOM defines the logical structure of the document and the way a document is accessed and manipulated by users and applications, allowing developers to create, navigate the structure, and add, delete, modify elements of the content within a document.
  • the DOM is hosted by a Web browser, which manages the runtime state of the DOM and provides interfaces to interact with end users and applications. As the DOM is changed, events are emitted, which can be processed by a set of registered event handlers, often written in JavaScript or other programming language bindings, to respond to and inject changes back to the document through the DOM.
  • the SOM plays an analogous role but with respect to the context of the physical space, rather than with respect to a Web document.
  • the SOM is an application programming interface (API) that provides programmatic access to the digital enclosures and their contexts.
  • the SOM defines the logical structure of the context and the way the context is accessed and manipulated by users and applications, allowing developers to create, navigate the structure, and add, delete, modify elements of the context.
  • the SOM is hosted by an operator associated with the physical environment, which manages the runtime state of the SOM and provides interfaces to interact with end users and applications.
  • events are emitted, which can be processed by a set of registered event handlers, to respond to and inject changes back to the context through the SOM.
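By analogy with DOM event listeners, SOM event handling might be sketched as follows; the `on`/`emit` interface and event names are illustrative assumptions.

```python
from collections import defaultdict

class SOM:
    """Sketch of DOM-style event emission for the Spatial Object Model:
    handlers register for event types and may inject changes back into
    the context through the SOM (API names are hypothetical)."""
    def __init__(self) -> None:
        self.context: dict[str, dict] = {}
        self._handlers = defaultdict(list)

    def on(self, event_type: str, handler) -> None:
        self._handlers[event_type].append(handler)

    def emit(self, event_type: str, payload: dict) -> None:
        for handler in self._handlers[event_type]:
            handler(self, payload)

som = SOM()

def on_person_entered(som: SOM, payload: dict) -> None:
    # Respond to the event by writing back into the context via the SOM,
    # just as a DOM handler mutates the document through the DOM.
    som.context[payload["person_id"]] = {"type": "person", "present": True}

som.on("person_entered", on_person_entered)
som.emit("person_entered", {"person_id": "p-42"})
```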
  • System 3 is the Spatial Application Programming Layer, which provides the programming and runtime interfaces for spatial applications 140 to build upon SOM 120 to perform in-situ and context-aware tasks and services.
  • These software applications can deliver changes to the physical environment (through controls) , or deliver user experiences to users through in-situ devices (such as displays or monitors) , or to mobile devices that are used by occupants. “In-situ” real world experiences may be delivered and experienced in the context of the physical space by users inside the physical space.
  • FIGS. 2 and 3 show the architecture and components of the Enclosure Operating System (System 1 of FIG. 1) .
  • the EOS includes five layers organized in a tiered fashion: the infrastructure layer 210, which provides computing resources; the spatial structure layer 220, which models the time-invariant spatial topology of the physical space; the digitization layer 230, which connects data streams from sensors and actuators deployed within the physical space as required by the corresponding digital enclosures; and the intelligence layer 240, which systematically distills knowledge about various aspects of the physical space, the digital enclosures and their dynamics to produce a time-varying model of spatial, temporal and semantic context represented by the Spatial Object Model.
  • the SOM is then used by spatial applications through the application layer 250 to gain spatial and context-awareness of the physical environment by accessing the corresponding digital enclosures.
  • the Enclosure Operating System is a layered stack with five layers, which is deployed to physical environments to create a digitized model of the spatial and temporal dynamics, which developers can program against.
  • the Enclosure Operating System manages digital enclosures, their bindings to physical environments (through digitization modules/digitizers) , and applications running in and/or accessing the digital enclosures.
  • the EOS may act as a real-time engine that connects the flows of data and events from both the physical and digital world (applications) , to ensure digital enclosures are coherent and operating, with the right events and right services being called at the right time, while managing permissions and security and availability, etc.
  • the infrastructure layer 210 provides the digital computing resources required to power all the data and computations within the digital enclosure.
  • the spatial structure layer 220 models the time-invariant spatial structure and topology of the physical environment and its contents, represented as a 3D point cloud and/or 3D CAD model in some embodiments, to form the underlying anchor spatial representation against which all data and extracted information within the digital enclosure are indexed.
  • the digitization layer 230 is the edge between the physical space and the digital world. It may use spatially calibrated sensors deployed within the digital enclosure to acquire spatially anchored information about the physical environment. It may also deliver spatially anchored information and services to occupants and actuators in the physical environment.
  • the intelligence layer 240 processes the acquired data streams by fusing them across different sensors and AI models to create a time-varying representation of the spatial, temporal and semantic context in the form of Spatial Object Model.
  • the application layer 250 enables applications and domain frameworks to build upon and extend the SOM, along with the computing and information resources accessible by the digital enclosure, to create context-aware applications across multiple application endpoints that can sense, reason and interact with people and things within the physical environment.
  • the infrastructure layer 210 may include edge computing components located within the physical environment, along with cloud computing components for large-scale analytics and heavier computations.
  • the edge computing component supports the low-latency localized storage and processing requirements for high bandwidth sensor and AI modeling workloads, such as computer vision models or wireless localization algorithms.
  • the infrastructure layer is fully contained within the physical environment, capable of operating in standalone mode without the cloud. In this embedded mode, the infrastructure supports physical security measures that result in stronger privacy and security control.
  • the spatial structure layer 220 includes a two-step process.
  • the physical space is 3D scanned using normal (RGB) or depth (RGB-D) cameras to create an integrated and geometrically calibrated 3D point cloud of the enclosure, using variants of Simultaneous Localization and Mapping (SLAM) .
  • the 3D point cloud is then processed in a second phase, where AI models are applied to systematically segment, recognize and map the detected objects from the 3D point cloud into their corresponding CAD models from a CAD library. Any object with a detected physical form factor can be recognized and mapped, including but not restricted to furniture, windows, doors, fixtures, tables, machines, lighting and appliances.
  • the resulting 3D CAD model of the physical environment represents (1) the time-invariant 3D spatial structure and floorplan, and (2) the placed and oriented 3D CAD models of the detected objects.
  • This 3D spatial structure provides the anchor representation that binds everything (entity, event, data) within the digital enclosure to a particular point or region in the 3D space x time within the physical environment.
  • the spatial structure layer 220 may support the import of floorplan maps and building structural models, such as Building Information Model (BIM) or Industry Foundation Classes (IFC) , that are increasingly common for modern buildings. BIM models can be directly mapped to the 3D CAD model formats used by the system.
  • the digitization layer 230 manages the configuration and interaction with devices in the physical environment that are spatially anchored in the digital enclosure.
  • the devices can be sensors and/or actuators, and they may be stationary (such as lighting control or wall-mounted cameras) or mobile (such as smart tags, smart phones and wearables attached to occupants) .
  • Once the devices are registered and connected through wired or wireless gateways, they send data streams or receive commands to and from the system, intermediated through device drivers that provide uniform programming and configuration interfaces to enable interoperability across heterogeneous device types and vendors.
  • the data from the digitization layer 230 flows into a streaming and distributed data platform, which provides storage, processing, querying and organization capabilities for the data lake.
  • the data platform virtualizes the data across the distributed infrastructure layer, spanning on-premise and cloud-based computing resources.
  • a variety of indexing strategies are deployed to provide efficient spatial, temporal and graph-based query and storage capabilities to the system.
  • the digitization layer 230 includes a Distributed Task Engine (DTE) , which provides a dataflow-based computing abstraction, operating over high-dimensional data streams, with parallel execution across stateless and stateful tasks. Computation programmed in this abstraction can be efficiently executed over the distributed computing resources of the infrastructure layer, through the DTE. The ability to deploy generalized computations across hardware, models and software over a distributed computing fabric can greatly enhance the extensibility of the system.
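The stateless/stateful dataflow split described for the Distributed Task Engine can be illustrated in miniature. This sketch uses only the Python standard library rather than a real dataflow engine such as Ray, and all task names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def stateless_decode(frame: int) -> int:
    """Stateless task: e.g. per-frame feature extraction. Safe to run
    in parallel because it holds no state between calls."""
    return frame * 2

class RunningMean:
    """Stateful task: aggregates results across the whole stream."""
    def __init__(self) -> None:
        self.total, self.count = 0, 0

    def update(self, value: int) -> float:
        self.total += value
        self.count += 1
        return self.total / self.count

# Parallel stateless stage feeding a sequential stateful stage.
frames = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    decoded = list(pool.map(stateless_decode, frames))  # order preserved

mean = RunningMean()
running = [mean.update(v) for v in decoded]
```

In the DTE, the same split lets the stateless stages scale out across the distributed computing fabric while stateful stages keep consistent aggregates.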
  • the Distributed Task Engine can be implemented using a system similar to the Ray platform from UC Berkeley’s RISE laboratory.
  • the intelligence layer 240 includes modular digitization containers (digitizers or digitization modules) .
  • Each digitizer configures and processes the data flows from a set of deployed and configured devices, through a set of computations that may involve AI models or software modules, to distill knowledge about some aspect of the physical environment, with instructions on how to leverage and update the SOM with the distilled knowledge.
  • digitizers may model the physical environment (temperature, humidity, airflow) , or they may model people, state and movements (location, pose, gestures, expressions, mask wearing, etc. ) , or they may model acoustics and speech.
  • Digitizers can also use information relevant to the physical space, such as weather, nearby congestion, etc., where the information is provided by external sources.
  • Multiple digitizers can be deployed into the same digital enclosure, where each digitizer distills a different aspect of the physical environment, and together they compose to create a multi-dimensional and multi-faceted time-variant model of the physical environment and the dynamics within.
  • a digitizer is a configurable module with four parts: (1) the device manifest that describes the deployment and configuration of devices within the digital enclosure, (2) the distributed data-flow computations that connect from device drivers to generate distilled knowledge representations, (3) the interface specification that describes how to update the SOM with the distilled knowledge, and (4) an optional configuration application that provides an interactive interface for developers to configure and deploy the digitizer to a digital enclosure. Digitizers are packaged and deployed as software containers, through container systems such as Docker or Kubernetes.
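The four-part digitizer structure above might be represented along the following lines; the field names and the occupancy computation are illustrative assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Digitizer:
    """Sketch of the four-part digitizer described above."""
    device_manifest: dict                 # (1) devices and their configuration
    computation: Callable[[dict], dict]   # (2) dataflow from drivers to knowledge
    som_interface: str                    # (3) how distilled knowledge updates the SOM
    config_app: Optional[str] = None      # (4) optional interactive configurator

def occupancy_computation(readings: dict) -> dict:
    # Distill raw motion-sensor counts into an occupancy estimate.
    return {"occupancy": sum(readings.values())}

digitizer = Digitizer(
    device_manifest={"motion-sensor-1": {"zone": "lobby"}},
    computation=occupancy_computation,
    som_interface="merge 'occupancy' attribute into the zone entity",
)
distilled = digitizer.computation({"motion-sensor-1": 3})
```

In deployment, such a module would be packaged as a software container and dropped into a digital enclosure, per the description above.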
  • digitizers are organized into an extensible library to enable systematic digitization of physical environments in a modular, open and reusable way.
  • Digitizers can be created to model various aspects of the environment within the enclosure, including but not restricted to temperature, airflow, luminosity, acoustics, air quality and barometric pressure. Digitizers can also be created to model various aspects of the dynamics of people and things within the enclosure, including but not restricted to their position, orientation, velocity, body pose, gestures, clothing, attachments, expressions, actions and interactions with other people or environment.
  • the application layer provides interfaces for spatial applications to access the context (SOM) , along with the data and platform resources available in the digital enclosure.
  • spatial applications become context-aware, and can be programmed to be responsive to and interact with people and events within the digital enclosure and physical environment through the SOM.
  • the application layer enables spatial applications to be delivered to multiple application endpoints.
  • These application endpoints may include (1) end-user smartphones, through mobile-based native apps, AR, or voice assistants, (2) directly to the physical environment, through device-based actuators and robots, and (3) simulation environments, such as Unity3D, that are used by developers to visualize and simulate the application logic against simulated real-world scenarios.
  • the digital enclosures may be bound to anchor points in the simulation, rather than in the physical real world.
  • privacy and security are enforced through fine-grained authorization and permissions across users, data, resources and services. All data used or generated within a digital enclosure are cryptographically bound to the enclosure, with strong security and privacy guarantees. All access to the SOM and platform resources is tracked and managed by security policies that are enforced by the Enclosure Operating System.
  • FIG. 4 shows a Spatial Object Model (SOM) , which is a representation contained in a digital enclosure of the spatial, temporal and semantic context of a physical space.
  • the SOM includes entities that describe various artifacts within the digital enclosure, which may be either physical or virtual.
  • the SOM can be extended in a straightforward manner, where new types of entities can be defined to describe broad and diverse aspects of the physical environment.
  • SOM is to physical spaces what DOM is to Web documents. Whereas the DOM describes the structure of page elements in a Web document along with their changes expressed as events, the SOM describes the structure of physical elements in a physical space along with the dynamics and interactions, also expressed as events.
  • At the core of the SOM are entities, which represent physical or logical artifacts within the enclosure, including but not restricted to people, robots, things such as tables and furniture, ithings such as sensors and devices, or regions of space called zones.
  • Each entity is defined by a set of typed components, where a component is a collection of typed attributes that represent some aspect of the entity, such as its location or size or action. For physical objects, attributes may include its physical state (color, size, orientation, position, posture) .
  • attributes may include identity, gender, demographics, link to user profile, expression, clothing, role like nurse or doctor, and other information about a person, anonymous or identified.
  • attributes can also include services the device can perform, such as turning on, printing, etc.
  • the SOM may be described using Resource Description Framework (RDF) , which is a generalized W3C specification for describing knowledge and semantics that is broadly used to provide semantically rich and linked knowledge on the Web.
  • Because the SOM is based on RDF, it can leverage the rich semantic types and extensibility of RDF, as well as link to the rich knowledge representations that are described in RDF and available on the Web.
  • the ability to link to and build upon the existing knowledge representations through RDF expands the depth and potential value of SOM to connect the online world of knowledge to the offline world of people, places and things, and their dynamics.
  • SOM can be modelled using RDF, and hence is part of the broader “semantic web” , whereby other RDF-based ontologies can refer to SOM and its contents, and vice versa.
  • SOM components can point to other RDF objects, which means that the SOM can link to artifacts in the semantic web. Usually, this means the ability to describe the type of an object or action by describing it with a particular RDF object (URI).
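As a sketch of the RDF linkage described above, a SOM entity can be expressed as subject-predicate-object triples whose objects may be external Web resources. The URIs and vocabulary below are hypothetical, chosen only to show the shape of the linkage.

```javascript
// A SOM entity expressed as RDF-style triples (subject, predicate, object).
// Prefixes like "enc:" and "som:" are illustrative; a real deployment would
// define its own vocabulary and could link to existing Web ontologies.
const triples = [
  ["enc:person/alice", "rdf:type",     "som:Person"],
  ["enc:person/alice", "som:role",     "enc:role/nurse"],
  // Linking out to the semantic web: the object is an external RDF resource.
  ["enc:role/nurse",   "rdfs:seeAlso", "https://schema.org/occupation"],
];

// Query: all objects for a given subject and predicate.
function objectsOf(subject, predicate) {
  return triples
    .filter(([s, p]) => s === subject && p === predicate)
    .map(([, , o]) => o);
}
```

Because the object position can hold any URI, the same query mechanism traverses both enclosure-local facts and links into external knowledge graphs.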
  • The ECS (Entity, Component, System) paradigm is used to provide a flexible and extensible way to manage the complexity and evolution of the SOM with multi-faceted information overlays.
  • entities correspond to physical and virtual objects, which are spatially situated within the physical space.
  • Each object is modelled by an entity.
  • the object's set of attributes and characteristics are described by components, which are associated with each entity.
  • the components for an entity provide a multi-faceted view of the entity, from location to appearance to capabilities to state, etc.
  • the state of an entity is described through a set of typed components, where a component captures a particular facet of the entity, such as its location or color.
  • The SOM can be organized as a hierarchy of partitions. Each partition contains the subset of the SOM that falls into a particular region of the digital enclosure across a period of time; for example, the SOM can be partitioned linearly based on time windows, or hierarchically based on spatial structure.
  • a SOM partition can be independently stored, analyzed, replicated and processed. Multiple SOM partitions can be merged to recreate the spatial and temporal dynamics across any region of space x time within the enclosure.
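The linear (time-window) partitioning and merge described above can be sketched as follows. The window size and event shape are illustrative assumptions, not values from the specification.

```javascript
// Sketch: partition SOM events into fixed time windows, and merge partitions
// back to recreate the dynamics across a span of time.
const WINDOW_MS = 60_000; // illustrative one-minute windows

function partitionByTime(events) {
  const partitions = new Map(); // window start time -> events in that window
  for (const ev of events) {
    const key = Math.floor(ev.t / WINDOW_MS) * WINDOW_MS;
    if (!partitions.has(key)) partitions.set(key, []);
    partitions.get(key).push(ev);
  }
  return partitions;
}

// Merging partitions recreates the event stream across the covered region of
// space x time; here, ordered by timestamp.
function mergePartitions(partitions) {
  return [...partitions.values()].flat().sort((a, b) => a.t - b.t);
}
```

Each partition can then be stored, replicated or analyzed independently, with the merge step used only when a cross-partition view is needed.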
  • FIG. 5 shows examples of how SOM can be used to describe the spatial context within an enclosure.
  • An entity class defines an entity, such as “person”, “thing”, “ithing”, “entrepreneur” and “vc”. Inheritance is supported; for example, the “entrepreneur” class inherits from the “person” class. Root entity classes at the base of the inheritance hierarchy are called the core entity classes, which represent people, places, things and ithings (digitized things, such as devices).
  • An entity class is defined based on one or more components, for example, the “thing” class has the “space” component and the “render” component.
  • A component describes a particular aspect of an entity and contains a set of typed attributes. For example, the “space” component contains two attributes: a “loc” attribute that describes the geolocation of the corresponding entity, and an “orientation” attribute that describes its angular orientation relative to the compass.
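The class, component and inheritance structure from the FIG. 5 example can be sketched in a few lines. The class and component names follow the text; the registration API itself is a hypothetical illustration.

```javascript
// Sketch: entity classes defined from components, with single inheritance.
const classes = {};

function defineClass(name, { inherits = null, components = [] } = {}) {
  classes[name] = { inherits, components };
}

// Collect a class's components, including those inherited from ancestors.
function componentsOf(name) {
  const cls = classes[name];
  const inherited = cls.inherits ? componentsOf(cls.inherits) : [];
  return [...new Set([...inherited, ...cls.components])];
}

// Classes from the FIG. 5 example ("identity" and "company" are assumed names).
defineClass("person",       { components: ["space", "identity"] });
defineClass("entrepreneur", { inherits: "person", components: ["company"] });
defineClass("thing",        { components: ["space", "render"] });
```

Deduplicating via a `Set` lets a subclass redeclare an inherited component without listing it twice.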
  • FIG. 6 shows the architecture and components for embodiments of the SOM Extension Framework (SEF) .
  • the framework provides developers the ability to extend the SOM with new entities and components, in a way that enables the extensions to be fully synchronized to changes in the SOM.
  • These new entities and components enrich the knowledge of the spatial context, for example for different domains, as well as enable join with existing knowledge and services that are available for the digital enclosure and its contents.
  • any change to the SOM such as the creation of a new entity or component, or the change in state of an entity, component, or relationship, generates an event that is sent to a SOM event bus.
  • An event typically captures some state change within an entity, or across a collection of entities, within some span of time.
  • An event can also depend on other events, expressing causality; for example, if a person enters the room and then leaves after 5 minutes, this sequence can trigger an event.
  • An event can also be digitally sourced, for example, if the user “likes” some object or person, or when the weather changes to rain, then this can also be an event that can then be analyzed and used in the context of physically sourced events.
  • An event can also be a timer, which periodically emits an event, that can in turn trigger other events and services.
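A minimal sketch of the SOM event bus described above: any change to the SOM publishes an event, and subscribed handlers run in response, possibly publishing derived events of their own. The event names and payload shapes are illustrative assumptions.

```javascript
// Sketch of a SOM event bus with publish/subscribe and derived events.
const subscribers = new Map(); // event type -> handlers

function subscribe(type, handler) {
  if (!subscribers.has(type)) subscribers.set(type, []);
  subscribers.get(type).push(handler);
}

function publish(type, payload) {
  for (const h of subscribers.get(type) ?? []) h(payload);
}

// A causal handler: derive a higher-level event ("role.detected") from a
// lower-level, physically sourced one ("person.entered").
subscribe("person.entered", (p) => {
  if (p.wearsUniform) publish("role.detected", { id: p.id, role: "staff" });
});
```

Digitally sourced events (a “like”, a weather change) and timer ticks would be published onto the same bus, so handlers can combine them with physically sourced events uniformly.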
  • The SOM Extension Framework provides an interface for custom event handlers to be defined and registered to a digital enclosure. Each event handler subscribes to a particular SOM event and, when triggered, responds by executing its handler computation. These event handlers can be used to create an extension layer of new annotations, entities and components that extends the SOM in a fully synchronized way. For example, whenever a new person entity is detected in the digital enclosure, such as through camera vision or a location tag, an event handler can be triggered to determine the role of this person by analyzing the SOM to assess physical characteristics (such as whether a uniform is worn) or virtual characteristics (such as whether the identity is registered). Once the role is determined, the person entity in the SOM is annotated with a new component that describes the role. In another example, when a person entity is detected standing in front of a store for a minimum period of time, a new event handler can annotate the person as being potentially interested in the store.
  • the SOM Extension Framework supports custom event definitions, used to detect complex spatial and temporal conditions within an enclosure. For example, safe distancing between people to lower infection risks due to infectious diseases such as COVID-19 can be defined as a custom event that is triggered whenever two people are unsafely close to each other.
  • the custom event definitions may be input to an event trigger engine, which compiles the custom event definitions into efficient and optimized automata that can resolve ambiguous events and multiple parallel event firings.
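The safe-distancing example above amounts to a spatial predicate over pairs of person entities. The following sketch evaluates it directly; the 2-metre threshold and entity shape are illustrative, and a real event trigger engine would compile such definitions into optimized automata rather than scan naively.

```javascript
// Sketch: a custom event condition that fires whenever two people are
// unsafely close to each other (e.g., for COVID-19 safe distancing).
const SAFE_DISTANCE_M = 2.0; // illustrative threshold

function unsafePairs(people) {
  const pairs = [];
  for (let i = 0; i < people.length; i++) {
    for (let j = i + 1; j < people.length; j++) {
      const dx = people[i].x - people[j].x;
      const dy = people[i].y - people[j].y;
      if (Math.hypot(dx, dy) < SAFE_DISTANCE_M) {
        pairs.push([people[i].id, people[j].id]);
      }
    }
  }
  return pairs; // each pair would trigger one custom event firing
}
```
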
  • FIG. 7 shows the architecture and components of a Spatial Application Programming Layer. This provides an interface layer for spatial applications to program against the SOM to deliver in-situ, context-aware tasks and services for a physical environment.
  • the left side of the diagram shows the system that is deployed to digitize a particular physical environment, through the Enclosure Operating System and the SOM Extension Framework.
  • the middle of the diagram shows the result of the digitization of the physical environment, to create a SOM model that describes the spatial, temporal and semantic context of the digital enclosure and the dynamics within, along with any extensions to the SOM through application frameworks.
  • the right side of the diagram shows the programming and runtime interfaces that developers use to create and deploy spatial applications to the digital enclosure.
  • spatial applications connect to the digital enclosure through a sequence of four steps.
  • the first step is to register the spatial application to the digital enclosure, which must be provisioned and permissioned by the enclosure operator to define the scope of what can be accessed by the application.
  • the second step is for the spatial application to anchor itself to a particular entity or location within the digital enclosure. For example, an application may be running on the smartphone of an occupant, in which case the application will anchor itself to the person entity within the digital enclosure. Spatial anchoring is important to define the spatial context for a given application.
  • the third step is to subscribe to a set of events from the context (SOM) , providing handlers to respond to relevant changes and events within the context.
  • the fourth step is to access the SOM, resources, or services within the EOS to deliver context-aware services and experiences to occupants and the physical environment.
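The four-step connection sequence above can be sketched as a single function. The enclosure API surface (`register`, `anchor`, `subscribe`) is hypothetical pseudocode for illustration, not the actual EOS interface.

```javascript
// Sketch of the four-step sequence by which a spatial application connects
// to a digital enclosure. All method names are assumed, not specified.
function connectApp(enclosure, app) {
  // 1. Register: the enclosure operator provisions and permissions the
  //    application's access scope.
  const session = enclosure.register(app.id, app.requestedScope);

  // 2. Anchor: bind the application to an entity or location within the
  //    enclosure (e.g., the occupant's person entity for a smartphone app).
  session.anchor(app.anchorEntity);

  // 3. Subscribe: attach handlers to the relevant SOM events.
  for (const [event, handler] of Object.entries(app.handlers)) {
    session.subscribe(event, handler);
  }

  // 4. Access: the returned session is then used to query the SOM and invoke
  //    EOS resources and services.
  return session;
}
```
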
  • spatial application development is modelled in a similar way as Web development.
  • JavaScript scripts are loaded into a Web document to manipulate the DOM in response to DOM or system events.
  • For spatial applications, the SOM replaces the DOM, and the JavaScript scripts are programmed against the SOM and SOM events as opposed to the DOM and DOM events.
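The DOM-to-SOM analogy can be made concrete: where a Web script attaches handlers to DOM events, a spatial script attaches handlers to SOM events. The `makeSom` interface below is a hypothetical stand-in for the real SOM runtime.

```javascript
// Sketch: a SOM interface mirroring the DOM's addEventListener pattern.
function makeSom() {
  const handlers = {};
  return {
    addEventListener: (type, fn) => { handlers[type] = fn; },
    emit: (type, ev) => handlers[type]?.(ev), // fired by the enclosure runtime
  };
}

const greetings = [];
const som = makeSom();

// DOM:  document.addEventListener("click", onClick);
// SOM:
som.addEventListener("person.entered", (ev) => {
  greetings.push(`Welcome, ${ev.id}`);
});
```

In a deployed enclosure, `emit` would be driven by the digitization pipeline rather than called by the application itself.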
  • spatial applications may be portable across multiple digital enclosures, leveraging the SOM to abstract away the details of the underlying hardware, models and software that is used to digitize and model the spatial context for different digital enclosures. For example, an application responding to movement of a person can work regardless of whether the location is provided through UWB or through computer vision.
  • The ability to create portable spatial applications that work across many digital enclosures greatly expands the opportunity for developers to create value, which benefits users and space operators through the availability of many more spatial, context-aware applications.
  • Software applications may also share data (events, entities, relations) by extending the shared context with app-specific events and services, which can in turn be used by other applications to implement useful cross-app experiences.
  • spatial applications can be registered to an app store, similar to mobile app stores for iOS and Android.
  • The app store provides a marketplace for developers to reach more customers, and a way for users and space operators to discover more applications. Since a digital enclosure may have different levels of digitization, the resulting SOM may have different levels of quality and completeness. Because of this, a qualification step is performed to determine whether a spatial application can be deployed into a given digital enclosure, based on the required level of quality and completeness compared to the actual level.
  • digitization of a physical environment is delivered as a service to space operators, where a service provider creates and manages a set of digitization modules (digitizers) and assists the space operator to deploy these digitizers to their physical environments.
  • a system or apparatus to enable the efficient development of spatial and context-aware software applications that can perform tasks and deliver services in-situ to people and activities within a physical environment includes the following:
  • A digitization component (digitization modules) that creates a digital model of the spatial, temporal and semantic context for a physical environment, including the people, places, things, and their activities and relationships over time, contained in a digital enclosure.
  • a domain-specific component that enables efficient enrichment and augmentation of the SOM to represent new domain knowledge, information and services, in a fully synchronized way with the SOM.
  • multiple domain frameworks can be created to overlay SOM with domain-specific semantics that are appropriate for different domains, such as for the retail domain, manufacturing domain, office domain and hospital domain.
  • An application component that enables developers to create and deploy spatial applications that are context-aware and are delivered to multiple application targets, by programming against the SOM in a similar way as programming against the DOM in Web-based programming.
  • By using the SOM to represent the spatial context of physical spaces, spatial applications can focus on the tasks to perform, and not the effort to model the context.
  • the digitization component may further include any of the following:
  • An enclosure operating system is deployed to transform a digital enclosure into an intelligent computing apparatus that can sense, reason and interact with the corresponding physical environment and its occupants.
  • The enclosure operating system may include five layered components: an infrastructure layer, spatial structure layer, digitization layer, intelligence layer and application layer.
  • The spatial structure layer may be described as a 3D point cloud and/or a 3D CAD model, and serves as the anchor representation against which all the devices, entities, data and computations within the digital enclosure are bound and indexed.
  • the enclosure operating system deploys and manages a set of extensible and containerized digitization modules or digitizers, which can programmatically define the distributed computations that connect and process data streams from a set of sensors, through AI models and software modules, to distill knowledge about some aspects of the physical space and the dynamics within.
  • Digitizers can model aspects of the physical environment, including but not restricted to airflow, temperature, luminosity, acoustics, electromagnetics, air quality and barometric pressure. These digitizers compute the gradient of measurements within the spatial structure of the enclosure, based on sensor readings, while also factoring in the physical laws of environmental dynamics and occupancy within the physical space.
  • Digitizers can model aspects of the dynamics of people and things within the physical space, including but not restricted to their identity, position, orientation, velocity, body pose, gestures, clothing, attachments, expressions, actions and interactions with other people or environment.
  • Different digitizers can use different types of sensors and models, such as computer vision, thermal sensing or radiofrequency modeling, to distill specific knowledge about these dynamics. The separately distilled knowledge from different digitizers is then fused together, using the spatial structure to index and merge against, to create a consistent, integrated and multi-dimensional view of the context for a digital enclosure.
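The fusion step described above can be sketched by indexing each digitizer's observations against a shared spatial structure, here simplified to coarse grid cells. The grid size, observation shape and merge policy are illustrative assumptions; real fusion would also reconcile conflicting readings and confidences.

```javascript
// Sketch: fuse per-digitizer observations by spatial cell.
const CELL_M = 1.0; // illustrative grid resolution

function cellOf(pos) {
  return `${Math.floor(pos.x / CELL_M)},${Math.floor(pos.y / CELL_M)}`;
}

// Observations landing in the same cell are merged into one multi-dimensional
// view (e.g., identity from computer vision, pose from thermal sensing).
function fuse(observations) {
  const byCell = new Map();
  for (const obs of observations) {
    const key = cellOf(obs.pos);
    byCell.set(key, { ...(byCell.get(key) ?? {}), ...obs.facts });
  }
  return byCell;
}
```
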
  • the application programming interface component may further include any of the following:
  • The SOM is modelled using the Resource Description Framework (RDF) and organized according to the Entity-Component-System (ECS) programming paradigm to enable decentralized and independent extensibility of the SOM to new knowledge extensions and overlays.
  • A SOM entity is defined by a set of typed components, where each typed component defines a collection of typed attributes that can contain any value or a reference to an external resource (URI).
  • Every physical object within a digital enclosure is associated with a globally unique Uniform Resource Identifier (URI) and is described by a logical SOM entity.
  • Multiple SOM entity instances can refer to the same logical SOM entity, where each SOM entity instance describes a particular facet of the physical entity through a subset of the components.
  • a registry service is provided to enable efficient lookup of any physical object to its corresponding SOM entity.
  • This registry service accepts as input any reference to a physical object, through a variety of methods including but not restricted to QR code scanning, scanning by smartphone camera, selection of target object through AR, or natural language reference to the object based on context (e.g., the printer on the table) .
  • the registry and lookup service enable end users to efficiently identify and interact with any physical object within a physical space, as well as enable developers to programmatically interact with the artifacts within the digital enclosure.
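The registry service described above is, at its core, a mapping from any reference form (QR payload, AR selection, contextual phrase) to a SOM entity URI. The reference keys and URIs in this sketch are hypothetical.

```javascript
// Sketch: registry mapping references to physical objects onto SOM entities.
// Multiple reference forms can resolve to the same logical entity.
const registry = new Map([
  ["qr:PRN-0042",                       "som:thing/printer-42"],
  ["phrase:the printer on the table",   "som:thing/printer-42"],
]);

function resolve(reference) {
  return registry.get(reference) ?? null; // null when the object is unknown
}
```

A production registry would resolve phrases with a language model over the spatial context rather than exact string keys, but the contract (reference in, entity URI out) is the same.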
  • the SOM can be extended to represent domain-specific semantics and knowledge, by creating a set of new domain-specific SOM entities, components and attributes, then registering a set of event handlers to synchronize changes in the SOM to updates of the extended knowledge and services.
  • Domain frameworks can then be created to represent the semantics of the roles, activities, services and outcomes for different types of physical environments, including but not restricted to retail stores, hospitals, factories, events, nursing homes and schools.
  • the application component may further include any of the following:
  • Spatial applications can be delivered to multiple application target endpoints, including but not restricted to (1) end-user smartphones, through mobile-based native apps, AR, or voice assistants, (2) directly to the physical space, through enclosure-based actuators, devices and robots, and (3) simulation environments, such as Unity3D or custom simulators, that enable developers to visualize and simulate the application against simulated real-world scenarios of the physical space.
  • An app store for spatial applications is created, into which developers publish their spatial applications, and from which space operators can discover and deploy spatial applications into their physical environments that have been set up as described above.
  • Software developers use the SOM abstraction and enclosure operating system to develop portable context-aware applications that can work across a variety of physical environments.
  • a library of digitizers is created, to enable developers with expertise in specific hardware, AI models and software to create modular and reusable digitizers, which can be adopted and deployed into a digital enclosure through the support of the technologies described above.
  • An open and extensible digitization library can significantly increase the reusability, composability and interoperability across IoT devices, AI models and software modules to enable efficient digitization of physical environments and the dynamics within.
  • Alternate embodiments are implemented in computer hardware, firmware, software and/or combinations thereof. Implementations can be implemented in a computer program product tangibly embodied in a computer-readable storage device for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions by operating on input data and generating output. Embodiments can be implemented advantageously in one or more computer programs that are executable on a programmable computer system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device.
  • Each computer program can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language.
  • Suitable processors include, by way of example, both general and special purpose microprocessors.
  • a processor will receive instructions and data from a read-only memory and/or a random access memory.
  • a computer will include one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks.
  • Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits) , FPGAs and other forms of hardware.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Stored Programmes (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A system includes a plurality of software-defined digital enclosures (digital containers) that are bound to corresponding physical spaces. A digital enclosure includes an "enclosure context" that captures the spatial, temporal and semantic states of the corresponding physical space and the dynamics thereof. The binding to the physical space updates the enclosure context based on sensor data captured by sensors that monitor the physical space. The system also includes an application programming interface (API) that provides programmatic access to the digital enclosures to a plurality of software applications. The software applications access the digital enclosures via the API and utilize the enclosure contexts of the digital enclosures.

Description

    SPATIAL AND CONTEXT AWARE SOFTWARE APPLICATIONS USING DIGITAL ENCLOSURES BOUND TO PHYSICAL SPACES
  • CROSS-REFERENCE TO RELATED APPLICATION (S)
  • This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Serial No. 63/059,871, “Novel System and Application Development Environment to Enable Efficient Development of Spatial and Context Aware Software Applications for Physical Environments through Layered Spatial and Context Representations, ” filed July 31, 2020. The subject matter of the foregoing is incorporated herein by reference in its entirety.
  • BACKGROUND 1. Technical Field
  • This disclosure relates generally to systems and application development environments to enable, support and accelerate the development and execution of software applications that can perform tasks and services using the context of a physical environment. 
  • 2.  Background
  • The broad availability of mobile and Internet-enabled services, along with rapid advances in technologies such as artificial intelligence, the Internet-of-Things, augmented reality, wearables, and edge and 5G, have increased the demand for a new and emerging class of software applications that are aware of, and can respond to and interact with, the spatial and semantic context of the physical space in which they operate.
  • However, for application developers, the development of contextually-intelligent spatial applications is challenging because significant effort is required to accurately model the physical environment and the context within. Unlike mobile  applications where the context is well defined by the end user’s online profile and optionally the geolocation of the mobile device, spatial applications require a much richer and more expressive model of the spatial and semantic context in which they operate.
  • For spatial applications operating in a complex environment with rich dynamics, the amount of code and effort required to model the context of the physical environment may well exceed what is required to perform the actual tasks. The deeper and more complete the modeling of the spatial and semantic context, the more intelligent and valuable an application can become through its contextual awareness.
  • A common approach to developing context-aware applications is to create customized solutions that are tailored to a particular physical environment and application context. These custom solutions are designed to work for a specific set of sensors and hardware devices, configured and deployed in a particular way for a specific physical environment, and run a customized set of AI models and software modules over a particular topology of computing resources. One downside of this approach is that the resulting application is brittle and hard to evolve, including changes in the selection of hardware devices, AI models and application logic. Another downside is that since the modelled context is internal to each solution and restricted to a narrow slice of the physical environment, interoperability across multiple custom solutions operating within the same physical environment is difficult due to the lack of shared and orchestrated context. For physical environments undergoing systematic digital transformation, where change is frequent and orchestration is necessary, these downsides impose considerable additional cost and effort for the solution developers and the space operators. These frictions greatly limit the development and adoption of context-aware spatial applications, even though demand for these services is rising rapidly.
  • SUMMARY
  • In one aspect, a system includes a plurality of software-defined digital enclosures (digital containers) that are bound to corresponding physical spaces. A digital enclosure includes an enclosure context that captures the spatial, temporal and semantic states of the corresponding physical space and the dynamics thereof. The binding to the physical space updates the enclosure context based on sensor data captured by sensors that monitor the physical space. The system also includes an application programming interface (API) that provides programmatic access to the digital enclosures to a plurality of software applications. The software applications access the digital enclosures via the API and utilize the enclosure contexts captured in the digital enclosures.
  • Other aspects include components, devices, systems, improvements, methods, processes, applications, computer readable mediums and other technologies related to any of the above.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Some embodiments of the present invention are illustrated as an example and are not limited by the figures of the accompanying drawings, in which like references may indicate similar elements and in which:
  • FIG. 1 shows an overall system view, with components organized in a layered architecture.
  • FIGS. 2 and 3 show the architecture and components of the Enclosure Operating System of FIG. 1.
  • FIG. 4 shows a Spatial Object Model (SOM) .
  • FIG. 5 shows examples of the Spatial Object Model to illustrate the structure of entities, entity classes, components, attributes and relations in the model.
  • FIG. 6 shows the architecture and components of a SOM Extension Framework.
  • FIG. 7 shows the architecture and components of a Spatial Application Programming Layer.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term “and/or” includes any or all combinations of one or more of the associated listed items. As used herein, the singular forms “a, ” “an, ” and “the” are intended to include the plural forms as well as the singular forms, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising, ” when used in this specification, specify the presence of stated features, steps, operations, elements and/or components, but do not preclude the presence of or addition of one or more other features, steps, operations, elements, components and/or groups thereof.
  • Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one having ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
  • In describing various embodiments, it will be understood that a number of techniques and steps are disclosed. Each of these has individual benefit and each can also be used in conjunction with one or more, or in some cases all, of the other disclosed techniques. Accordingly, for the sake of clarity, this description will refrain from repeating  every possible combination of the individual steps or systems in an unnecessary fashion. Nevertheless, the specification and claims should be read with the understanding that such combinations are entirely within the scope of the invention and the claims.
  • New methods, apparatuses, systems and applications for the efficient development, deployment, use and maintenance of spatial and context-aware applications in a physical environment are discussed herein. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details.
  • With recent advances in technology, there is an increasing demand for spatially anchored and context-aware software applications that can sense, reason and interact with the people, places and things within a physical environment to perform a variety of in-situ tasks and services for the occupants, work processes and the activities within, including big data analytics, predictive modeling, user assistance and automation. Multiple industry segments would benefit from technologies that could digitize and transform their existing physical facilities, tasks and processes so that such spatial applications may be used to increase operating efficiency, improve business outcomes and strengthen customer experiences. Examples include manufacturing and factories, hospitals and clinics, retail stores and malls, events and trade shows, homes and apartments, offices and workplaces, buildings and cities, just to name a few.
  • However, for these applications to make use of higher-level contexts of the physical environment, the underlying technology typically must first capture the who, what, when and where within a physical environment over time and then model these to develop the higher-level contexts. This involves in-depth knowledge of the spatial structure and temporal dynamics of the people, places, things, activities and relationships within the  physical environment, and also the common semantics that are specific to the domain of the physical environment through which to make sense of the roles, intentions, tasks and outcomes based on the spatial and temporal context. For example, a person resting has different semantics and is interpreted differently in a factory setting versus a hospital setting. In order for a context-aware software application to operate at this level, the fact that a person in the physical environment is resting must be captured from the physical environment, but the semantics of that event for the specific domain of interest (factory versus hospital) must also be known.
  • Semantics typically describe what happens in a given physical context. For example, in a hospital setting, people inside take on different roles: some are doctors, some are nurses, patients, family members, visitors, etc. There is equipment that performs various functions pertinent to the hospital setting, such as MRI machines, blood pressure monitors, wearables, carts and nursing stations. There are also events, actions and activities that take place inside hospitals, such as registering a new patient, a doctor’s daily rounds, ICU trauma procedures, taking blood pressure, checking on a patient, or performing surgery. There are also relationships, such as a person being interested in a booth, a person being a relative of another, an object being owned by a person, a light being controlled by a certain switch, or a student asking a question to a teacher.
  • A set of semantics that define what is happening inside a given physical environment may be overlaid in a way that captures the semantics of the dynamics of the physical environment. These are examples of “domain semantics” -hospitals, factories, schools, etc.
• For each application (or suite of applications), there can also be semantics associated with the user actions and events that take place through the applications. For example, a user may "like" a product or company by clicking a thumbs-up in an application, or a visitor may hover around an exhibit for a while, indicating that the person is "interested". These app-specific events, and the interpretation of their intent and actions, are also examples of semantics.
  • As described in more detail below, semantics may be supported in the following ways. First, these semantics may be captured and represented in a spatial object model (as described below) , by extending and building upon the foundational abstractions of people, places, things and relations. Second, programmatic binding from low-level data and knowledge to these higher-level, semantic representations may be enabled through extensible software modules that plug into digital enclosures.
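• As a concrete illustration, the programmatic binding from a low-level observation to a domain-specific semantic interpretation might be sketched as follows; the event names and domain table are purely hypothetical and not part of the described system:

```python
# Hypothetical sketch: binding a low-level spatial event to domain semantics.
# The domain table and event names are illustrative assumptions.

DOMAIN_SEMANTICS = {
    "factory": {"person_resting": "possible_safety_incident"},
    "hospital": {"person_resting": "patient_at_rest"},
}

def interpret(event: str, domain: str) -> str:
    """Map a raw spatial event to its domain-specific meaning."""
    return DOMAIN_SEMANTICS.get(domain, {}).get(event, "unclassified")
```

The same captured event ("a person is resting") thus yields different semantics depending on the domain of the physical environment.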
• Physical environments with high-value human presence and activities, such as schools, factories, hospitals, nursing homes and office buildings, offer a rich opportunity for application developers to create context-aware spatial applications that can elevate the value of the physical space for the human occupants and activities within. However, developing these applications is challenging because of the significant effort required to accurately model the spatial, temporal and semantic context in which they operate, which requires expertise across hardware, AI models and software, as well as in the specific domain of interest. Furthermore, applications often operate in silos, each with a narrow and specific model of the physical environment that is neither shared nor orchestrated with other applications operating in the same physical environment, resulting in fragmented experiences and high integration complexity.
  • A more systematic approach to developing context-aware applications for physical environments, by refactoring the modeling of context from individual applications to form a shared model of the spatial, temporal and semantic context for a physical environment, can dramatically accelerate the pace of innovation and development for context-aware spatial applications.
  • In some embodiments, the context for a physical space is represented by a digitized model of the spatial, temporal and semantic state of the physical space and its dynamics, inclusive of the physical objects (e.g., people, places, things) , attributes, actions and events within. This context may form the core of a container, referred to as a digital enclosure, which is then a basic building block upon which context-aware software applications may be built. Because the context is contained in a digital enclosure, it may also be referred to as an enclosure context.
• The digital enclosures are software-defined. As a result, real-world behavior or situations can be semantically defined and programmatically modelled and driven through software, which drives computation and potentially also hardware (controls, IoT devices, actuators).
• The digital enclosures are also bound to the corresponding physical spaces, meaning that the enclosure contexts are updated based on sensor data captured by sensors that monitor the physical spaces. The updates may be real-time, so that the enclosure contexts provide a current description of the corresponding physical space. The binding process may involve creating a digital model of the physical space (e.g., 3D scanning, CAD mapping), deploying sensors and IoT devices in the physical space, and connecting their digital inputs/outputs to the digital enclosure. Once bound, a digital enclosure may represent a real-time, connected digital replica of the real world within that space, against which developers can programmatically interact.
  • In some embodiments, the contexts within a digital enclosure may be represented using a Spatial Object Model (SOM) . The SOM may be created through a digitization platform (e.g., digitizers or digitization modules described below) using spatially calibrated hardware devices, AI models and software modules deployed into the physical environment and/or accessible by the digital enclosure. The SOM may be extended with  domain-specific knowledge and semantics that model the roles, tasks, activities and outcomes that are appropriate for the physical environment.
  • Spatial applications then build on the SOM to deliver context-aware services and experiences to multiple application target endpoints, from mobile-based end-user experiences over apps, augmented reality, or voice assistants, to devices and actuators within the physical environment, and to simulation environments used by developers. An API provides programmatic access to a set of data and service artifacts associated with the SOM and the digital enclosure. By leveraging the SOM to provide the context of the physical space, developers greatly reduce their effort and cost by focusing on the application tasks to perform, and not the modelling of context for the physical environment. In addition, multiple applications may access the same shared contexts and digital enclosures, to enable collaborative applications that can orchestrate and coordinate with each other through the shared contexts.
  • In some embodiments, the spatial applications are portable across different physical environments, using the SOM to abstract the underlying digitization implementation for each physical environment. Portable spatial applications may also be published to an app store, which can be discovered and readily deployed by operators to systematically elevate the value of their physical environments.
  • Certain embodiments will now be described by referencing the figures which illustrate various embodiments. FIG. 1 describes the architecture of an embodiment for a novel system and application development environment to enable efficient development of spatial and context-aware software applications for physical environments. An Enclosure Operating System (EOS) is deployed to a physical environment to systematically digitize and model the spatial and semantic context 120 of the people, places, things, activities and relationships within the environment over time, using the Spatial Object Model (SOM) as the  representation of the context of the physical environment. The context 120 is contained in a digital enclosure 110, which includes additional components to support the development (and possibly enrichment) of the context. The SOM representation can be extended and annotated with domain-specific semantics and knowledge (SOM Extensions 130) through the SOM Extension Framework. Spatial applications can then build upon the SOM representation to create and deliver spatial and context-aware tasks in situ to the enclosures and the entities therein.
• In more detail, the architecture includes three systems organized in a layered way, with explicit interfaces and abstractions separating the layers. System 1 is the Enclosure Operating System (EOS) which, through calibrated sensors and models, creates a representation of the context of the people, places, things, activities and relationships within a corresponding physical space over time. In this example, the contexts are represented using a Spatial Object Model 120 contained in a digital enclosure 110. The EOS is where descriptors are bound to the state of various entities or relations to some ontology, which defines the domain-specific real-world artifacts, actions and relations in a common way (e.g., through the Semantic Web or the Resource Description Framework), which can then be used to create richer, more semantically meaningful analysis, services or optimizations.
  • The context 120 captures the context of the corresponding physical space: the physical objects (people, places, things) in the physical space; attributes of the physical objects; actions, activities and behaviors of the physical objects and events occurring in the physical space. This context 120 forms the core of a container, referred to as a digital enclosure 110. The digital enclosure is a basic building block upon which context-aware software applications may be built.
• The SOM describes the context of the physical environment, whereas the digital enclosure includes the context along with everything that connects it to things and users external to the digital enclosure. In addition to the context 120, a digital enclosure 110 may include additional components: references to resources accessible by/to the digital enclosure, connectors to transfer data between digital enclosures, relations that describe the relations between the physical spaces corresponding to different digital enclosures, services/applications that are internal to the digital enclosure, and roles or permissions that define access to the digital enclosure and its components. Examples of resources include the following: sensors that capture data about the physical space; controls that effect changes in the physical space; digitizers or digitization modules that form a stack from sensors to the digital enclosure; other services, analysis or other components that are external to the digital enclosure; and data sources that are exterior to the physical space (i.e., not sensors). The term "artifacts" is used to refer to any components of a digital enclosure, including the context and its components.
• A digital enclosure is a container that contains a set of artifacts. Digital enclosures are programmable in the sense that artifacts within an enclosure may be programmatically added, deleted or otherwise modified; the enclosure's content and configuration can be programmatically changed. Digital enclosures may also be extensible, meaning that their artifacts can be logically extended to new types and artifacts. This allows building up a repository of building-block artifacts, which can be used to describe the rich real world of people, places and things. If these extensions may be done programmatically, then the digital enclosures are programmatically extensible.
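• The programmable and extensible nature of an enclosure container can be sketched as follows; the class and method names are illustrative assumptions, not the actual API:

```python
# Illustrative sketch of a programmable, extensible digital enclosure
# container; all names here are assumptions for illustration only.

class DigitalEnclosure:
    def __init__(self, name):
        self.name = name
        self.artifacts = {}          # artifact id -> artifact object
        self.artifact_types = set()  # extensible set of artifact types

    def add_artifact(self, artifact_id, artifact, artifact_type="entity"):
        # Adding an artifact of a new type extends the enclosure on the fly.
        self.artifact_types.add(artifact_type)
        self.artifacts[artifact_id] = artifact

    def remove_artifact(self, artifact_id):
        self.artifacts.pop(artifact_id, None)
```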
• Connectors allow digital enclosures to share data. Examples of connectors include projections and tunneling. One enclosure (or some portion within it) may be projected into another enclosure, such that all the context/artifacts of the projected enclosure are accessible to the target enclosure. This supports the creation of layered enclosures, where each enclosure builds upon another enclosure, extending it with some new information, artifacts and behaviors. Projections can be unidirectional or bidirectional.
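• A minimal sketch of a projection connector, modelling enclosures as plain dictionaries of artifacts (a simplification for illustration; real connectors would be richer):

```python
# Sketch of a unidirectional projection between two enclosures, each
# modelled here as a plain dict of artifacts. A bidirectional projection
# is simply two opposite unidirectional projections.

def project(source: dict, target: dict, keys=None) -> dict:
    """Expose the artifacts of `source` (or a chosen subset) inside `target`."""
    selected = {k: source[k] for k in (keys if keys is not None else source)}
    target.update(selected)
    return target
```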
  • The Enclosure Operating System is a system for maintaining, managing and using digital enclosures, as further described in FIGS. 2 and 3.
• The SOM 120 is a particular way of organizing the objects, attributes, events and behaviors that enables access and manipulation, as well as analytics and extensions. SOM 120 may be thought of as an ontologically organized knowledge graph that describes the spatial, temporal and semantic state and dynamics within the digital enclosure and corresponding physical space. In some embodiments, the SOM is a digital representation of a slice of space x time x semantics, described in terms of entities, events, components and relations.
  • The SOM 120 is analogous to the Document Object Model (DOM) , but instead of representing digitized Web page content, the SOM 120 represents the who, what, when and where of the entities and dynamics inside a physical environment (the context of the physical environment) . In Web-based programming, the DOM is an application programming interface for HTML and XML documents. The DOM defines the logical structure of the document and the way a document is accessed and manipulated by users and applications, allowing developers to create, navigate the structure, and add, delete, modify elements of the content within a document. The DOM is hosted by a Web browser, which manages the runtime state of the DOM and provides interfaces to interact with end users and applications. As the DOM is changed, events are emitted, which can be processed by a set of registered event handlers, often written in JavaScript or other programming language bindings, to respond to and inject changes back to the document through the DOM.
  • The SOM plays an analogous role but with respect to the context of the physical space, rather than with respect to a Web document. For context-aware software applications,  the SOM is an application programming interface (API) that provides programmatic access to the digital enclosures and their contexts. The SOM defines the logical structure of the context and the way the context is accessed and manipulated by users and applications, allowing developers to create, navigate the structure, and add, delete, modify elements of the context. The SOM is hosted by an operator associated with the physical environment, which manages the runtime state of the SOM and provides interfaces to interact with end users and applications. As the context defined by the SOM is changed, events are emitted, which can be processed by a set of registered event handlers, to respond to and inject changes back to the context through the SOM.
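• The DOM-style event model described above might be sketched as follows, where changes to the context emit events that registered handlers can respond to; all names are hypothetical:

```python
# Minimal sketch of DOM-style event handling over a SOM context; the
# event names, handler registry and context layout are hypothetical.

class SOM:
    def __init__(self):
        self.context = {}
        self.handlers = {}  # event type -> list of callbacks

    def on(self, event_type, handler):
        """Register an event handler, analogous to DOM event listeners."""
        self.handlers.setdefault(event_type, []).append(handler)

    def set(self, key, value):
        """Change the context and emit a change event to subscribers."""
        self.context[key] = value
        for handler in self.handlers.get("change", []):
            handler(key, value)
```

A handler registered for "change" events would then fire whenever the context is updated, and could inject changes back through the same interface.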
  • System 2 is the SOM Extension Framework (SEF) 130, which supports extensions of the SOM 120 with domain-specific semantic overlays and knowledge to further accelerate the development of interoperable domain-specific spatial applications.
  • System 3 is the Spatial Application Programming Layer, which provides the programming and runtime interfaces for spatial applications 140 to build upon SOM 120 to perform in-situ and context-aware tasks and services. These software applications can deliver changes to the physical environment (through controls) , or deliver user experiences to users through in-situ devices (such as displays or monitors) , or to mobile devices that are used by occupants. “In-situ” real world experiences may be delivered and experienced in the context of the physical space by users inside the physical space.
• FIGS. 2 and 3 show the architecture and components of the Enclosure Operating System (System 1 of FIG. 1). The EOS includes five layers organized in a tiered fashion, starting from the infrastructure layer 210, which provides computing resources; the spatial structure layer 220, which models the time-invariant spatial topology of the physical space; the digitization layer 230, which connects data streams from sensors and actuators deployed within the physical space as required by the corresponding digital enclosures; and the intelligence layer 240, which systematically distills knowledge about various aspects of the physical space and digital enclosures and their dynamics to produce a time-varying model of spatial, temporal and semantic context represented by the Spatial Object Model. The SOM is then used by spatial applications through the application layer 250 to gain spatial and context awareness of the physical environment by accessing the corresponding digital enclosures.
  • In more detail, the Enclosure Operating System is a layered stack with five layers, which is deployed to physical environments to create a digitized model of the spatial and temporal dynamics, which developers can program against. The Enclosure Operating System manages digital enclosures, their bindings to physical environments (through digitization modules/digitizers) , and applications running in and/or accessing the digital enclosures. The EOS may act as a real-time engine that connects the flows of data and events from both the physical and digital world (applications) , to ensure digital enclosures are coherent and operating, with the right events and right services being called at the right time, while managing permissions and security and availability, etc.
• The infrastructure layer 210 provides the digital computing resources required to power all the data and computations within the digital enclosure. The spatial structure layer 220 models the time-invariant spatial structure and topology of the physical environment and its contents, represented as a 3D point cloud and/or 3D CAD model in some embodiments, to form the underlying anchor spatial representation against which all data and extracted information within the digital enclosure are indexed. The digitization layer 230 is the edge between the physical space and the digital world. It may use spatially calibrated sensors deployed within the digital enclosure to acquire spatially anchored information about the physical environment. It may also deliver spatially anchored information and services to occupants and actuators in the physical environment. The intelligence layer 240 processes the acquired data streams by fusing them across different sensors and AI models to create a time-varying representation of the spatial, temporal and semantic context in the form of the Spatial Object Model. The application layer 250 enables applications and domain frameworks to build upon and extend the SOM, along with the computing and information resources accessible by the digital enclosure, to create context-aware applications across multiple application endpoints that can sense, reason and interact with people and things within the physical environment.
  • As shown in FIG. 3, in some embodiments of the EOS, the infrastructure layer 210 may include edge computing components located within the physical environment, along with cloud computing components for large-scale analytics and heavier computations. The edge computing component supports the low-latency localized storage and processing requirements for high bandwidth sensor and AI modeling workloads, such as computer vision models or wireless localization algorithms. In some embodiments, the infrastructure layer is fully contained within the physical environment, capable of operating in standalone mode without the cloud. In this embedded mode, the infrastructure supports physical security measures that result in stronger privacy and security control.
• In some embodiments of the EOS, the spatial structure layer 220 includes a two-phase process. In the first phase, the physical space is 3D scanned using normal (RGB) or depth (RGB-D) cameras to create an integrated and geometrically calibrated 3D point cloud of the enclosure, using variants of Simultaneous Localization and Mapping (SLAM). The 3D point cloud is then processed in a second phase, where AI models are applied to systematically segment, recognize and map the detected objects from the 3D point cloud into their corresponding CAD models from a CAD library. Any object with detected physical form factors can be recognized and mapped, including but not restricted to furniture, windows, doors, fixtures, tables, machines, lighting and appliances. The resulting 3D CAD model of the physical environment represents (1) the time-invariant 3D spatial structure and floorplan, and (2) the placed and oriented 3D CAD models of the detected objects. This 3D spatial structure provides the anchor representation that binds everything (entity, event, data) within the digital enclosure to a particular point or region in the 3D space x time within the physical environment.
  • The spatial structure layer 220 may support the import of floorplan maps and building structural models, such as Building Information Model (BIM) or Industry Foundation Classes (IFC) , that are increasingly common for modern buildings. BIM models can be directly mapped to the 3D CAD model formats used by the system.
  • In some embodiments of the EOS, the digitization layer 230 manages the configuration and interaction with devices in the physical environment that are spatially anchored in the digital enclosure. The devices can be sensors and/or actuators, and they may be stationary (such as lighting control or wall-mounted cameras) or mobile (such as smart tags, smart phones and wearables attached to occupants) . Once the devices are registered and connected through wired or wireless gateways, they send data streams or receive commands to and from the system, intermediated through device drivers that provide uniform programming and configuration interfaces to enable interoperability across heterogeneous device types and vendors.
  • In some embodiments of the EOS, the data from the digitization layer 230 flows into a streaming and distributed data platform, which provides storage, processing, querying and organization capabilities for the data lake. The data platform virtualizes the data across the distributed infrastructure layer, across on-premise and cloud-based computing resources. In some embodiments of the data platform, a variety of indexing strategies are deployed to provide efficient spatial, temporal and graph-based query and storage capabilities to the system.
  • In some embodiments of the EOS, the digitization layer 230 includes a Distributed Task Engine (DTE) , which provides a dataflow-based computing abstraction, operating over high-dimensional data streams, with parallel execution across stateless and stateful tasks. Computation programmed in this abstraction can be efficiently executed over the distributed computing resources of the infrastructure layer, through the DTE. The ability to deploy generalized computations across hardware, models and software over a distributed computing fabric can greatly enhance the extensibility of the system. In some embodiments, the Distributed Task Engine can be implemented using a system similar to the Ray platform from UC Berkeley’s RISE laboratory.
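• The dataflow abstraction can be illustrated with Python's standard concurrent.futures as a simplified stand-in for a distributed engine such as Ray; the task itself is a placeholder:

```python
# Illustrative stand-in for the Distributed Task Engine, using Python's
# standard concurrent.futures in place of a real distributed computing
# fabric such as Ray; the task and stream contents are hypothetical.

from concurrent.futures import ThreadPoolExecutor

def stateless_task(frame):
    """A stateless transform applied to one element of a data stream."""
    return frame * 2

def run_dataflow(stream, workers=4):
    """Execute the stateless task in parallel over the stream."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(stateless_task, stream))
```

In a real deployment, the pool would be replaced by remote task scheduling across the distributed resources of the infrastructure layer.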
• In some embodiments of the EOS, the intelligence layer 240 includes modular digitization containers (digitizers or digitization modules). Each digitizer configures and processes the data flows from a set of deployed and configured devices, through a set of computations that may involve AI models or software modules, to distill knowledge about some aspect of the physical environment, with instructions on how to leverage and update the SOM with the distilled knowledge. For example, digitizers may model the physical environment (temperature, humidity, airflow), or they may model people, their state and movements (location, pose, gestures, expressions, mask wearing, etc.), or they may model acoustics and speech. Digitizers can also use information relevant to the physical space, such as weather or nearby congestion, where the information is provided by external sources. Multiple digitizers can be deployed into the same digital enclosure, where each digitizer distills a different aspect of the physical environment, and together they compose to create a multi-dimensional and multi-faceted time-variant model of the physical environment and the dynamics within.
  • In some embodiments of the EOS, a digitizer is a configurable module with four parts: (1) the device manifest that describes the deployment and configuration of devices  within the digital enclosure, (2) the distributed data-flow computations that connect from device drivers to generate distilled knowledge representations, (3) the interface specification that describes how to update the SOM with the distilled knowledge, and (4) an optional configuration application that provides an interactive interface for developers to configure and deploy the digitizer to a digital enclosure. Digitizers are packaged and deployed as software containers, through container systems such as Docker or Kubernetes.
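• The four-part digitizer structure might be sketched as a simple data class; the field names are assumptions for illustration:

```python
# Sketch of the four-part digitizer structure: (1) device manifest,
# (2) dataflow computation, (3) SOM update specification, (4) optional
# configuration application. Field names are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Digitizer:
    device_manifest: dict                 # (1) deployed devices and config
    dataflow: Callable[[dict], dict]      # (2) data-flow computation
    som_update_spec: dict                 # (3) how to update the SOM
    config_app: Optional[str] = None      # (4) optional configuration UI

    def digest(self, raw: dict) -> dict:
        """Run the dataflow to distill knowledge from raw device data."""
        return self.dataflow(raw)
```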
  • In some embodiments of the EOS, digitizers are organized into an extensible library to enable systematic digitization of physical environments in a modular, open and reusable way. Digitizers can be created to model various aspects of the environment within the enclosure, including but not restricted to temperature, airflow, luminosity, acoustics, air quality and barometric pressure. Digitizers can also be created to model various aspects of the dynamics of people and things within the enclosure, including but not restricted to their position, orientation, velocity, body pose, gestures, clothing, attachments, expressions, actions and interactions with other people or environment.
  • In some embodiments, the application layer provides interfaces for spatial applications to access the context (SOM) , along with the data and platform resources available in the digital enclosure. Through these interfaces, spatial applications become context-aware, and can be programmed to be responsive to and interact with people and events within the digital enclosure and physical environment through the SOM.
  • In some embodiments, the application layer enables spatial applications to be delivered to multiple application endpoints. These application endpoints may include (1) end-user smartphones, through mobile-based native apps, AR, or voice assistants, (2) directly to the physical environment, through device-based actuators and robots, and (3) simulation environments, such as Unity3D, that are used by developers to visualize and simulate the application logic against simulated real-world scenarios. In the case of simulation, the  digital enclosures may be bound to anchor points in the simulation, rather than in the physical real world.
• In some embodiments, privacy and security are enforced through fine-grained authorization and permissions across users, data, resources and services. All data used or generated within a digital enclosure are cryptographically bound to the enclosure, with strong security and privacy guarantees. All access to the SOM and platform resources is tracked and managed by security policies that are enforced by the Enclosure Operating System.
• FIG. 4 shows a Spatial Object Model (SOM), which is a representation, contained in a digital enclosure, of the spatial, temporal and semantic context of a physical space. The SOM includes entities that describe various artifacts within the digital enclosure, which may be either physical or virtual. The SOM can be extended in a straightforward manner, where new types of entities can be defined to describe broad and diverse aspects of the physical environment.
• The SOM is to physical spaces what the DOM is to Web documents. Whereas the DOM describes the structure of page elements in a Web document along with their changes expressed as events, the SOM describes the structure of physical elements in a physical space along with the dynamics and interactions, also expressed as events. At the core of the SOM are entities, which represent physical or logical artifacts within the enclosure, including but not restricted to people, robots, things such as tables and furniture, ithings such as sensors and devices, or regions of space called zones. Each entity is defined by a set of typed components, where a component is a collection of typed attributes that represent some aspect of the entity, such as its location, size or action. For physical objects, attributes may include the physical state (color, size, orientation, position, posture). For people, attributes may include identity, gender, demographics, a link to a user profile, expression, clothing, a role such as nurse or doctor, and other information about a person, anonymous or identified. For digitally-enabled devices (printers, light switches, databases, robots), the attributes can also include services the device can perform, such as turning on, printing, etc. These are just some examples. New types of entities and components and their attributes can be defined through type definitions, enabling a SOM to be extended to describe virtually any aspect of the environment and the dynamics within.
  • The SOM may be described using Resource Description Framework (RDF) , which is a generalized W3C specification for describing knowledge and semantics that is broadly used to provide semantically rich and linked knowledge on the Web. Because SOM is based on RDF, it can leverage the rich semantic types and extensibility of RDF, as well as link to the rich knowledge representations that are described in RDF and available on the Web. The ability to link to and build upon the existing knowledge representations through RDF expands the depth and potential value of SOM to connect the online world of knowledge to the offline world of people, places and things, and their dynamics.
• One interesting aspect of RDF as it relates to the SOM is that the SOM can be modelled using RDF, and hence is part of the broader "semantic web", whereby other RDF-based ontologies can refer to the SOM and its contents, and vice versa. Another interesting aspect is that SOM components (attributes) can point to other RDF objects, which means that the SOM can link to artifacts in the semantic web. Usually, this means the ability to describe the type of an object or action by describing it with a particular RDF object (URI).
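• The linkage from SOM attributes to RDF resources can be sketched with plain (subject, predicate, object) triples; the URIs below are illustrative only, not part of the specification:

```python
# Sketch of SOM attributes pointing to RDF objects (URIs), modelled as
# plain (subject, predicate, object) triples; all URIs are illustrative.

SOM_PERSON = "enclosure://hospital-1/person/42"

triples = [
    # The entity's type is described by linking to an external RDF object.
    (SOM_PERSON, "rdf:type", "http://schema.org/Person"),
    # A SOM component points to a domain ontology term for the role.
    (SOM_PERSON, "som:role", "http://example.org/ontology#Nurse"),
]

def types_of(subject, graph):
    """Resolve the RDF types linked from a SOM entity."""
    return [o for s, p, o in graph if s == subject and p == "rdf:type"]
```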
• In some embodiments of the Spatial Object Model, the ECS (Entity, Component, System) paradigm is used to provide a flexible and extensible way to manage the complexity and evolution of the SOM with multi-faceted information overlays. In the ECS paradigm, entities correspond to physical and virtual objects, which are spatially situated within the physical space. Each object is modelled by an entity, and its state, attributes and characteristics are described by a set of typed components associated with that entity, where each component captures a particular facet of the entity, such as its location or color. Together, the components for an entity provide a multi-faceted view of the entity, from location to appearance to capabilities to state, etc. These components are processed by systems, which are stateless processors that perform functionalities based on a set of components, such as computing the spatial distance between entities.
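• A toy ECS sketch, in which a stateless "system" computes the spatial distance between two entities from their location components; the names and data layout are illustrative:

```python
# Toy ECS sketch: entities are ids, a component store maps entity id to
# component data, and a stateless system operates over components.
# All names here are illustrative assumptions.

import math

location = {}  # "location" component store: entity id -> (x, y)

def add_entity(eid, x, y):
    """Create an entity by populating its location component."""
    location[eid] = (x, y)

def distance_system(a, b):
    """Stateless system: spatial distance from the location components."""
    (ax, ay), (bx, by) = location[a], location[b]
    return math.hypot(ax - bx, ay - by)
```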
  • In some embodiments for Spatial Object Model, the SOM can be organized as a hierarchy of partitions. Each partition contains the subset of the SOM that fall into a particular region of the digital enclosure across a period of time, for example, linearly partition the SOM based on time windows, or hierarchically partition the SOM based on spatial structure. A SOM partition can be independently stored, analyzed, replicated and processed. Multiple SOM partitions can be merged to recreate the spatial and temporal dynamics across any region of space x time within the enclosure.
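• Linear partitioning of the SOM by time window, and the merge that recreates the timeline, might be sketched as follows; the event encoding is a simplification for illustration:

```python
# Sketch of linear SOM partitioning by time window. Events are modelled
# as (timestamp, payload) pairs; the window size is illustrative.

def partition_by_time(events, window):
    """Group events into independently storable time-window partitions."""
    partitions = {}
    for ts, payload in events:
        partitions.setdefault(ts // window, []).append((ts, payload))
    return partitions

def merge(partitions):
    """Recreate the full timeline from a set of partitions."""
    return sorted(e for part in partitions.values() for e in part)
```

Hierarchical partitioning by spatial structure would follow the same pattern, keyed by region instead of time window.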
• FIG. 5 shows examples of how the SOM can be used to describe the spatial context within an enclosure. An entity class defines an entity, such as "person", "thing", "ithing", "entrepreneur" and "vc". Inheritance is supported; for example, the "entrepreneur" class inherits from the "person" class. Root entity classes at the base of the inheritance hierarchy are called the core entity classes, which represent people, places, things and ithings (digitized things, such as devices). An entity class is defined based on one or more components; for example, the "thing" class has the "space" component and the "render" component. A component describes a particular aspect of an entity and contains a set of typed attributes. For example, the "space" component contains two attributes: a "loc" attribute that describes the geolocation of the corresponding entity, and an "orientation" attribute that describes the angular orientation relative to compass for the corresponding entity.
  • FIG. 6 shows the architecture and components for embodiments of the SOM Extension Framework (SEF) . The framework provides developers the ability to extend the SOM with new entities and components, in a way that enables the extensions to be fully synchronized to changes in the SOM. These new entities and components enrich the knowledge of the spatial context, for example for different domains, as well as enable join with existing knowledge and services that are available for the digital enclosure and its contents.
• In some embodiments, any change to the SOM, such as the creation of a new entity or component, or the change in state of an entity, component or relationship, generates an event that is sent to a SOM event bus. An event typically detects some state change within an entity, or across a collection of entities, within some span of time. An event can also depend on other events, such as through causality: if a person enters the room and then leaves after 5 minutes, this can trigger an event. An event can also be digitally sourced; for example, if a user "likes" some object or person, or when the weather changes to rain, this can also be an event that can then be analyzed and used in the context of physically sourced events. An event can also be a timer, which periodically emits an event that can in turn trigger other events and services.
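• A minimal sketch of the SOM event bus described above; the topic names and subscription interface are hypothetical:

```python
# Minimal publish/subscribe sketch of a SOM event bus; topic names and
# the subscription interface are illustrative assumptions.

class EventBus:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, fn):
        self.subscribers.setdefault(topic, []).append(fn)

    def publish(self, topic, payload):
        """Deliver a SOM change event to every subscriber of the topic."""
        for fn in self.subscribers.get(topic, []):
            fn(payload)
```

Causal and timer-based events would be built on top of this bus, with handlers that publish derived events when their conditions are met.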
  • The SOM Extension Framework provides an interface for custom event handlers to be defined and registered to a digital enclosure. Each event handler subscribes to a particular SOM event and, when triggered, responds by executing its handler computation. These event handlers can be used to create an extension layer of new annotations, entities and components that extends the SOM in a fully synchronized way. For example, whenever a new person entity is detected in the digital enclosure, such as through camera vision or a location tag, an event handler can be triggered to determine the role of this person by analyzing the SOM to assess physical characteristics (such as whether a uniform is worn) or virtual characteristics (such as whether the identity is registered). Once the role is determined, the person entity in the SOM is annotated with a new component that describes the role. In another example, when a person entity is detected standing in front of a store for a minimum period of time, a new event handler can annotate the person as being potentially interested in the store.
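The role-annotation example above can be sketched as a registered handler that reacts to an entity-detection event and attaches a new component. The registration functions and component names are illustrative assumptions.

```javascript
// Sketch: a custom event handler annotates a newly detected person entity
// with a "role" component, extending the SOM in a synchronized way.
const som = { entities: {} };
const handlers = {};

function registerHandler(eventType, fn) {
  (handlers[eventType] = handlers[eventType] || []).push(fn);
}
function emit(event) { (handlers[event.type] || []).forEach(fn => fn(event)); }

registerHandler("entityDetected", e => {
  const entity = som.entities[e.id];
  if (entity.class !== "person") return;
  // Assess a physical characteristic (e.g. whether a uniform is worn)
  // to infer the person's role, then annotate the entity.
  const wearsUniform = entity.components.appearance &&
                       entity.components.appearance.uniform;
  entity.components.role = { value: wearsUniform ? "staff" : "visitor" };
});

// A vision digitizer detects a person wearing a uniform.
som.entities["p1"] = { class: "person", components: { appearance: { uniform: true } } };
emit({ type: "entityDetected", id: "p1" });
```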
  • In some embodiments, the SOM Extension Framework supports custom event definitions, used to detect complex spatial and temporal conditions within an enclosure. For example, safe distancing between people to lower infection risks due to infectious diseases such as COVID-19 can be defined as a custom event that is triggered whenever two people are unsafely close to each other. The custom event definitions may be input to an event trigger engine, which compiles the custom event definitions into efficient and optimized automata that can resolve ambiguous events and multiple parallel event firings.
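The safe-distancing custom event can be sketched as a predicate over the current positions of person entities that yields event instances whenever two people are closer than a threshold. The threshold, function names, and data layout are assumptions for illustration; an actual event trigger engine would compile such definitions into optimized automata, as described above.

```javascript
// Sketch of a custom event definition: "unsafeProximity" fires for each
// pair of people closer than SAFE_DISTANCE_M metres.
const SAFE_DISTANCE_M = 2;

function distance(a, b) {
  return Math.hypot(a.loc.x - b.loc.x, a.loc.y - b.loc.y);
}

function unsafeProximityEvents(people) {
  const events = [];
  for (let i = 0; i < people.length; i++) {
    for (let j = i + 1; j < people.length; j++) {
      if (distance(people[i], people[j]) < SAFE_DISTANCE_M) {
        events.push({ type: "unsafeProximity", pair: [people[i].id, people[j].id] });
      }
    }
  }
  return events;
}

const people = [
  { id: "p1", loc: { x: 0, y: 0 } },
  { id: "p2", loc: { x: 1, y: 0 } },  // 1 m from p1: too close
  { id: "p3", loc: { x: 10, y: 0 } }, // well separated
];
const events = unsafeProximityEvents(people);
```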
  • FIG. 7 shows the architecture and components of a Spatial Application Programming Layer. This provides an interface layer for spatial applications to program against the SOM to deliver in-situ, context-aware tasks and services for a physical environment. The left side of the diagram shows the system that is deployed to digitize a particular physical environment, through the Enclosure Operating System and the SOM Extension Framework. The middle of the diagram shows the result of the digitization of the physical environment, to create a SOM model that describes the spatial, temporal and semantic context of the digital enclosure and the dynamics within, along with any extensions to the SOM through application frameworks. The right side of the diagram shows the programming and runtime interfaces that developers use to create and deploy spatial applications to the digital enclosure.
  • In some embodiments of the programming layer, spatial applications connect to the digital enclosure through a sequence of four steps. The first step is to register the spatial application to the digital enclosure; the application must be provisioned and permissioned by the enclosure operator, which defines the scope of what the application can access. The second step is for the spatial application to anchor itself to a particular entity or location within the digital enclosure. For example, an application may be running on the smartphone of an occupant, in which case the application will anchor itself to the corresponding person entity within the digital enclosure. Spatial anchoring is important to define the spatial context for a given application. The third step is to subscribe to a set of events from the context (SOM), providing handlers to respond to relevant changes and events within the context. Then, as part of the application logic, the fourth step is to access the SOM, resources, or services within the EOS to deliver context-aware services and experiences to occupants and the physical environment.
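The four connection steps can be sketched end to end. The enclosure API surface here (`register`, `anchorTo`, `subscribe`) is a hypothetical rendering of the steps, not a documented interface.

```javascript
// Sketch of the four-step connection of a spatial application to an enclosure.
const enclosure = {
  apps: {}, anchors: {}, subs: {},
  register(appId, scope) { this.apps[appId] = { scope }; },       // step 1
  anchorTo(appId, entityId) { this.anchors[appId] = entityId; },  // step 2
  subscribe(appId, type, handler) {                               // step 3
    (this.subs[type] = this.subs[type] || []).push({ appId, handler });
  },
  emit(event) { (this.subs[event.type] || []).forEach(s => s.handler(event)); },
};

const delivered = [];
// Step 1: register with an operator-permissioned access scope.
enclosure.register("guideApp", { read: ["person", "place"] });
// Step 2: anchor to the occupant's person entity.
enclosure.anchorTo("guideApp", "person:42");
// Step 3: subscribe to relevant SOM events.
enclosure.subscribe("guideApp", "enteredZone",
  e => delivered.push(`welcome to ${e.zone}`));
// Step 4: application logic runs as the context changes.
enclosure.emit({ type: "enteredZone", zone: "lobby" });
```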
  • In some cases, spatial application development is modelled in a similar way as Web development. In Web programming using DOM + JavaScript, JavaScript scripts are loaded into a Web document to manipulate the DOM in response to DOM or system events. In spatial application programming, SOM replaces DOM, and the JavaScript scripts are programmed against SOM and SOM events as opposed to DOM and DOM events.
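The DOM analogy above can be made concrete. Where a Web script might call `document.addEventListener("click", handler)` and mutate DOM nodes, the SOM analogue (sketched here with an assumed, DOM-like interface) subscribes to a SOM event and mutates SOM entities instead.

```javascript
// Sketch: SOM programming in the style of DOM event handling.
const som = {
  entities: { "door1": { components: { state: { open: false } } } },
  listeners: {},
  addEventListener(type, fn) {
    (this.listeners[type] = this.listeners[type] || []).push(fn);
  },
  dispatch(event) { (this.listeners[event.type] || []).forEach(fn => fn(event)); },
};

// Analogue of a DOM click handler: respond to a physical event by
// updating the model (e.g. to drive a door actuator).
som.addEventListener("personApproached", e => {
  som.entities[e.target].components.state.open = true;
});

som.dispatch({ type: "personApproached", target: "door1" });
```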
  • In some embodiments, spatial applications may be portable across multiple digital enclosures, leveraging the SOM to abstract away the details of the underlying hardware, models and software used to digitize and model the spatial context for different digital enclosures. For example, an application responding to the movement of a person can work regardless of whether the location is provided through UWB or through computer vision. The ability to create portable spatial applications that work across many digital enclosures greatly expands the opportunity for developers to create value, which benefits users and space operators through the availability of many more spatial, context-aware applications.
  • Software applications may also share data (events, entities, relations) by extending the shared context with app-specific events and services, which can in turn be used by other applications to implement useful cross-app experiences.
  • In some embodiments, spatial applications can be registered to an app store, similar to mobile app stores for iOS and Android. The app store provides a marketplace for developers to reach more customers, and a way for users and space operators to discover more applications. Since a digital enclosure may have different levels of digitization, the resulting SOM may have different levels of quality and completeness. Because of this, a qualification step is performed to determine whether a spatial application can be deployed into a given digital enclosure, by comparing the application's required level of quality and completeness against the actual level provided by the enclosure.
  • In some embodiments, digitization of a physical environment is delivered as a service to space operators, where a service provider creates and manages a set of digitization modules (digitizers) and assists the space operator to deploy these digitizers to their physical environments.
  • In some aspects, a system or apparatus to enable the efficient development of spatial and context-aware software applications that can perform tasks and deliver services in-situ to people and activities within a physical environment, includes the following:
  • a) A digitization component (digitization modules) that creates a digital model of the spatial, temporal and semantic context for a physical environment, including the people, places, things, their activities and relationships over time, contained in a digital enclosure.
  • b) An  application programming interface component for the context, in the form of Spatial Object Model (SOM) . SOM provides programmatic interfaces, supported by  the digitization component above, to enable developers to access, navigate, change or interact with the occupants and contents of a digital enclosure in a programmatic way.
  • c) A  domain-specific component that enables efficient enrichment and augmentation of the SOM to represent new domain knowledge, information and services, in a fully synchronized way with the SOM. Through an extension component, multiple domain frameworks can be created to overlay SOM with domain-specific semantics that are appropriate for different domains, such as for the retail domain, manufacturing domain, office domain and hospital domain.
  • d) An  application component that enables developers to create and deploy spatial applications that are context-aware and are delivered to multiple application targets, by programming against the SOM in a similar way as programming against the DOM in Web-based programming. By using the SOM to represent the spatial context of physical spaces, spatial applications can focus on the tasks to perform, and not the effort to model the context.
  • The digitization component may further include any of the following:
  • · An enclosure operating system is deployed to transform a digital enclosure into an intelligent computing apparatus that can sense, reason and interact with the corresponding physical environment and its occupants. The enclosure operating system may include five layered components: an infrastructure layer, spatial structure layer, digitization layer, intelligence layer and application layer. The spatial structure layer may be described as a 3D point cloud and/or 3D CAD model and serves as the anchor representation against which all the devices, entities, data and computations within the digital enclosure are bound and indexed.
  • · The enclosure operating system deploys and manages a set of extensible and containerized digitization modules or digitizers, which can programmatically define  the distributed computations that connect and process data streams from a set of sensors, through AI models and software modules, to distill knowledge about some aspects of the physical space and the dynamics within.
  • · Digitizers can model aspects of the physical environment, including but not restricted to airflow, temperature, luminosity, acoustics, electromagnetics, air quality and barometric pressure. These digitizers compute the gradient of measurements within the spatial structure of the enclosure, based on sensor readings, while factoring in the physical laws of environmental dynamics and the occupancy within the physical space.
  • · Digitizers can model aspects of the dynamics of people and things within the physical space, including but not restricted to their identity, position, orientation, velocity, body pose, gestures, clothing, attachments, expressions, actions and interactions with other people or the environment. Different digitizers can use different types of sensors and models, such as computer vision, thermal sensing or radiofrequency modeling, to distill specific knowledge about these dynamics. The separately distilled knowledge from different digitizers is then fused together, using the spatial structure to index and merge against, to create a consistent, integrated and multi-dimensional view of the context for a digital enclosure.
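The fusion step above can be sketched as a merge of per-digitizer outputs keyed by a shared spatial index. The cell-based index and the output shapes are assumptions for illustration; the actual spatial structure may be a point cloud or CAD model.

```javascript
// Sketch: fuse knowledge from a vision digitizer and a thermal digitizer
// into one view, using a common spatial index as the merge key.
const visionOut  = { "cell:3,4": { bodyPose: "standing", clothing: "uniform" } };
const thermalOut = { "cell:3,4": { bodyTempC: 36.7 } };

function fuse(...digitizerOutputs) {
  const fused = {};
  for (const out of digitizerOutputs) {
    for (const [cell, facts] of Object.entries(out)) {
      // Merge facts for the same spatial cell into one record.
      fused[cell] = Object.assign({}, fused[cell] || {}, facts);
    }
  }
  return fused;
}

const view = fuse(visionOut, thermalOut);
```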
  • The application programming interface component may further include any of the following:
  • · The SOM is modelled using the Resource Description Framework (RDF) and organized according to the Entity-Component-System (ECS) programming paradigm to enable decentralized and independent extensibility of the SOM to new knowledge extensions and overlays. In the ECS paradigm, a SOM entity is defined by a set of typed components, where each typed component defines a collection of typed attributes that can contain any value or a reference to an external resource (URI).
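One way to picture the ECS-over-RDF modelling is that each typed attribute of an entity becomes a (subject, predicate, object) triple keyed by the entity's URI. The URI scheme and predicate naming below are invented for illustration.

```javascript
// Sketch: storing SOM attributes as RDF-style triples.
const triples = [];

function setAttribute(entityUri, component, attribute, value) {
  // Predicate encodes the typed component and attribute name.
  triples.push([entityUri, `som:${component}/${attribute}`, value]);
}

setAttribute("urn:enclosure1:thing:printer1", "space", "loc", "POINT(121.5 31.2)");
setAttribute("urn:enclosure1:thing:printer1", "space", "orientation", 90);

// Any system can query the triples independently of who produced them.
const locTriple = triples.find(t => t[1] === "som:space/loc");
```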
  • · Every physical object within a digital enclosure is associated with a globally unique Uniform Resource Identifier (URI) and is described by a logical SOM entity. Multiple SOM entity instances can refer to the same logical SOM entity, where each SOM entity instance describes a particular facet of the physical entity through a subset of the components. The ability to model a logical SOM entity across multiple entity instances enables decentralized processing, and it supports independent and distributed development and evolution of systems that can model and enrich the digital enclosure from different perspectives.
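The logical-entity pattern can be sketched as follows: two independently produced entity instances share a URI, and a resolver merges their components on read. The URI format and `resolve` helper are illustrative assumptions.

```javascript
// Sketch: one logical SOM entity described by multiple entity instances,
// each contributed by a different system.
const instances = [
  // Instance from a vision digitizer: the physical facet.
  { uri: "urn:e1:person:p7", components: { space: { loc: [3, 4] } } },
  // Instance from a calendar service: a semantic facet.
  { uri: "urn:e1:person:p7", components: { schedule: { nextMeeting: "10:00" } } },
];

// Resolve a logical entity by merging the components of all its instances.
function resolve(uri) {
  return instances
    .filter(i => i.uri === uri)
    .reduce((acc, i) => Object.assign(acc, i.components), {});
}

const person = resolve("urn:e1:person:p7");
```

Because each instance only carries a subset of components, the vision system and the calendar service can evolve independently while still describing the same physical person.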
  • · A registry service is provided to enable efficient lookup of any physical object to its corresponding SOM entity. This registry service accepts as input any reference to a physical object, through a variety of methods including but not restricted to QR code scanning, scanning by smartphone camera, selection of a target object through AR, or a natural language reference to the object based on context (e.g., the printer on the table). The registry and lookup service enable end users to efficiently identify and interact with any physical object within a physical space, as well as enable developers to programmatically interact with the artifacts within the digital enclosure.
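The registry can be sketched as a service that maps any supported reference form to the same SOM entity URI. The lookup tables and URI below are illustrative placeholders; real resolution of natural-language or AR references would involve the models described elsewhere in this disclosure.

```javascript
// Sketch: a registry mapping external references (QR code, natural-language
// phrase) to SOM entity URIs.
const registry = {
  byQr: { "QR-0042": "urn:e1:thing:printer1" },
  byPhrase: { "the printer on the table": "urn:e1:thing:printer1" },
  lookup(ref) {
    if (this.byQr[ref]) return this.byQr[ref];
    return this.byPhrase[ref.toLowerCase()] || null;
  },
};

// Both reference forms resolve to the same logical entity.
const viaQr = registry.lookup("QR-0042");
const viaPhrase = registry.lookup("The printer on the table");
```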
  • For the domain-specific component, the SOM can be extended to represent domain-specific semantics and knowledge, by creating a set of new domain-specific SOM entities, components and attributes, then registering a set of event handlers to synchronize changes in the SOM to updates of the extended knowledge and services. Domain frameworks can then be created to represent the semantics of the roles, activities, services and outcomes for different types of physical environments, including but not restricted to retail stores, hospitals, factories, events, nursing homes and schools.
  • The application component may further include any of the following:
  • · Developers of spatial applications program against the SOM in a similar way as Web developers program against the DOM to create Web programs. Developers create and package a set of code, in JavaScript or in other language bindings, that is deployed to a digital enclosure, enabling the application to respond to events in the SOM and to perform tasks and computations, including updating the SOM.
  • · Spatial applications can be delivered to multiple application target endpoints, including but not restricted to (1) end-user smartphones, through mobile-based native apps, AR, or voice assistants, (2) directly to the physical space, through enclosure-based actuators, devices and robots, and (3) simulation environments, such as Unity3D or custom simulators, that enable developers to visualize and simulate the application against simulated real-world scenarios of the physical space.
  • · An app store for spatial applications is created, into which developers publish their spatial applications, and from which space operators can discover and deploy spatial applications into their physical environments that have been set up as described above. Software developers use the SOM abstraction and enclosure operating system to develop portable context-aware applications that can work across a variety of physical environments.
  • · A library of digitizers is created, to enable developers with expertise in specific hardware, AI models and software to create modular and reusable digitizers, which can be adopted and deployed into a digital enclosure through the support of the technologies described above. An open and extensible digitization library can significantly increase the reusability, composability and interoperability across IoT devices, AI models and software modules, enabling efficient digitization of physical environments and the dynamics within.
  • Although the detailed description contains many specifics, these should not be construed as limiting the scope of the invention but merely as illustrating different examples. It should be appreciated that the scope of the disclosure includes other embodiments not discussed in detail above. Various other modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope as defined in the appended claims. Therefore, the scope of the invention should be determined by the appended claims and their legal equivalents.
  • Alternate embodiments are implemented in computer hardware, firmware, software and/or combinations thereof. Implementations can be implemented in a computer program product tangibly embodied in a computer-readable storage device for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions by operating on input data and generating output. Embodiments can be implemented advantageously in one or more computer programs that are executable on a programmable computer system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Generally, a computer will include one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for  tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits) , FPGAs and other forms of hardware.

Claims (53)

  1. A system comprising:
    a plurality of software-defined digital enclosures bound to corresponding physical spaces; wherein the digital enclosures comprise enclosure contexts that capture spatial, temporal and semantic states of the corresponding physical spaces and dynamics thereof, and the bindings update the enclosure contexts based on sensor data captured by sensors that monitor the physical spaces; and
    an application programming interface (API) that provides programmatic access to the digital enclosures to a plurality of software applications, wherein the software applications access the digital enclosures via the API and utilize the enclosure contexts of the digital enclosures.
  2. The system of claim 1 wherein the enclosure contexts capture which physical objects exist in the corresponding physical spaces.
  3. The system of claim 2 wherein the physical objects in the corresponding physical spaces include people, places and things in the corresponding physical spaces.
  4. The system of claim 2 wherein the enclosure contexts also capture attributes of the physical objects in the corresponding physical spaces.
  5. The system of claim 4 wherein the attributes of physical objects include locations of physical objects in the physical spaces, orientations of physical objects in the physical spaces, and physical states of physical objects.
  6. The system of claim 4 wherein the attributes of physical objects include services provided by physical objects, sensor data collected by physical objects, and actions performable by physical objects.
  7. The system of claim 2 wherein the enclosure contexts also capture relations between physical objects in the same physical space.
  8. The system of claim 7 wherein the relations between physical objects include spatial relations, temporal relations and semantic relations.
  9. The system of claim 2 wherein the digital enclosures also capture actions of physical objects.
  10. The system of claim 1 wherein the enclosure contexts also capture events occurring in the corresponding physical spaces.
  11. The system of claim 10 wherein the events comprise changes in states of physical objects in the corresponding physical spaces.
  12. The system of claim 10 wherein the events depend on occurrence of other captured events.
  13. The system of claim 1 wherein the digital enclosures further comprise connectors for sharing data between digital enclosures.
  14. The system of claim 13 wherein the connectors comprise connectors that push data from one digital enclosure to another, and connectors that pull data to one digital enclosure from another.
  15. The system of claim 1 wherein the digital enclosures further comprise access to external resources.
  16. The system of claim 15 wherein the external resources include sensors that monitor the corresponding physical spaces, controls that change the corresponding physical spaces, digitization modules that process the sensor data and update the enclosure contexts based on the processed sensor data, services external to the digital enclosures, and data not captured by sensors that monitor the corresponding physical spaces.
  17. The system of claim 1 wherein the digital enclosures further comprise relations between digital enclosures, services internal to the digital enclosures, and roles defining access to digital enclosures.
  18. The system of claim 1 wherein the digital enclosures are hierarchical.
  19. The system of claim 1 wherein the digital enclosures are related based on a hierarchy of the corresponding physical spaces.
  20. The system of claim 1 wherein the digital enclosures and enclosure contexts are programmable.
  21. The system of claim 1 wherein the digital enclosures and enclosure contexts are based on a framework that is programmatically extensible.
  22. The system of claim 1 wherein the digital enclosures further comprise virtual objects located in the corresponding physical spaces.
  23. The system of claim 1 wherein the digital enclosures also capture digital events occurring in the digital enclosures which are not physical events occurring in the corresponding physical spaces.
  24. The system of claim 1 wherein the API comprises a spatial object model (SOM) representation of the enclosure contexts.
  25. The system of claim 24 wherein the SOM is organized according to an Entity-Component-System (ECS) programming paradigm.
  26. The system of claim 25 wherein the entities are defined by typed components, the entities represent physical objects in the corresponding physical spaces and the typed components capture attributes of the physical objects.
  27. The system of claim 25 wherein, within the digital enclosures, unique identifiers reference physical objects in the corresponding physical spaces.
  28. The system of claim 25 wherein, within the digital enclosures, logical entities represent physical objects in the corresponding physical spaces; and logical entities comprise a set of one or more entity instances and different entity instances capture different aspects of the physical objects.
  29. The system of claim 24 wherein the SOM is described using a Resource Description Framework.
  30. The system of claim 1 wherein the digital enclosures include references to physical objects in the physical spaces, and the system further comprises:
    a registry of physical objects and the corresponding references in the digital enclosures.
  31. The system of claim 30 wherein the references comprise at least one of an entity in the digital enclosures or a unique identifier used in the digital enclosures.
  32. The system of claim 1 wherein the bindings update the enclosure contexts in real-time.
  33. The system of claim 1 further comprising:
    digitization modules that process the captured sensor data and update the enclosure contexts based on the processed sensor data.
  34. The system of claim 33 wherein the plurality of digitization modules include AI models.
  35. The system of claim 33 wherein the plurality of digitization modules comprise digitization modules that monitor at least one physical quantity selected from airflow, temperature, luminosity, acoustics, electromagnetics, air quality, and barometric pressure in the corresponding physical spaces.
  36. The system of claim 35 wherein the digitization modules monitor the physical quantity based on data captured by the sensors and on spatial dynamic models of the physical quantity in the corresponding physical spaces.
  37. The system of claim 33 wherein the plurality of digitization modules comprise digitization modules that monitor dynamics of physical objects in the corresponding physical  spaces, including at least one of identity, position, orientation, velocity, body pose, gestures, clothing, attachments, expressions, actions and interactions with other objects.
  38. The system of claim 33 wherein the plurality of digitization modules comprise a library of digitization modules.
  39. The system of claim 38 wherein the digitization modules in the library are reusable.
  40. The system of claim 1 further comprising:
    an enclosure operating system that manages the digital enclosures, their bindings to the corresponding physical spaces, services running in the digital enclosures, and access of the plurality of software applications to the digital enclosures.
  41. The system of claim 40 wherein the enclosure operating system comprises a multi-layer technology stack including an infrastructure layer, a spatial structure layer, a digitization layer, and an intelligence layer; and the digital enclosures are bound at the spatial structure layer.
  42. The system of claim 1 wherein the API supports domain-specific extensions for specific types of physical spaces.
  43. The system of claim 42 wherein the domain-specific extensions include a domain-specific API for at least one of retail stores, hospitals, factories, events, nursing homes, and schools.
  44. The system of claim 1 wherein the software applications execute actions based on the enclosure contexts.
  45. The system of claim 1 wherein at least one of the software applications updates the enclosure contexts based on results of processing performed by the software application.
  46. The system of claim 1 wherein at least one of the software applications alters the binding based on results of processing performed by the software application.
  47. The system of claim 1 wherein at least one of the software applications executes actions in the physical spaces.
  48. The system of claim 1 wherein at least one of the software applications delivers services to users located in the physical spaces.
  49. The system of claim 1 wherein the software applications comprise packages of code that are deployed to the digital enclosures.
  50. The system of claim 1 further comprising:
    a second plurality of digital enclosures that are bound to simulations of physical spaces.
  51. The system of claim 1 wherein the software applications are available at an app store and are deployable against digital enclosures for different physical spaces.
  52. A method carried out by operation of any of the systems of claims 1-51.
  53. A non-transitory computer-readable storage medium storing executable computer program instructions, the instructions executable by a computer system and causing the computer system to perform any of the methods of claim 52.
EP21850733.3A 2020-07-31 2021-07-30 Spatial and context aware software applications using digital enclosures bound to physical spaces Pending EP4168903A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063059871P 2020-07-31 2020-07-31
PCT/CN2021/109477 WO2022022668A1 (en) 2020-07-31 2021-07-30 Spatial and context aware software applications using digital enclosures bound to physical spaces

Publications (1)

Publication Number Publication Date
EP4168903A1 true EP4168903A1 (en) 2023-04-26

Family

ID=80037644

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21850733.3A Pending EP4168903A1 (en) 2020-07-31 2021-07-30 Spatial and context aware software applications using digital enclosures bound to physical spaces

Country Status (3)

Country Link
EP (1) EP4168903A1 (en)
CN (1) CN116324769A (en)
WO (1) WO2022022668A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8719066B2 (en) * 2010-08-17 2014-05-06 Edifice Technologies Inc. Systems and methods for capturing, managing, sharing, and visualising asset information of an organization
CN106776625A (en) * 2015-11-23 2017-05-31 璧典凯 The Context-aware System Architecture that a kind of pragmatic web drives
CN111033444B (en) * 2017-05-10 2024-03-05 优玛尼股份有限公司 Wearable multimedia device and cloud computing platform with application ecosystem
US20200210804A1 (en) * 2018-12-31 2020-07-02 Qi Lu Intelligent enclosure systems and computing methods

Also Published As

Publication number Publication date
CN116324769A (en) 2023-06-23
WO2022022668A1 (en) 2022-02-03

Similar Documents

Publication Publication Date Title
US11636928B1 (en) Facilitating computerized interactions with EMRS
Chen et al. Runtime model based approach to IoT application development
US10679133B1 (en) Constructing and utilizing a knowledge graph for information technology infrastructure
Meyer et al. Internet of things-aware process modeling: integrating IoT devices as business process resources
Patel et al. Enabling high-level application development for the Internet of Things
Cubo et al. A cloud-based Internet of Things platform for ambient assisted living
O’Grady et al. Towards evolutionary ambient assisted living systems
Wang et al. DIMP: an interoperable solution for software integration and product data exchange
Wang et al. A collaborative product data exchange environment based on STEP
Patel et al. A model-driven development framework for developing sense-compute-control applications
Saunders et al. AUTOPILOT: Automating experiments with lots of Raspberry Pis
Corredor et al. Model-driven methodology for rapid deployment of smart spaces based on resource-oriented architectures
Nazarenko et al. Basis for an approach to design collaborative cyber-physical systems
Silva et al. Cyber-Physical Systems: a multi-criteria assessment for Internet-of-Things (IoT) systems
Khan et al. Rapid development of a data visualization service in an emergency response
Nam et al. Business-aware framework for supporting RFID-enabled applications in EPC Network
JP2016004359A (en) Opc ua server creation method
WO2022022668A1 (en) Spatial and context aware software applications using digital enclosures bound to physical spaces
Hammoudi et al. Model driven development of user-centred context aware services
Bucur et al. Multi-cloud resource management techniques for cyber-physical systems
Adão et al. Prototyping IoT-based virtual environments: an approach toward the sustainable remote Management of Distributed Mulsemedia Setups
Loke Building taskable spaces over ubiquitous services
Dey Explanations in Context-Aware Systems.
Reiterer et al. A Graph-Based Metadata Model for DevOps in Simulation-Driven Development and Generation of DCP Configurations
Erbel Scientific Workflow Execution Using a Dynamic Runtime Model

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230120

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)