WO2022221532A1 - Immersive collaboration of remote participants via media displays - Google Patents

Immersive collaboration of remote participants via media displays

Info

Publication number
WO2022221532A1
WO2022221532A1 (application PCT/US2022/024812)
Authority
WO
WIPO (PCT)
Prior art keywords
media display
media
display
user
disposed
Prior art date
Application number
PCT/US2022/024812
Other languages
English (en)
Inventor
Tanya MAKKER
Nitesh Trikha
Brian Lee SMITH
Keivan EBRAHIMI
Todd Daniel ANTES
Aditya Dayal
Amit Sarin
Original Assignee
View, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/US2021/027418 (WO2021211798A1)
Priority claimed from US 17/300,303 (US20210383804A1)
Priority claimed from US 17/313,760 (US20230103284A9)
Priority claimed from US 17/338,562 (US11231633B2)
Priority claimed from PCT/US2021/052595 (WO2022072461A2)
Application filed by View, Inc.
Publication of WO2022221532A1

Classifications

    • E FIXED CONSTRUCTIONS
    • E06 DOORS, WINDOWS, SHUTTERS, OR ROLLER BLINDS IN GENERAL; LADDERS
    • E06B FIXED OR MOVABLE CLOSURES FOR OPENINGS IN BUILDINGS, VEHICLES, FENCES OR LIKE ENCLOSURES IN GENERAL, e.g. DOORS, WINDOWS, BLINDS, GATES
    • E06B7/30 Peep-holes; Devices for speaking through; Doors having windows (under E06B7/00 Special arrangements or measures in connection with doors or windows; E06B7/28 Other arrangements on doors or windows)
    • E06B3/6715 Units comprising two or more parallel glass or like panes permanently secured together, characterised by additional arrangements or devices for heat or sound insulation or for controlled passage of light, specially adapted for increased thermal insulation or for controlled passage of light
    • E06B9/24 Screens or other constructions affording protection against light, especially against sunshine; similar screens for privacy or appearance; slat blinds
    • E06B2009/2464 Screens or other constructions featuring transparency control by applying voltage, e.g. LCD, electrochromic panels

Definitions

  • U.S. Patent Application Serial No. 17/300,303 was initially filed as U.S. Provisional Patent Application Serial No. 63/181,648, filed on April 29, 2021, and titled “IMMERSIVE COLLABORATION OF REMOTE PARTICIPANTS VIA MEDIA DISPLAYS,” and then converted from a U.S. Provisional Patent Application to a U.S. Non-Provisional Application, specifically U.S. Patent Application Serial No. 17/300,303.
  • 17/313,760 and 17/300,303 are both also a Continuation-in-Part of International Patent Application Serial No. PCT/US21/27418, filed April 15, 2021, titled “INTERACTION BETWEEN AN ENCLOSURE AND ONE OR MORE OCCUPANTS,” that claims priority from U.S. Provisional Patent Application Serial No. 63/080,899, filed September 21, 2020, titled “INTERACTION BETWEEN AN ENCLOSURE AND ONE OR MORE OCCUPANTS,” to U.S. Provisional Patent Application Serial No. 63/052,639, filed July 16, 2020, titled “INDIRECT INTERACTIVE INTERACTION WITH A TARGET IN AN ENCLOSURE,” and to U.S.
  • U.S. Patent Application Serial No. 16/664,089 is a Continuation-In-Part of International Patent Application Serial No. PCT/US18/29460, filed April 25, 2018, that claims priority to U.S. Provisional Patent Application Serial No. 62/607,618, to U.S. Provisional Patent Application Serial No. 62/523,606, to U.S. Provisional Patent Application Serial No. 62/507,704, to U.S. Provisional Patent Application Serial No. 62/506,514, and to U.S. Provisional Patent Application Serial No. 62/490,457, filed April 26, 2017, titled “ELECTROCHROMIC WINDOWS WITH TRANSPARENT DISPLAY TECHNOLOGY”.
  • 17/313,760 and 17/300,303 are both also a Continuation-In-Part of U.S. Patent Application Serial No. 17/081,809, filed October 27, 2020, titled “TINTABLE WINDOW SYSTEM COMPUTING PLATFORM,” which is a Continuation of U.S. Patent Application Serial No. 16/608,159, filed October 24, 2019, titled “TINTABLE WINDOW SYSTEM COMPUTING PLATFORM,” that is a National Stage Entry of International Patent Application Serial No. PCT/US18/29406, filed April 25, 2018, titled “TINTABLE WINDOW SYSTEM COMPUTING PLATFORM,” which claims priority to U.S. Provisional Patent Application Serial No. 62/607,618, U.S.
  • This application is also a Continuation-In-Part of PCT/US21/52587, filed September 29, 2021, titled “DISPLAY CONSTRUCT FOR MEDIA PROJECTION AND WIRELESS CHARGING,” which claims priority from U.S. Provisional Patent Application Serial No. 63/246,770, filed September 21, 2021, titled “DISPLAY CONSTRUCT FOR MEDIA PROJECTION AND WIRELESS CHARGING,” U.S.
  • PCT/US21/52587, PCT/US21/52595 and PCT/US21/52597 claim priority from International Patent Application Serial No. PCT/US20/53641, filed September 30, 2020, titled “TANDEM VISION WINDOW AND TRANSPARENT DISPLAY.”
  • This disclosure relates generally to an improved digital experience that provides users an enhanced immersive experience, which simulates common presence of a virtual participant (and optional related virtual auxiliary content) and physically present participants in a conference.
  • In various facilities (e.g., buildings), windows provide a way to view an environment external to the facility.
  • the window may take a substantial portion (e.g., more than about 30%, 40%, 50%, or 80% of a surface area) of a facility facade.
  • Users may request utilization of at least a portion of the window surface area to view various media.
  • the media may be for entertainment, educational, safety, health, and/or work purposes.
  • the media may facilitate processing, presenting, and/or sharing data.
  • the media may be for the purpose of conducting a conference, such as in the form of a video conference with one or more remote parties.
  • a user may want to optimize usage of an interior space of the facility to visualize the media (e.g., by using the window surface area).
  • the media may comprise an electronic media, digital media, and/or optical media.
  • a user may request viewing the media with an ability to view through at least a portion of the window (e.g., with minimal impact on visibility through the window).
  • the media may be displayed via a display that is at least partially transparent (e.g., to visible light). At times viewing the media may require a tinted (e.g., darker) backdrop.
  • a user may want to augment external views and/or projections of the display with overlays, augmented reality, and/or lighting.
  • the user interaction may occur by way of media display construct(s) and imaging device(s).
  • the imaging device may be associated with one or more interactive targets in an enclosure.
  • the interactive target(s) can comprise an optically switchable device (e.g., tintable window of a facility), projected media, environmental appliance, sensor, emitter, and/or any other apparatus that is communicatively coupled to a network in an enclosure, which network facilitates power and/or communication.
  • in some embodiments, included in these device(s) are optically switchable window(s).
  • Electrochromic windows are a promising class of optically switchable windows. Electrochromism is a phenomenon in which a material exhibits a reversible electrochemically-mediated change in one or more optical properties when stimulated to a different electronic state. Electrochromic materials and the devices made from them may be incorporated into, for example, windows for home, commercial, or other use.
  • the color, shade, hue, tint, transmittance, absorbance, and/or reflectance of electrochromic windows can be changed, e.g., by inducing a change in the electrochromic material, for example by applying a voltage across the electrochromic material.
  • Such capabilities can allow for control over the intensities of various (e.g., visible light) wavelengths of light that may pass through the window.
  • One area of interest is control systems for driving optical transitions in optically switchable windows to provide requested lighting conditions, e.g., while reducing the power consumption of such devices and/or improving the efficiency of systems with which they are integrated.
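To make the voltage-driven tint change above concrete, here is a minimal Python sketch; the tint levels, voltages, and apply_voltage interface are invented for illustration and are not taken from this disclosure:

```python
# Minimal sketch of driving an electrochromic window to a requested tint
# level. The tint levels, voltages, and apply_voltage() interface are
# hypothetical; real electrochromic controllers use device-specific drive
# profiles and feedback.

TINT_LEVELS = {0: 0.0, 1: 0.5, 2: 1.0, 3: 1.5}  # level -> drive voltage (V)

def apply_voltage(window_id: str, volts: float) -> None:
    """Stand-in for the hardware/network call that energizes the window."""
    print(f"window {window_id}: applying {volts:.2f} V")

def set_tint(window_id: str, level: int) -> None:
    if level not in TINT_LEVELS:
        raise ValueError(f"unsupported tint level: {level}")
    apply_voltage(window_id, TINT_LEVELS[level])

set_tint("facade-w12", 2)  # darken window to level 2
```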
  • Various embodiments herein relate to methods, systems, software, and networks for providing an immersive experience, which simulates common presence of virtual participant(s) and/or related virtual auxiliary content, and present (e.g., local) participant(s) in a conference (e.g., enabled by video conferencing).
  • Such simulation may include (i) using an at least partially transparent media display having a portion of its projecting entities (e.g., pixels) projecting the virtual participant’s image and/or (e.g., select) virtual auxiliary content, while keeping at least a portion of the background at least partially transparent (e.g., to visible light), (ii) optionally disposing optical sensor(s) (e.g., included in a camera) behind the transparent media display at the gaze of the participant, and (iii) optionally using added virtual overlays (e.g., plants, furniture) to the virtual image that are consistent with the local environment, which virtual overlays appear perspectively close to the local participants, e.g., to provide a sense of depth ranging from the overlays to the virtual participant projection and/or to the background showing through the transparent media display.
  • Placement of the optical sensor(s) (e.g., camera) behind and at the gaze of the real participant may allow the participant to view the virtual participant while simultaneously being photographed at the real (e.g., actual) participant’s gaze (e.g., focal point).
  • the transparent media display can include touchscreen functionality, e.g., for shared access to any auxiliary documents (e.g., a virtual whiteboard), e.g., making it seem as if the users are sharing the same physical document in real time.
  • a method for digital collaboration comprises:
  • the second processor is operatively coupled to an other sensor configured to capture at least one second user in the second location
  • the communication link comprises a machine to machine communication
  • the portion of the media stream which is suppressed comprises a region around an other portion of the media stream which depicts the second user.
  • the first media display comprises a transparent display, and wherein the portion of the media stream which is suppressed facilitates at least partial viewing of the first location of the first user through the transparent display. In some embodiments, the transparent display facilitates transmission of at least about 30% of light in the visible spectrum therethrough.
  • the transparent display facilitates transmission of from about 20% to about 90% of light in the visible spectrum therethrough.
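A minimal sketch of the suppression described above, assuming an RGB frame and a person-segmentation mask are already available (the suppress_background helper and mask source are assumptions, not the disclosure's stated implementation):

```python
# Minimal sketch of suppressing part of a media stream on a transparent
# display: pixels outside the remote participant's silhouette are not
# projected (alpha 0), so the local room remains visible through the
# display. Assumes an RGB frame and a person-segmentation mask are
# already available (e.g., from an off-the-shelf segmentation model).
import numpy as np

def suppress_background(frame: np.ndarray, person_mask: np.ndarray) -> np.ndarray:
    """Return an RGBA frame whose background region is fully transparent.

    frame: HxWx3 uint8 RGB image of the remote participant.
    person_mask: HxW bool array, True where the participant is depicted.
    """
    h, w, _ = frame.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    rgba[..., :3] = frame
    rgba[..., 3] = np.where(person_mask, 255, 0)  # opaque person, clear backdrop
    return rgba
```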
  • the first media display is coupled to a tintable window.
  • the tintable window alters visibility, color, transmission, and/or reflectance of visible light.
  • the tintable window comprises an electrochromic device. In some embodiments, the electrochromic device is included in an insulated glass unit configured for installation in an enclosure. In some embodiments, the transparent display spans at least about 30% of an area of the tintable window.
  • the transparent display spans from about 10% to about 100% of an area of the tintable window. In some embodiments, the tintable window is coupled to a control system configured for adjusting a tint of the tintable window. In some embodiments, the control system comprises, or is operatively coupled to, a building management system. In some embodiments, the control system comprises a distributed network of controllers. In some embodiments, the control system comprises a hierarchical control system in which a master controller is configured to control one or more local controllers. In some embodiments, the control system comprises a controller that is included in a device ensemble, wherein the device ensemble is disposed in an enclosure.
  • the device ensemble comprises (i) sensors or (ii) a sensor and an emitter. In some embodiments, the device ensemble is disposed in a fixture (e.g., framing portion, ceiling, or wall). In some embodiments, the device ensemble is disposed in a non-fixture (e.g., furniture, a billboard, or another tangible and movable asset).
  • the device ensemble comprises (i) a plurality of processors or (ii) a plurality of circuit boards. In some embodiments, the method further comprises (C) displaying on the first media display at least one virtual object which depicts a furnishing that spatially appears to be disposed between (i) the first user and (ii) the media stream displayed on the first media display.
  • the at least one virtual object is displayed so that it provides an apparent depth which is in front of an apparent depth of the depiction of the second user (see the sketch below). In some embodiments, the at least one virtual object is configured to flank a depiction of the at least one second user at least during a portion of streaming the media stream of the at least one second user.
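A minimal sketch of the layering idea referenced above: compositing the virtual furnishing after the participant layer makes it appear in front. Pillow is used here; the solid-color layers are placeholders for real frames and overlay art:

```python
# Sketch of the depth cue described above: a virtual furnishing overlay is
# composited on top of the remote participant's (already suppressed)
# stream, so the overlay appears perspectively closer to the local viewer.
# The layers must share one size for Image.alpha_composite().
from PIL import Image

size = (640, 480)
participant_layer = Image.new("RGBA", size, (0, 0, 0, 0))            # transparent backdrop
participant_layer.paste((200, 170, 150, 255), (220, 120, 420, 480))  # person region

overlay_layer = Image.new("RGBA", size, (0, 0, 0, 0))
overlay_layer.paste((40, 120, 40, 255), (40, 300, 200, 480))         # virtual plant, front layer

# Compositing order encodes apparent depth: later layers appear in front.
frame = Image.alpha_composite(participant_layer, overlay_layer)
```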
  • the sensor is an image sensor associated with the first media display, which sensor is configured to capture a first user of the at least one first user, for generating an other media stream to be communicated via the communication link to the second media display, which other media stream is associated with the first location, which first user gazes towards the first media display.
  • the method further comprises adjusting the capture location to focus on a central, or on a substantially central, position (i) between pupils of a first user of the at least one first user, (ii) between brows of the first user, and/or (iii) at the end of a nose bridge of the first user.
  • the position is vertically aligned, horizontally aligned, or both vertically and horizontally aligned. In some embodiments, adjustment of the capture location is performed manually at least in part.
  • adjustment of the capture location is performed automatically. In some embodiments, adjustment of the capture location is based at least in part on image processing, machine learning, and/or artificial intelligence. In some embodiments, adjustment of the capture location is controlled by at least one controller. In some embodiments, adjustment of the capture location is controlled by a control system configured to control at least one other device of a facility in which the first media display is disposed.
  • the method further comprises using the sensor for generating the other media stream from a capture location which corresponds to a gazing region of the first user directed towards the first media display. In some embodiments, the sensor is movable with respect to the first media display, the method further comprising adjusting the capture location to match the gazing region of the first user.
  • adjustment of the capture location is performed manually at least in part. In some embodiments, adjustment of the capture location is performed automatically according to a captured image of the first user.
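As a hedged illustration of automatic adjustment from a captured image, the sketch below re-centers the capture location midway between detected pupils; the pupil coordinates and CarriageController actuator are stand-ins, not the disclosure's API:

```python
# Illustrative sketch (not the document's stated method) of automatically
# re-centering the sensor's capture location midway between the user's
# pupils. Pupil coordinates would come from an eye-landmark detector;
# CarriageController stands in for the movable-carriage actuator.

class CarriageController:
    def move_to(self, x: float, y: float) -> None:
        print(f"moving sensor carriage to ({x:.0f}, {y:.0f}) px")

def recenter_capture(pupils, carriage: CarriageController) -> None:
    """pupils: ((x_left, y_left), (x_right, y_right)) in image coordinates."""
    (xl, yl), (xr, yr) = pupils
    cx, cy = (xl + xr) / 2.0, (yl + yr) / 2.0  # midpoint between pupils
    carriage.move_to(cx, cy)

recenter_capture(((300, 242), (348, 240)), CarriageController())
```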
  • the first user is disposed on a first side of the media display, and wherein the capture location of the sensor is disposed on a second side of the first media display that is at least partially transparent to visible light, such that the media stream depicts the first user using images passing through the transparent display of the first media display, which first side is on an opposite side of the first media display relative to the second side.
  • the first media display that is at least partially transparent to visible light is configured to allow at least a portion of the visible light to pass therethrough.
  • the first media display is configured to allow visible light to pass therethrough when the first media display is non-operational and/or when the first media display is operational.
  • the sensor is mounted on a movable carriage driven by the at least one controller.
  • the first media display is coupled to a tintable window.
  • the tintable window is an integrated glass unit, and wherein the movable carriage is (i) configured for planar motion, and (ii) disposed in an interior of the integrated glass unit. In some embodiments, the first media display includes a transparent substrate integrating a plurality of light emitting pixels, and wherein the sensor comprises a plurality of sensels disposed on the transparent substrate.
  • (i) the region of the first media excludes depictions of the at least one second user and/or (ii) the region of the second media excludes depictions of the at least one second user.
  • the shared auxiliary content is updatable by the at least one first user, by the at least one second user, or by both the at least one first user and the at least one second user.
  • the region displaying the shared auxiliary content is configured to facilitate touchscreen capability for modifying the shared auxiliary content.
  • the shared auxiliary content is digitally stored in storage which is responsive to the at least one first user and/or to the at least one second user via an auxiliary communication link.
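One plausible sketch of shared auxiliary content updatable by either user: each local edit mutates shared state and is broadcast over the auxiliary communication link. The wire format and FakeLink object are assumptions:

```python
# Minimal sketch of shared auxiliary content (a virtual whiteboard) that
# either user may update; each local edit is applied to shared state and
# broadcast over the auxiliary communication link. Link and wire format
# are assumptions for illustration.
import json
from dataclasses import dataclass, field

class FakeLink:                       # stands in for the auxiliary link
    def send(self, message: str) -> None:
        print("sent:", message)

@dataclass
class SharedWhiteboard:
    strokes: list = field(default_factory=list)

    def apply_local_stroke(self, stroke: dict, link) -> None:
        self.strokes.append(stroke)                              # local update
        link.send(json.dumps({"op": "stroke", "data": stroke}))  # sync peer

    def apply_remote_message(self, message: str) -> None:
        payload = json.loads(message)
        if payload["op"] == "stroke":
            self.strokes.append(payload["data"])                 # replay peer edit

board = SharedWhiteboard()
board.apply_local_stroke({"points": [[0, 0], [10, 12]], "color": "black"}, FakeLink())
```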
  • at least one of the first media display and the second media display is disposed in an individual portal laid out within an enclosure.
  • at least one of the first media display and the second media display is disposed in a small group pod laid out within an enclosure.
  • at least one of the first media display and the second media display is disposed in a large group zone laid out within an enclosure.
  • at least one of the first media display and the second media display is disposed on a freestanding panel laid out within an enclosure.
  • the method further comprises displaying, with the second media display at the second location, an other media stream of the at least one first user sent to the second media display from the first media display via the communication link, wherein a first portion of the other media stream is suppressed from being displayed on the second media display that is at least partially transparent to visible light, to facilitate viewing of at least a portion of the second location through a portion of the second media display corresponding to the other media stream that is suppressed.
  • the other media stream of the at least one second user includes a video stream captured by an other sensor associated with the second media display, and wherein the other sensor captures the video stream from a second capture location which corresponds to a gazing region of a second user of the at least one second user, on the second media display.
  • an apparatus for digital collaboration comprises at least one controller configured to perform, or direct performance of, any of the methods disclosed above.
  • an apparatus for digital collaboration comprises at least one controller configured to: (A) operatively couple to a first processor that is operatively coupled to a first media display disposed at a first location occupied by at least one first user, which operatively coupling of the first processor is via a communication link to a second processor that is operatively coupled to a second media display disposed at a second location occupied by at least one second user; and (B) direct the first media display to display a media stream of the at least one second user sent to the first processor from the second processor via the communication link, wherein a first portion of the media stream is suppressed from being displayed on the first media display that is at least partially transparent to visible light, which suppression enables viewing of at least a portion of the first location through a portion of the first media display corresponding to the media stream that is suppressed.
  • the at least one controller comprises circuitry.
  • the first processor is included in a control system which comprises, or is operatively coupled to, a building management system. In some embodiments, the first processor is included in a control system which comprises a distributed network of controllers.
  • the first processor is included in a control system which comprises a hierarchical control system in which a master controller is configured to control one or more local controllers. In some embodiments, the first processor is included in a device ensemble, wherein the device ensemble is disposed in an enclosure.
  • the device ensemble comprises (i) sensors or (ii) a sensor and an emitter.
  • the device ensemble is disposed in a fixture (e.g., framing portion, ceiling, or wall). In some embodiments, the device ensemble is disposed in a non-fixture (e.g., furniture, a billboard, or another tangible and movable asset).
  • the device ensemble comprises (i) a plurality of processors or (ii) a plurality of circuit boards. In some embodiments, the apparatus further comprises a tintable window which alters visibility, color, transmission, and/or reflectance of visible light, wherein the first processor is configured for adjusting a tint of the tintable window. In some embodiments, the tintable window of the apparatus comprises an electrochromic device. In some embodiments, the electrochromic device is included in an insulated glass unit configured for installation in an enclosure.
  • non-transitory computer readable product instructions for digital collaboration, when read by one or more processors, cause the one or more processors to execute, or direct execution of, any of the methods disclosed above.
  • non-transitory computer readable product instructions for digital collaboration comprise: directing a first media display disposed at a first location, to display a media stream of the at least one second user disposed at a second location, which media stream is sent to a first processor operatively coupled to the first media display, from a second processor operatively coupled to the second media display, which media stream is sent via a communication link, wherein a first portion of the media stream is suppressed from the displaying on the first media display that is at least partially transparent to visible light, which suppression enables viewing of at least a portion of the first location through a portion of the first media display corresponding to the media stream that is suppressed, which one or more processors are operatively coupled to the first processor that is operatively coupled to the first media display disposed at the first location occupied by at least one first user, which
  • the product instructions are embedded in one or more non-transitory computer readable media. In some embodiments, the product instructions are included in a program product.
  • a system for digital collaboration comprises a network configured to facilitate one or more operations of any of the methods disclosed above.
  • facilitating one or more operations comprises operatively coupling to one or more devices, operatively coupling to one or more apparatuses, operatively coupling to one or more systems, facilitating communication, and/or facilitating power transmission.
  • a system for digital collaboration comprises: a network configured to: (a) operatively couple to (i) a first media display disposed at a first location occupied by at least one first user, which first media display is operatively coupled to a first processor, and (ii) a second media display disposed at a second location occupied by at least one second user, which second media display is operatively coupled to a second processor; and (b) facilitate a communication link between the first processor and the second processor, which communication link is configured to transmit the media stream transmitted to the first media display, wherein a first portion of the media stream is suppressed from being displayed on the first media display that is at least partially transparent to visible light, which suppression enables viewing of at least a portion of the first location through a portion of the first media display corresponding to the media stream that is suppressed.
  • the network is configured for transmission of the media stream at least in part by being configured to enable transmission of a protocol of the media stream.
  • the network is operatively coupled to a hierarchical control system at least partially disposed in an enclosure which includes the first location.
  • the network is at least partly disposed in a facility and is capable of transmitting power and communication signals. In some embodiments, the network is configured to connect to a plurality of devices in the facility. In some embodiments, (i) at least two of the plurality of devices are of different type and/or (ii) at least two of the plurality of devices are of the same type.
  • the plurality of devices includes processors, controllers, sensors, emitters, receivers, transmitters, and/or device ensembles. In some embodiments, the plurality of devices includes a controller operatively coupled to a tintable window for operatively controlling the tintable window. In some embodiments, the plurality of devices includes a controller operatively coupled to control a lighting device, a tintable window, a sensor, an emitter, a media display, a dispenser, a processor, a power source, a security system, a fire alarm system, a sound media, an antenna, a radar, a controller, a heater, a cooler, a vent, or a heating ventilation and air conditioning (HVAC) system.
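A small sketch of how a facility control system might track such heterogeneous devices; the device identifiers, types, and controller names are illustrative only:

```python
# Sketch of a registry for the heterogeneous facility devices enumerated
# above. Device names, types, and controller assignments are invented for
# illustration.
devices = {
    "tint-07":   {"type": "tintable_window", "controller": "local-3"},
    "light-12":  {"type": "lighting",        "controller": "local-3"},
    "hvac-01":   {"type": "hvac",            "controller": "local-1"},
    "display-2": {"type": "media_display",   "controller": "local-3"},
}

def devices_of_type(kind: str) -> list:
    """Return all device ids of a given type."""
    return [d for d, info in devices.items() if info["type"] == kind]

print(devices_of_type("tintable_window"))  # -> ['tint-07']
```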
  • the communication signals include cellular communication signals
  • the network is configured to transmit at least fourth (4G) or at least fifth (5G) generation cellular communication
  • the network is configured for transmission of power and communication signals using coaxial cables, optical wires, and/or twisted wires.
  • the network is configured to transmit power and communication signals on a single cable.
  • the network is the first network installed in a facility.
  • the network is disposed at least in an envelope of a facility.
  • the network is configured to transmit two or more communication types on a single wire.
  • the communication types comprise cellular communication, video communication, control communication, or other data stream.
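A conceptual sketch of carrying several communication types on a single wire by tagging each frame with its type; the tag values and framing are invented for illustration and do not reflect the disclosure's actual protocol:

```python
# Conceptual sketch of multiplexing several communication types over one
# link by tagging each frame with its type. Tags and framing are
# assumptions, not the document's protocol.
import struct

TYPE_TAGS = {"cellular": 1, "video": 2, "control": 3, "data": 4}

def encode_frame(kind: str, payload: bytes) -> bytes:
    # 1-byte type tag + 4-byte big-endian length + payload
    return struct.pack(">BI", TYPE_TAGS[kind], len(payload)) + payload

def decode_frame(frame: bytes):
    tag, length = struct.unpack(">BI", frame[:5])
    kind = {v: k for k, v in TYPE_TAGS.items()}[tag]
    return kind, frame[5:5 + length]

kind, body = decode_frame(encode_frame("control", b"tint=2"))
print(kind, body)  # control b'tint=2'
```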
  • a method for digital collaboration comprising using a sensor to capture a media stream of at least one first user disposed in a first location, which sensor is associated with a first media display disposed in the first location, and is configured to obtain the media stream of the at least one first user through the first media display that is at least partially transparent to visible light.
  • the method further comprises establishing a communication link between (i) a first processor operatively coupled to the first media display and (ii) a second processor operatively coupled to a second media display disposed at a second location occupied by at least one second user. In some embodiments, the communication link comprises a machine to machine communication. In some embodiments, the communication link is configured to facilitate transmission of the media stream. In some embodiments, the method further comprises transmitting the media stream for display on the second media display. In some embodiments, the method further comprises using the sensor for generating the media stream from a capture location which corresponds to a gazing region of the first user directed towards the first media display.
  • the method further comprises adjusting the capture location to focus on a central, or on a substantially central, position (i) between pupils of a first user of the at least one first user, (ii) between brows of the first user, and/or (iii) at the end of a nose bridge of the first user.
  • the position is vertically aligned, horizontally aligned, or both vertically and horizontally aligned.
  • adjustment of the capture location is performed manually at least in part.
  • adjustment of the capture location is performed automatically.
  • adjustment of the capture location is based at least in part on image processing, machine learning, and/or artificial intelligence. In some embodiments, adjustment of the capture location is controlled by at least one controller.
  • adjustment of the capture location is controlled by a control system configured to control at least one other device of a facility in which the first media display is disposed.
  • the sensor is movable with respect to the first media display, the method further comprising adjusting the capture location to match the gazing region of the first user.
  • the adjusting of the capture location is performed manually at least in part.
  • adjustment of the capture location is performed automatically according to a captured image of the first user.
  • the first user is disposed on a first side of the media display, and wherein the capture location of the sensor is disposed on a second side of the first media display that is at least partially transparent to visible light, such that the first media stream depicts the first user using images passing through the transparent display of the first media display, which first side is on an opposite side of the first media display relative to the second side. In some embodiments, the sensor is mounted on a movable carriage driven by at least one controller. In some embodiments, the first media display is coupled to a tintable window.
  • the tintable window is an integrated glass unit, and wherein the movable carriage is (i) configured for planar motion, and (ii) disposed in an interior of the integrated glass unit.
  • the first media display includes a transparent substrate integrating a plurality of light emitting pixels, and wherein the sensor comprises a plurality of sensels disposed on the transparent substrate.
  • the first media display is coupled to a tintable window.
  • the tintable window alters visibility, color, hue, transmission, and/or reflectance of visible light.
  • the tintable window comprises an electrochromic device. In some embodiments, the electrochromic device is included in an insulated glass unit configured for installation in an enclosure.
  • the transparent display spans at least about 30% of an area of the tintable window. In some embodiments, the transparent display spans from about 10% to about 100% of an area of the tintable window.
  • the tintable window is coupled to a control system configured for adjusting a tint of the tintable window.
  • the control system comprises, or is operatively coupled to, a building management system. In some embodiments, the control system comprises a distributed network of controllers.
  • the control system comprises a hierarchical control system in which a master controller is configured to control one or more local controllers.
  • the control system comprises a controller that is included in a device ensemble, wherein the device ensemble is disposed in the enclosure.
  • the device ensemble comprises (i) sensors or (ii) a sensor and an emitter.
  • the device ensemble is disposed in a fixture (e.g., framing portion, ceiling, or wall). In some embodiments, the device ensemble is disposed in a non-fixture (e.g., furniture, a billboard, or another tangible and movable asset). In some embodiments, the device ensemble comprises (i) a plurality of processors or (ii) a plurality of circuit boards. In some embodiments, at least one of the first media display and the second media display is disposed in an individual portal laid out within an enclosure.
  • at least one of the first media display and the second media display is disposed in a small group pod laid out within an enclosure. In some embodiments, at least one of the first media display and the second media display is disposed in a large group zone laid out within an enclosure. In some embodiments, at least one of the first media display and the second media display is disposed on a freestanding panel laid out within an enclosure. In some embodiments, at least one of the first media display and the second media display is disposed in an activity hub laid out within an enclosure.
  • an apparatus for digital collaboration comprises at least one controller configured to perform, or direct performance of, any of the methods disclosed above.
  • an apparatus for digital collaboration comprises at least one controller configured to: (A) operatively couple to a sensor that is (i) configured for capturing a media stream, (ii) associated with a first image display, (iii) disposed in a first location in which the first image display is disposed, and (iv) configured to obtain the media stream through the first media display that is at least partially transparent to visible light; and (B) direct the sensor to capture the media stream in the first location.
  • the first media display is operatively coupled to a first processor, which first location is occupied by at least one first user, which first processor is operatively coupled via a communication link to a second processor operatively coupled to a second media display disposed at a second location occupied by at least one second user. In some embodiments, the at least one controller is configured to direct transmission of the media stream for display by the second media display.
  • non-transitory computer readable product instructions for digital collaboration, when read by one or more processors, cause the one or more processors to execute, or direct execution of, any of the methods disclosed above.
  • non-transitory computer readable product instructions for digital collaboration comprise: directing a sensor to capture a media stream in a first location, which one or more processors are operatively coupled to the sensor that is (i) configured for capturing a media stream, (ii) associated with a first image display, (iii) disposed in a first location in which the first image display is disposed, and (iv) configured to obtain the media stream through the first media display that is at least partially transparent to visible light.
  • the product instructions are embedded in one or more non-transitory computer readable media. In some embodiments, the product instructions are included in a program product.
  • a system for digital collaboration comprises a network configured to facilitate one or more operations of any of the methods disclosed above.
  • facilitating one or more operations comprises operatively coupling to one or more devices, operatively coupling to one or more apparatuses, operatively coupling to one or more systems, facilitating communication, and/or facilitating power transmission.
  • a system for digital collaboration comprises: a network configured to: (a) operatively couple to a sensor that is (i) configured for capturing a media stream, (ii) associated with a first image display, (iii) disposed in a first location in which the first image display is disposed, and (iv) configured to obtain the media stream through the first media display that is at least partially transparent to visible light; and (b) facilitate communication of the media stream.
  • the network is configured for transmitting the media stream at least in part by being configured to enable transmission of a protocol of the media stream. In some embodiments, the network is configured to operatively couple to a hierarchical control system at least partially disposed in an enclosure which includes the first location. In some embodiments, the network is at least partly disposed in a facility and is capable of transmitting power and communication signals. In some embodiments, the network interconnects a plurality of devices in the facility.
  • the plurality of devices includes processors, controllers, sensors, emitters, receivers, transmitters, and/or device ensembles. In some embodiments, the plurality of devices includes a controller operatively coupled to a tintable window for operatively controlling the tintable window. In some embodiments, the plurality of devices includes a controller operatively coupled to control a lighting device, a tintable window, a sensor, an emitter, a media display, a dispenser, a processor, a power source, a security system, a fire alarm system, a sound media, an antenna, a radar, a controller, a heater, a cooler, a vent, or a heating ventilation and air conditioning (HVAC) system.
  • the communication signals include cellular communication signals.
  • the network is configured to transmit at least fourth (4G) or at least fifth (5G) generation cellular communication.
  • the network is configured for transmission of power and communication signals using coaxial cables, optical wires, and/or twisted wires. In some embodiments, the network is capable of transmitting both power and communication signals in a single cable.
  • the network is the first network installed in a facility.
  • the network is disposed at least in an envelope of a facility. In some embodiments, the network is configured to transmit two or more communication types on a single wire.
  • the communication types comprise cellular communication, video communication, control communication, or other data stream.
  • a device for interactive digital communication comprising: a frame configured to frame a supportive structure, a media display, and one or more sensors configured for image capturing.
  • the frame includes curved and/or straight portions. In some embodiments, at least one corner (e.g., all four corners) of the frame is curved. In some embodiments, the supportive structure comprises an opaque or a transparent portion. In some embodiments, the supportive structure is a window, such as a tintable window. In some embodiments, the display is a transparent display. In some embodiments, the transparent display is configured to project a redacted image.
  • the display is configured to project a higher intensity image by a portion of the projecting entities (e.g., pixels) of the media display, and project a relatively reduced intensity image on an other portion of the projecting entities. In some embodiments, the reduced intensity comprises no projection (e.g., zero intensity). In some embodiments, the reduced-intensity projection facilitates viewing through the media display. In some embodiments, the device further comprises lighting (e.g., fluorescent, incandescent, and/or LED). The lighting may be a strip disposed above the display (immediately above may be in a direction against the gravitational center; immediately above may be contacting the display).
  • the lighting comprises a lighting strip
  • the device comprises a ledge.
  • the ledge is configured to act as a table. In some embodiments, the ledge is disposed immediately below the display (immediately below may be in a direction towards the gravitational center; immediately below may be contacting the display). In some embodiments, the device is configured to operatively couple (e.g., connect) to a communication and/or power network (e.g., comprising wired and/or wireless coupling).
  • the display may be configured to project images (e.g., stream video images), e.g., of participants and/or any auxiliary content.
  • the display is configured to project overlays (e.g., virtual objects).
  • the device and/or display is operatively coupled to an app that enables a user to configure the display and/or its projection, for example to choose overlays and/or adjust the one or more sensors.
  • the one or more sensors are operatively coupled to an actuator.
  • the one or more sensors can be stationary or mobile.
  • a user may adjust a position of the one or more sensors (e.g., camera) to align with a facial feature of the user, e.g., such that an image taken by the one or more sensors will coincide with the user’s face (e.g., pupils).
  • a method for digital collaboration comprises: moving a media display with respect to a wall portion of a digital collaboration unit, which media display is at least partially transparent to visible light, which digital collaboration unit comprises (i) a physical work surface disposed adjacent to the media display and configured to be disposed between the media display and a user of the media display, and/or (ii) lighting disposed adjacent to the media display and configured to project light onto the user and/or across the media display and towards a gravitational center.
  • moving of the media display is based at least in part on a position of at least one bodily feature and/or body portion of the user.
  • moving of the media display comprises a movement with respect to the gravitational center.
  • the bodily portion comprises a nose, eyebrows, eyes, pupils, a head, a chin, lips, a nose bridge, or ears.
  • the method further comprises using a sensor to capture a media stream of the user disposed in a location adjacent to the media display, which sensor is associated with the digital collaboration unit.
  • the sensor is configured to be located on an opposite side of the media display from the user of the media display.
  • moving the media display includes the media display being movable to a height above a floor of at least an average person in a sitting and/or standing position. In some embodiments, the method includes moving the physical work surface and/or the lighting in coordination with movement of the media display, or vice versa. In some embodiments, the coordination of the movement of the physical work surface and/or the lighting with movement of the media display comprises vertical movement relative to the gravitational center. In some embodiments, the method includes moving the physical work surface and/or the lighting without coordination with movement of the media display, or vice versa. In some embodiments, the movement of the physical work surface and/or the lighting comprises a vertical movement relative to the gravitational center.
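A minimal sketch of coordinated vertical movement keyed to a user's eye height, per the coordinated-movement embodiments above; the offsets and actuator interface are assumptions:

```python
# Sketch of coordinating display, work-surface, and lighting heights with
# a detected eye height. Offsets and the move_axis() interface are
# invented for illustration.

DISPLAY_EYE_OFFSET_M = -0.10   # display center slightly below eye level
SURFACE_OFFSET_M     = -0.45   # work surface below the display
LIGHT_OFFSET_M       = +0.30   # lighting strip above the display

def move_axis(name: str, height_m: float) -> None:
    print(f"{name}: moving to {height_m:.2f} m above floor")

def coordinate_heights(eye_height_m: float) -> None:
    display_h = eye_height_m + DISPLAY_EYE_OFFSET_M
    move_axis("display", display_h)
    move_axis("work_surface", display_h + SURFACE_OFFSET_M)
    move_axis("lighting", display_h + LIGHT_OFFSET_M)

coordinate_heights(1.20)  # e.g., seated user with eyes ~1.2 m above floor
```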
  • an apparatus for digital collaboration comprises at least one controller configured to: (a) operatively couple to the media display, and (b) perform, or direct performance of, any of the methods disclosed above.
  • non-transitory computer readable program instructions for digital collaboration, when read by one or more processors operatively coupled to the media display, cause the one or more processors to execute, or direct execution of, any of the methods disclosed above.
  • a system for digital collaboration comprises: a network configured to operatively couple to the media display, and transmit one or more signals facilitating any of the methods disclosed above.
  • an apparatus for digital collaboration comprises one or more controllers comprising circuitry, which at least one controller is configured to:
  • (a) operatively couple to a media display; and (b) move, or direct movement of, the media display with respect to a wall portion of a digital collaboration unit, which media display is at least partially transparent to visible light, which digital collaboration unit comprises (i) a physical work surface disposed adjacent to the media display and configured to be disposed between the media display and a user of the media display, and/or (ii) lighting disposed adjacent to the media display and configured to project light onto the user and/or across the media display and towards a gravitational center.
  • the at least one controller comprises a hierarchical control system having at least three levels of hierarchy.
  • the at least one controller comprises a controller disposed in a device ensemble having a housing enclosing at least one sensor. In some embodiments, the device ensemble comprises another sensor, an emitter, or a transceiver.
  • the at least one controller comprises a microcontroller. In some embodiments, the at least one controller is configured to utilize, or direct utilization of, artificial intelligence for predictive control.
  • the at least one controller comprises a controller disposed in, or attached to, a fixture of a facility.
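A sketch of a three-level control hierarchy (master controlling floor controllers controlling local controllers), matching the "at least three levels of hierarchy" language above; class and method names are illustrative:

```python
# Sketch of a three-level hierarchical control system (master -> floor ->
# local). Names and commands are invented for illustration.

class LocalController:
    def __init__(self, device_id: str):
        self.device_id = device_id
    def execute(self, command: str) -> None:
        print(f"{self.device_id}: {command}")

class FloorController:
    def __init__(self, locals_: list):
        self.locals = locals_
    def dispatch(self, command: str) -> None:
        for lc in self.locals:
            lc.execute(command)

class MasterController:
    def __init__(self, floors: list):
        self.floors = floors
    def broadcast(self, command: str) -> None:
        for fc in self.floors:
            fc.dispatch(command)

master = MasterController([FloorController([LocalController("display-2")])])
master.broadcast("raise display to standing height")
```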
  • non-transitory computer readable program instructions for digital collaboration, when read by one or more processors operatively coupled to a media display, cause the one or more processors to execute operations comprising: moving, or directing movement of, the media display with respect to a wall portion of a digital collaboration unit, which media display is at least partially transparent to visible light, which digital collaboration unit comprises (i) a physical work surface disposed adjacent to the media display and configured to be disposed between the media display and a user of the media display, and/or (ii) lighting disposed adjacent to the media display and configured to project light onto the user and/or across the media display and towards a gravitational center.
  • the one or more processors comprises a hierarchical system of processors having at least three levels of hierarchy.
  • the one or more processors comprises a processor disposed in a device ensemble having a housing enclosing at least one sensor.
  • the device ensemble comprises another sensor, an emitter, or a transceiver. In some embodiments, the processor comprises a graphic processing unit.
  • the operations comprise utilizing, or directing utilization of, an artificial intelligence computational scheme for prediction of a second attribute
  • the one or more processors comprises a processor disposed in, or attached to, a fixture of a facility.
  • the one or more processors comprises a processor disposed externally to a facility.
  • externally to the facility comprises a cloud server. In some embodiments, the operations comprise remotely updating, or directing remote update, from a source external to a facility.
  • a system for digital collaboration comprises: a network configured to: (a) operatively couple to a media display; and (b) transmit one or more signals that facilitate moving the media display with respect to a wall portion of a digital collaboration unit, which media display is at least partially transparent to visible light, which digital collaboration unit comprises (i) a physical work surface disposed adjacent to the media display and configured to be disposed between the media display and a user of the media display, and/or (ii) lighting disposed adjacent to the media display and configured to project light onto the user and/or across the media display and towards a gravitational center.
  • the network is configured to transmit communication and power on a single cable. In some embodiments, the network is configured to transmit communication protocols, wherein at least two of the communication protocols are different. In some embodiments, the communication protocols comprise at least a fourth generation, or a fifth generation, cellular communication protocol. In some embodiments, the communication protocols facilitate cellular, media, control, security, and/or other data communication. In some embodiments, the communication protocols comprise a control protocol that comprises a building automation control protocol. In some embodiments, the network is configured to operatively couple to one or more antennas, and optionally wherein the one or more antennas comprise a distributed antenna system. In some embodiments, the network is configured to facilitate remote software updates from a source external to the facility.
  • an apparatus for digital collaboration comprises: a digital collaboration unit having a wall portion and a media display that is configured to be moveable with respect to the wall portion, which media display is at least partially transparent to visible light, which digital collaboration unit comprises (i) a physical work surface disposed adjacent to the media display and configured to be disposed between the media display and a user of the media display, and/or (ii) lighting disposed adjacent to the media display and configured to project light onto the user and/or across the media display and towards a gravitational center.
  • the physical work surface and/or the lighting is configured to be moveable in coordination with movement of the media display, or vice versa. In some embodiments, the movement is in at least a vertical direction relative to the gravitational center. In some embodiments, the physical work surface and/or the lighting is configured to be moveable without coordination with movement of the media display, or vice versa. In some embodiments, the movement is in at least a vertical direction relative to the gravitational center. In some embodiments, movement of the display relative to the wall portion is based at least in part on a position of at least one bodily feature and/or body portion of the user. In some embodiments, movement of the media display comprises a movement with respect to the gravitational center.
  • the bodily portion comprises a nose, eyebrows, eyes, pupils, a head, a chin, lips, a nose bridge, or ears.
  • the apparatus further comprises a sensor configured to capture a media stream of the user disposed in a location adjacent to the media display, which sensor is associated with the digital collaboration unit.
  • the sensor is located on an opposite side of the media display from the user of the media display.
  • movement of the media display includes the media display being configured to be moveable to a height above a floor of at least an average person in a sitting and/or standing position.
  • an apparatus for digital collaboration comprises: a first digital collaboration unit comprising a first wall portion including a media display that is at least partially transparent, which media display is configured to be moveable with respect to the first wall portion, which first digital collaboration unit further comprises (i) a second wall portion configured to be moveable to selectively hinder viewing onto the media display and/or onto a first user disposed in the first digital collaboration unit, and/or (ii) at least two sensors mounted spaced apart from each other as part of the first wall portion of the first digital collaboration unit, which at least two sensors are configured to capture an image of the first user located in the first digital collaboration unit.
  • the first digital collaboration unit is located in a facility having walls defining a room, and the first wall portion is disposed within the room and is configured to be a partition. In some embodiments, the first wall portion is shorter than the walls of the room.
  • the first wall portion is of at least a height of an average person. In some embodiments, movement of the second wall portion comprises a swiveling motion or a sliding motion.
  • movement of the media display is based at least in part on a position of at least one bodily feature and/or body portion of the first user. In some embodiments, movement of the media display comprises a movement with respect to a gravitational center. In some embodiments, the bodily portion comprises a nose, eyebrows, eyes, pupils, a head, a chin, lips, a nose bridge, or ears. In some embodiments, at least one of the at least two sensors is located on an opposite side of the media display from the first user of the media display. In some embodiments, movement of the media display includes the media display being moveable to a height above a floor of at least an average person in a sitting and/or standing position.
  • the apparatus further including (i) a physical work surface disposed adjacent to the media display and configured to be disposed between the media display and the first user of the media display, and/or (ii) lighting disposed adjacent to the media display and configured to project light onto the first user and/or across the media display and towards a gravitational center; and wherein the physical work surface and/or the lighting is configured to be moveable in coordination with movement of the media display, or vice versa.
  • the movement of the physical work surface and/or the lighting is in at least a vertical direction relative to the gravitational center
  • the apparatus further including (i) a physical work surface disposed adjacent to the media display and configured to be disposed between the media display and the first user of the media display, and/or (ii) lighting disposed adjacent to the media display and configured to project light onto the first user and/or across the media display and towards a gravitational center; and wherein the physical work surface and/or the lighting is configured to be moveable without coordination with movement of the media display, or vice versa. In some embodiments, the movement of the physical work surface and/or the lighting is in at least a vertical direction relative to the gravitational center.
  • a method for digital collaboration comprises: moving a media display with respect to a first wall portion of a first digital collaboration unit, which media display is at least partially transparent, which first digital collaboration unit comprises (i) a second wail portion and/or (ii) at least two sensors mounted spaced apart from each other as part of the first wail portion of the first digital collaboration unit, which at least two sensors are configured to capture an image of a first user located in the first digital collaboration unit; and moving the second wall portion to selectively hinder viewing onto the media display and/or onto the first user disposed in the first digital collaboration unit.
  • the first digital collaboration unit is located in a facility having walls defining a room and the first wail portion is disposed within the room and is configured to be a partition.
  • first wall portion is shorter than the wails of the room.
  • the first wall portion Is of at least a height of an average person.
• moving the second wall portion comprises a swiveling motion, or a sliding motion.
• moving the media display is based at least in part on a position of at least one bodily feature and/or body portion of the first user. In some embodiments, moving the media display comprises a movement with respect to a gravitational center.
  • the bodily portion comprises a nose, eyebrows, eyes, pupils, a head, a chin, lips, a nose bridge, or ears.
  • at least one of the at least two sensors is configured to be located on an opposite side of the media display from the first user of the media display.
• moving the media display includes the media display being moveable to a height above a floor of at least an average person in a sitting and/or standing position.
• the first digital collaboration unit comprises (i) a physical work surface disposed adjacent to the media display and configured to be disposed between the media display and the first user of the media display, and/or (ii) lighting disposed adjacent to the media display and configured to project light onto the first user and/or across the media display and towards a gravitational center; and the method further comprises moving the physical work surface and/or the lighting in coordination with movement of the media display, or vice versa. In some embodiments, moving the physical work surface and/or the lighting is in at least a vertical direction relative to the gravitational center.
• the first digital collaboration unit comprises (i) a physical work surface disposed adjacent to the media display and configured to be disposed between the media display and the first user of the media display, and/or (ii) lighting disposed adjacent to the media display and configured to project light onto the first user and/or across the media display and towards a gravitational center; and the method further comprises moving the physical work surface and/or the lighting without coordination with movement of the media display, or vice versa.
  • the movement of the physical work surface and/or the lighting is in at least a vertical direction relative to the gravitational center.
  • an apparatus for digital collaboration comprises at least one controller configured to perform, or direct performance of, any of the methods disclosed above.
• a system for digital collaboration comprises: a network configured to operatively couple to the first digital collaboration unit, and transmit one or more signals facilitating any of the methods disclosed above.
• non-transitory computer readable program instructions for digital collaboration, when read by one or more processors operatively coupled to the first digital collaboration unit, cause the one or more processors to execute, or direct execution of, any of the methods disclosed above.
• an apparatus for digital collaboration comprises one or more controllers comprising circuitry, which one or more controllers are configured to:
• (a) operatively couple to a first digital collaboration unit; and (b) move, or direct movement of, a media display with respect to a first wall portion of the first digital collaboration unit, which media display is at least partially transparent, which first digital collaboration unit comprises (i) a second wall portion configured to be moveable to selectively hinder viewing onto the media display and/or onto a first user disposed in the first digital collaboration unit, and/or (ii) at least two sensors mounted spaced apart from each other as part of the first wall portion of the first digital collaboration unit, which at least two sensors are configured to capture an image of the first user located in the first digital collaboration unit.
  • the at least one controller comprises a hierarchical control system having at least three levels of hierarchy.
  • the at least one controller comprises a controller disposed in a device ensemble having a housing enclosing at least one sensor.
  • the device ensemble comprises another sensor, an emitter, or a transceiver.
  • the at least one controller comprises a microcontroller.
  • the at least one controller is configured to utilize, or direct utilization of, artificial intelligence for predictive control.
  • the at least one controller comprises a controller disposed in, or attached to, a fixture of a facility.
• non-transitory computer readable program instructions for digital collaboration, when read by one or more processors operatively coupled to a first digital collaboration unit, cause the one or more processors to execute operations comprising: moving, or directing movement of, a media display with respect to a first wall portion of the first digital collaboration unit, which media display is at least partially transparent, which first digital collaboration unit comprises (i) a second wall portion configured to be moveable to selectively hinder viewing onto the media display and/or onto a first user disposed in the first digital collaboration unit, and/or (ii) at least two sensors mounted spaced apart from each other as part of the first wall portion of the first digital collaboration unit, which at least two sensors are configured to capture an image of the first user located in the first digital collaboration unit.
  • the one or more processors comprises a hierarchical system of processors having at least three levels of hierarchy.
  • the one or more processors comprises a processor disposed in a device ensemble having a housing enclosing at least one sensor.
• the device ensemble comprises another sensor, an emitter, or a transceiver. In some embodiments, the processor comprises a graphics processing unit.
  • the operations comprise utilizing, or directing utilization of, an artificial intelligence computational scheme for prediction of a second attribute.
• the one or more processors comprises a processor disposed in, or attached to, a fixture of a facility. In some embodiments, the one or more processors comprises a processor disposed externally to a facility.
• externally to the facility comprises a cloud server. In some embodiments, the operations comprise remotely updating, or directing remote update, from a source external to a facility.
• a system for digital collaboration comprises: a network configured to: (a) operatively couple to a media display; and (b) transmit one or more signals that facilitate movement of a media display with respect to a first wall portion of a first digital collaboration unit, which media display is at least partially transparent, which first digital collaboration unit comprises (i) a second wall portion configured to be moveable to selectively hinder viewing onto the media display and/or onto a first user disposed in the first digital collaboration unit, and/or (ii) at least two sensors mounted spaced apart from each other as part of the first wall portion of the first digital collaboration unit, which at least two sensors are configured to capture an image of the first user located in the first digital collaboration unit.
• the network is configured to transmit communication and power on a single cable. In some embodiments, the network is configured to transmit communication protocols, wherein at least two of the communication protocols are different. In some embodiments, the communication protocols comprise at least a fourth generation, or a fifth generation cellular communication protocol. In some embodiments, the communication protocols facilitate cellular, media, control, security, and/or other data communication. In some embodiments, the communication protocols comprise a control protocol that comprises a building automation control protocol. In some embodiments, the network is configured to operatively couple to one or more antennas, and optionally wherein the one or more antennas comprise a distributed antenna system. In some embodiments, the network is configured to facilitate remote software updates from a source external to a facility.
  • operatively coupled comprises physically coupled, wirelessly coupled, communicatively coupled, or electronically coupled.
  • the present disclosure provides systems, apparatuses (e.g., controllers), and/or non-transitory computer-readable medium (e.g., software) that implement any of the methods disclosed herein.
  • the present disclosure provides methods that use any of the systems and/or apparatuses disclosed herein, e.g., for their intended purpose.
• an apparatus comprises at least one controller that is programmed to direct a mechanism used to implement (e.g., effectuate) any of the methods disclosed herein, wherein the at least one controller is operatively coupled to the mechanism.
  • an apparatus comprises at least one controller that is configured (e.g., programmed) to implement (e.g., effectuate) the method disclosed herein.
  • the at least one controller may implement any of the methods disclosed herein.
• a system comprises at least one controller that is programmed to direct operation of at least one other apparatus (or component thereof), and the apparatus (or component thereof), wherein the at least one controller is operatively coupled to the apparatus (or to the component thereof).
  • the apparatus (or component thereof) may include any apparatus (or component thereof) disclosed herein.
  • the at least one controller may direct any apparatus (or component thereof) disclosed herein.
• a computer software product comprising a non-transitory computer-readable medium in which program instructions are stored, which instructions, when read by a computer, cause the computer to direct a mechanism disclosed herein to implement (e.g., effectuate) any of the methods disclosed herein, wherein the non-transitory computer-readable medium is operatively coupled to the mechanism.
  • the mechanism can comprise any apparatus (or any component thereof) disclosed herein.
• the present disclosure provides a non-transitory computer-readable medium comprising machine-executable code that, upon execution by one or more computer processors, implements any of the methods disclosed herein.
• the present disclosure provides a non-transitory computer-readable medium comprising machine-executable code that, upon execution by one or more computer processors, effectuates directions of the controller(s) (e.g., as disclosed herein).
  • the present disclosure provides a computer system comprising one or more computer processors and a non-transitory computer-readable medium coupled thereto.
• the non-transitory computer-readable medium comprises machine-executable code that, upon execution by the one or more computer processors, implements any of the methods disclosed herein and/or effectuates directions of the controller(s) disclosed herein.
• the content of this summary section is provided as a simplified introduction to the disclosure and is not intended to be used to limit the scope of any invention disclosed herein or the scope of the appended claims.
  • Fig. 1 depicts an immersive video interaction between collaborators via a media display
  • FIG. 2 schematically shows a side view of an arrangement of a media display and movable sensor(s) (e.g., camera);
  • Fig. 3 shows a plan view of a display and movable sensor(s) (e.g., camera);
  • Fig. 4 shows interactions of media display substrates with image sensors
  • FIG. 5 schematically illustrates a user cutout to be extracted from an incoming image to be displayed on a media display
  • Fig. 6 depicts an immersive video interaction between collaborators via a transparent media display integrated with an exterior window of a building
  • Fig. 7 depicts an immersive video interaction between collaborators via a transparent media display on a standalone panel inside a facility
  • FIG. 8 shows a flowchart of an immersive collaboration method
  • Fig. 9 depicts an enclosure communicatively coupled to its digital twin representation
• Figs. 10A and 10B show various windows and displays
  • FIG. 11 schematically shows a display (e.g., a display construct assembly);
• FIG. 12 schematically shows a user interacting with a device disposed on or attached to a wall
  • Fig. 13 schematically shows a perspective view of an office space in a building including areas for immersive video collaboration
  • Fig. 14 depicts an immersive video interaction between collaborators using an individual portal
• Fig. 15 depicts a nook or pod for immersive video interaction which is at least partially enclosed for privacy
  • Fig. 16 depicts an immersive video interaction between collaborators using multiple individual portals
  • Fig. 17 depicts an immersive video interaction between collaborators using multiple displays in a local area accommodating many local participants
• FIG. 18 schematically shows an electrochromic device
• Fig. 19 shows a cross-sectional view of an example electrochromic window in an Integrated Glass Unit (IGU);
  • FIG. 20 schematically shows an example of a control system architecture and a building
  • Fig. 21 shows a schematic example of a sensor arrangement
  • FIG. 22 schematically shows a processing system and related components
• Fig. 23 shows various windows and a display construct in a framing system
• Fig. 24 schematically shows a perspective view of digital collaboration units
  • FIG. 25 schematically shows a control system, associated network, and associated devices
  • FIG. 26 schematically shows a perspective view of digital collaboration units
  • Fig. 27 shows a flowchart of immersive digital collaboration
  • Fig. 28 shows a flowchart of immersive digital collaboration.
  • ranges are meant to be inclusive, unless otherwise specified.
  • a range between value 1 and value 2 is meant to be inclusive and include value 1 and value 2.
  • the inclusive range will span any value from about value 1 to about value 2.
  • the term “adjacent” or “adjacent to,” as used herein, includes “next to,” “adjoining,” “in contact with,” and “in proximity to.”
• the conjunction “and/or” in a phrase such as “including X, Y, and/or Z”, refers to the inclusion of any combination or plurality of X, Y, and Z.
  • such phrase is meant to include X.
• such phrase is meant to include Y.
• such phrase is meant to include Z.
  • such phrase is meant to include X and Y.
  • such phrase is meant to include X and Z.
  • such phrase is meant to include Y and Z.
  • such phrase is meant to include a plurality of Xs.
  • such phrase is meant to include a plurality of Ys.
  • such phrase is meant to include a plurality of Xs and a plurality of Ys.
  • such phrase is meant to include a plurality of Xs and a plurality of Zs.
  • such phrase is meant to include a plurality of Ys and a plurality of Zs.
• such phrase is meant to include a plurality of Xs and Y.
  • such phrase is meant to include a plurality of Ys and Z.
  • such phrase is meant to include X and a plurality of Ys.
  • such phrase is meant to include X and a plurality of Zs.
  • such phrase is meant to include Y and a plurality of Zs.
  • the conjunction “and/or” is meant to have the same effect as the phrase “X, Y, Z, or any combination or plurality thereof.”
  • the conjunction “and/or” is meant to have the same effect as the phrase “one or more X, Y, Z, or any combination thereof.”
  • operatively coupled refers to a first element (e.g., mechanism) that is coupled (e.g., connected) to a second element, to allow the intended operation of the second and/or first element.
  • the coupling may comprise physical or non-physical coupling.
  • the non-physical coupling may comprise signal-induced coupling (e.g., wireless coupling). Coupled can include physical coupling (e.g., physically connected), or non-physical coupling (e.g., via wireless communication).
• the phrases “operable to,” “adapted to,” “configured to,” “designed to,” “programmed to,” or “capable of” may be used interchangeably where appropriate.
  • An element that is “configured to” perform a function includes a structural feature that causes the element to perform this function.
  • a structural feature may include an electrical feature, such as a circuitry or a circuit element.
  • a structural feature may include an actuator.
  • a structural feature may include a circuitry (e.g., comprising electrical or optical circuitry).
  • Electrical circuitry may comprise one or more wires.
  • Optical circuitry may comprise at least one optical element (e.g., beam splitter, mirror, lens and/or optical fiber).
  • a structural feature may include a mechanical feature.
  • a mechanical feature may comprise a latch, a spring, a closure, a hinge, a chassis, a support, a fastener, or a cantilever, and so forth.
  • Performing the function may comprise utilizing a logical feature.
• a logical feature may include programming instructions. Programming instructions may be executable by at least one processor. Programming instructions may be stored or encoded on a medium accessible by one or more processors.
• while some disclosed embodiments focus on electrochromic windows (also referred to as smart windows), some of the systems, devices and methods disclosed herein can be made, applied or used without undue experimentation to incorporate, or while incorporating, other types of optically switchable devices that are actively switched/controlled, rather than passive coatings such as thermochromic coatings or photochromic coatings that tint passively in response to the sun’s rays.
  • Some other types of actively controlled optically switchable devices include liquid crystal devices, suspended particle devices, and micro-blinds, among others.
  • some or all of such other optically switchable devices can be powered, driven or otherwise controlled or integrated with one or more of the disclosed implementations of controllers described herein.
  • an enclosure comprises an area defined by at least one structure (e.g., fixture).
  • the at least one structure may comprise at least one wall.
  • An enclosure may comprise and/or enclose one or more sub-enclosure.
• the at least one wall may comprise metal (e.g., steel), clay, stone, plastic, glass, plaster (e.g., gypsum), polymer (e.g., polyurethane, styrene, or vinyl), asbestos, fiber-glass, concrete (e.g., reinforced concrete), wood, paper, or a ceramic.
• the at least one wall may comprise wire, bricks, blocks (e.g., cinder blocks), tile, drywall, or frame (e.g., steel frame and/or wooden frame).
  • the enclosure comprises one or more openings.
• the one or more openings may be reversibly closable.
  • the one or more openings may be permanently open.
• a fundamental length scale of the one or more openings may be smaller relative to the fundamental length scale of the wall(s) that define the enclosure.
  • a fundamental length scale may comprise a diameter of a bounding circle, a length, a width, or a height.
• a surface of the one or more openings may be smaller relative to the surface of the wall(s) that define the enclosure.
• the opening surface may be a percentage of the total surface of the wall(s). For example, the opening surface can measure at most about 30%, 20%, 10%, 5%, or 1% of the wall(s).
• the wall(s) may comprise a floor, a ceiling, or a side wall.
• the closable opening may be closed by at least one window or door.
  • the enclosure may be at least a portion of a facility.
  • the facility may comprise a building.
  • the enclosure may comprise at least a portion of a building.
  • the building may be a private building and/or a commercial building.
  • the building may comprise one or more floors.
• the building may include at least one of: a room, hall, foyer, attic, basement, balcony (e.g., inner or outer balcony), stairwell, corridor, elevator shaft, façade, mezzanine, penthouse, garage, porch (e.g., enclosed porch), terrace (e.g., enclosed terrace), cafeteria, and/or duct.
  • an enclosure may be stationary and/or movable (e.g., a train, an airplane, a ship, a vehicle, or a rocket).
  • the enclosure encloses an atmosphere.
  • the atmosphere may comprise one or more gases.
  • the gases may include inert gases (e.g., comprising argon or nitrogen) and/or non-inert gases (e.g., comprising oxygen or carbon dioxide).
  • the enclosure atmosphere may resemble an atmosphere external to the enclosure (e.g., ambient atmosphere) in at least one external atmosphere characteristic that includes: temperature, relative gas content, gas type (e.g., humidity, and/or oxygen level), debris (e.g., dust and/or pollen), and/or gas velocity.
• the enclosure atmosphere may be different from the atmosphere external to the enclosure in at least one external atmosphere characteristic that includes: temperature, relative gas content, gas type (e.g., humidity, and/or oxygen level), debris (e.g., dust and/or pollen), and/or gas velocity.
  • the enclosure atmosphere may be less humid (e.g., drier) than the external (e.g., ambient) atmosphere.
  • the enclosure atmosphere may contain the same (e.g., or a substantially similar) oxygen-to-nitrogen ratio as the atmosphere external to the enclosure.
  • the velocity and/or content of the gas in the enclosure may be (e.g., substantially) similar throughout the enclosure.
  • the velocity and/or content of the gas in the enclosure may be different in different portions of the enclosure (e.g., by flowing gas through to a vent that is coupled with the enclosure).
  • the gas content may comprise relative gas ratio.
  • a transparent media display is supported on a transparent panel or substrate having a planar shape.
• the transparent panel may include a glass pane, a plastic sheet, or other clear material for supporting a media display, and may be configured as a window having a transparent display area.
• the transparent panel and/or transparent media display may be configured as a thin sheet which follows a flat or curved shape and/or may include bends or other contours.
  • the media display may provide unidirectional projection of images from one side of the media display toward its opposing side to a local user. The unidirectional projection may maintain privacy of the projected media and/or reduce eye strain for the user viewing the projected media by the display.
  • the projecting media display may comprise a light emitting diode (LED) array.
• the LED array may comprise an organic material (e.g., organic light emitting diode, abbreviated herein as “OLED”).
  • the OLED may comprise a transparent organic light emitting diode display (abbreviated herein as “TOLED”), which TOLED is at least partially transparent (e.g., to visible light).
  • the display construct may comprise the media display, binding material, and/or transparent substrates (e.g., glass) to bind it together to a display construct.
  • the display construct may comprise a high resolution display.
• the display matrix has at least about 2000 pixels at its fundamental length scale, which pixels are the projecting entities of the media display. In some embodiments, the fundamental length scale (abbreviated herein as “FLS”) of the display matrix is a height or a width of the display matrix. In some embodiments, the display matrix is a high resolution or an ultra-high resolution display matrix. In some embodiments, the display construct is configured as a free-standing panel within an enclosure for generating a media display output toward a user on one side of the free-standing panel.
  • the display construct is coupled to a viewing (e.g., tintable) window such as by a fastener, wherein the window defines a portion of an exterior or interior wall.
  • the fastener comprises a hinge, a bracket, or a cover.
  • the hinge is (i) connected to the bracket that is connected to the display construct and (ii) connected to the cover that is connected to a fixture, which hinge facilitates swiveling of the display construct with respect to the fixture about a hinge joint.
  • the hinge is (i) reversibly connected to the bracket that is irreversibly connected to the display construct and (ii) reversibly connected to the cover that is reversibly connected to a fixture, which hinge facilitates swiveling of the display construct with respect to the fixture about a hinge joint.
• the cover comprises a swiveling portion that can be reversibly opened and closed. In some embodiments, circuitry and/or wiring is covered from a viewer by the cover, which circuitry and/or wiring can be exposed at least in part by opening the swiveling portion.
  • a tint level of the tintable window considers a position of a sun, weather condition, transmittance of light through the tintable windows, media projected by the display, and/or reading of one or more sensors.
  • at least one of the one or more sensors is disposed externally to the building in which the tintable window is disposed.
• the weather condition comprises any dispersive entities in the atmosphere (e.g., cloud coverage, dust, rain, hail, or snow). In some embodiments, transmittance of light through the tintable windows is with respect to external light impinging on the viewing (e.g., tintable) window. In some embodiments, the transmittance of light through the viewing (e.g., tintable) window depends on the material properties of the viewing (e.g., tintable) window.
  • the material properties may include manner of fabrication, thickness of one or more layers, conductive entity type, conductive entity concentration, and/or FLS, of the tintable window (e.g., optically switchable device included therein, such as an e!ectrochromic construct).
• a visual reproduction of a remote user on the media display may be presented in a way that results in an enhanced immersive experience.
• video reproduction of the remote user is generated on a portion of the media display (e.g., as a cutout or filled-in silhouette of the remote user) while another portion of the media display is 1) muted (e.g., remains transparent so that the local user sees through the media display to a local environment on the opposite side of the media display), and/or 2) reproduces virtual objects devised to enhance an illusion that the remote user is integrated with the local environment.
• the term “media display” or “media display construct” may include light emitting structures and light receiving structures, as well as supporting electronics such as an image processor, controller, and network interfaces capable of generating, transmitting, receiving, and/or manipulating video streams.
  • Fig. 1 shows an example of digital collaboration unit 100 that is a standalone unit.
  • a transparent panel carries a media display 120 over at least a part of the surface of the panel bordered by framing 110.
• the media display 120 may be movable relative to the framing 110 and/or transparent panel in a direction indicated, for example, in the directions of arrow 180 and/or movable relative to a gravitational center G.
• a position of the media display 120 may be automatically adjusted based on a height and/or position of a local user such as 130, which may be based on bodily features (e.g., a nose, eyebrows, eyes, pupils, a head, a chin, lips, a nose bridge, or ears) of local user 130 and/or historic preferences of local user 130.
  • a position of the media display 120 may be (e.g., digitally and/or manually) adjustable by local user 130.
• a position of the media display 120 may be automatically adjustable, e.g., with a manual override, e.g., by local user 130.
• Cabling coupled to the media display 120 may move with the media display 120 when the media display is moving. In the example shown in Fig. 1, display 120 occupies an area on the panel bordered by framing 110, with an aspect ratio corresponding to a video output generated by conventional image sensors.
  • An aspect ratio corresponding to the video output can be of about 16:9, of about 4:3, or any value between the aforementioned aspect ratios.
• the panel bordered by framing 110 and display 120 are free-standing in an enclosure such as an office space, e.g., for utilization by a local user 130 disposed on a side of media display 120 toward which video images are projected by display 120.
  • User 130 is engaged in a video conference with a remote user who is depicted by a streamed virtual image 140 on the display 120, wherein the media stream used to generate image 140 may be captured by an image sensor at the remote location of the remote user.
• the image taken by the remote image sensor may be trimmed to project an image of the remote user, e.g., without any projected content in at least a portion of the area surrounding the remote user (e.g., in the manner of a green-screen cutout) to provide an illusion that the remote user is seen as being present in the local environment.
• the example shown in Fig. 1 shows a remote user 140 that is cut from its real surrounding captured by the remote sensor, which remote surrounding appears transparent to user 130, such that user 130 can see cutout image 140 of the remote user devoid of the remote surrounding, and user 130 can see through a surrounding of cutout image 140.
• the display may generate the remote user’s image at an image scale that causes the size of the image to represent the remote user at or close to life-size, e.g., with respect to the local user 130.
• the removal of a remote surrounding (e.g., background around the remote user’s virtual image) may be performed remotely. Remotely may be in a cloud, at the remote user’s locale, or at any other place remote to user 130.
  • media display 120 has a touchscreen capability (not shown in Fig. 1).
• the locally displayed images include icons 150, e.g., that may be used for a control interface allowing user 130 to generate user commands for a control system handling the video conference.
• a digital collaboration unit may include microphone(s) and/or other sound sensor(s), loudspeaker(s), etc., e.g., to facilitate audio communication.
• Fig. 1 shows an example of a physical work surface (e.g., a real ledge) 170 on which user 130 can lean and/or place real items, and virtual work surface (e.g., a virtual ledge) 160 that remote user 140 seems to lean on.
  • the physical work surface 170 may be movable relative to the framing 110 and/or transparent panel in a direction indicated, for example, in the directions of arrow 190 and/or movable relative to a gravitational center G.
  • a position of the physical work surface 170 may be automatically adjusted based on a height and/or position of local user 130, which may be based on bodily features of local user 130 and/or historic preferences of local user 130.
  • a position of the physical work surface 170 may be manually adjustable by a local user 130.
  • a position of the physical work surface 170 may be automatically adjustable, e.g., with a manual override.
  • a media display 120 and physical work surface 170 may be secured to each other to move in unison (e.g., in a concerted and/or coupled movement).
  • a media display 120 and physical work surface 170 may each engage a mechanism that moves them in unison.
• a media display 120 and/or physical work surface 170 may be moveable without movement of the other (e.g., unconcerted movement, or uncoupled movement).
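As a non-limiting illustration of concerted versus uncoupled movement, the sketch below simulates a display actuator and a work-surface actuator. The actuator class and the fixed display-to-ledge offset are assumptions made for illustration, not elements of the disclosure.

```python
# Illustrative sketch: coordinating vertical movement of a media display and an
# adjacent physical work surface. Names and offsets are assumptions.

class LinearActuator:
    """Stand-in for a motorized vertical axis (e.g., a lead-screw drive)."""
    def __init__(self, name: str, height_m: float = 1.0):
        self.name = name
        self.height_m = height_m

    def move_to(self, target_m: float) -> None:
        self.height_m = target_m
        print(f"{self.name} -> {target_m:.2f} m above floor")

DISPLAY_TO_LEDGE_OFFSET_M = 0.45  # assumed gap between display center and ledge top

def move_coordinated(display: LinearActuator, ledge: LinearActuator,
                     display_target_m: float) -> None:
    """Move display and work surface in unison, preserving their relative offset."""
    display.move_to(display_target_m)
    ledge.move_to(display_target_m - DISPLAY_TO_LEDGE_OFFSET_M)

def move_uncoupled(actuator: LinearActuator, target_m: float) -> None:
    """Move one element without affecting the other (uncoupled movement)."""
    actuator.move_to(target_m)

if __name__ == "__main__":
    display = LinearActuator("media display")
    ledge = LinearActuator("work surface")
    move_coordinated(display, ledge, display_target_m=1.55)  # concerted movement
    move_uncoupled(ledge, 0.95)                              # independent adjustment
```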
  • the virtual ledge 160 may be a real remote ledge captured by the remote camera, or an emulated perspective ledge that is a virtual overlay. Virtual ledge 160 may appear as part of display 120 as retaining its relative position in the projected media, regardless of any movement of display 120 and/or physical work surface (e.g., physical ledge) 170.
• the panel held by framing 110 can comprise a transparent substrate (e.g., glass or plastic), which transparent substrate may comprise a tintable window. The transparent substrate may support the display construct 120.
• the display construct may be supported by framing 110 (e.g., and unsupported by the transparent substrate that is surrounded by framing 110).
• an immersive experience is enhanced by locating at least one sensor (e.g., an optical sensor such as an image sensor) behind at least a portion of the transparent media display to capture images of a user (e.g., video conference participant) from a location corresponding to the user’s gaze, e.g., while participating in the video conference.
• the sensor(s) may be positioned behind the transparent display. In some embodiments, the image sensor location is arranged to be directly behind a position on the media display from which the remote user’s image is projected to the local user.
• the local user, when looking at the image of a remote participant, becomes aligned with the sensor(s) (e.g., camera) placement, and the media stream sent to the remote participant(s) represents that focal point as being pointed directly toward the remote participant.
• the media display is transparent at least in part (e.g., passes some degree of visible light in both directions)
  • an image of the local user may be captured from an opposite side of the media display and/or support panel (e.g., within the aspect-ratio profile of the media display).
• the image sensor(s) (e.g., sensor array) is disposed at a fixed location (e.g., at a vertical and horizontal center) relative to the media display.
  • the image sensor location is adjustable (manually and/or automatically) in a vertical direction and/or a horizontal direction, e.g., to correspond with an actual gazing direction of the user whose images are being captured (e.g., the local user).
  • At least one sensor is disposed behind the display, e.g., to capture an image of a local user (e.g., to be streamed to remote user(s)).
• a sensor(s) (e.g., video camera or other image sensor(s)) may be a separate sensor(s) (e.g., optical sensor and/or image sensor) disposed behind the display, or an integrated sensor(s) disposed within (or intimately associated with) the display construct.
• a sensor(s) (e.g., video image sensor and/or sensor array) is configured as an autonomous unit supported on an opposite side of the transparent panel, which opposite side of the media display is a side of the media display that is (i) opposite to the side in which a local user is disposed and/or (ii) towards which the image is projected by the media display (e.g., in a unidirectionally projecting media display).
• the media display can be a display construct that is part of an integrated glass unit (IGU).
  • the media display can be coupled (e.g., attached via an adhesive and/or a fastener) to the supportive structure (e.g., tintable window).
  • the supportive structure can be part of an IGU.
  • the transparent display can be associated with the inner pane of an IGU, or externally coupled to an IGU that is devoid of a media display.
• the sensor(s) that capture an image in the locale in which the display is disposed are included in a camera.
• An optical focus (e.g., fixed focus) of the camera (e.g., which is disposed behind a transparent display) may be set such that the focal distance is different (e.g., significantly greater) than the distance from the camera to the transparent display, e.g., such that any image artifacts related to any light projected by the display and/or any visible structures of the display are muted by de-focusing (a numerical sketch follows below).
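The de-focusing argument can be made concrete with thin-lens arithmetic. The sketch below estimates the blur-disk diameter of a display pixel when the camera is focused on the user; every numeric value here is an illustrative assumption, not a value from the disclosure.

```python
# Illustrative numerical sketch of the de-focusing argument: a camera focused on
# the user renders the much-closer display pixels as large blur disks.

def blur_circle_m(f_m: float, aperture_m: float, focus_m: float, obj_m: float) -> float:
    """Thin-lens blur-disk diameter on the sensor for an object at obj_m when the
    lens (focal length f_m, aperture diameter aperture_m) is focused at focus_m."""
    return aperture_m * f_m * abs(focus_m - obj_m) / (obj_m * (focus_m - f_m))

f = 0.004            # 4 mm focal length (assumed)
aperture = f / 2.0   # f/2 aperture (assumed)
user_dist = 1.5      # camera focused on the user at ~1.5 m
display_dist = 0.05  # display pixels ~5 cm in front of the camera (assumed)

c = blur_circle_m(f, aperture, user_dist, display_dist)
pixel_pitch = 1.4e-6  # typical sensor pixel pitch (assumed)
print(f"blur disk ~{c * 1e6:.0f} um, i.e. ~{c / pixel_pitch:.0f} sensor pixels wide")
# A display pixel's stray light is spread across ~100 pixel widths (thousands of
# sensing pixels by area), while the user at the focal distance stays sharp.
```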
• Image processing may be used to remove or otherwise compensate for any light that might be emitted by the media display toward the camera, e.g., as sketched below.
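One plausible form of such compensation (an assumption for illustration, not the disclosure's prescribed method) is to subtract a calibrated estimate of the leaked display light from each captured frame; geometric warping and defocus blur of the displayed frame are omitted here for brevity.

```python
import numpy as np

# Hypothetical leakage-compensation step: subtract an estimate of display light
# leaked toward the camera, using the frame currently shown on the display and a
# per-pixel leakage gain calibrated once against a dark scene.

def compensate_leakage(captured: np.ndarray,
                       displayed: np.ndarray,
                       leak_gain: np.ndarray) -> np.ndarray:
    """captured, displayed: float32 images in [0, 1], same shape.
    leak_gain: calibration map giving the fraction of each displayed pixel's
    intensity that reaches the (defocused) camera image."""
    estimate = displayed * leak_gain                 # predicted back-projected glare
    return np.clip(captured - estimate, 0.0, 1.0)    # remove it from the capture

# Example with synthetic data:
h, w = 480, 640
captured = np.random.rand(h, w, 3).astype(np.float32)
displayed = np.random.rand(h, w, 3).astype(np.float32)
leak_gain = np.full((h, w, 3), 0.03, dtype=np.float32)  # assumed ~3% leakage
print(compensate_leakage(captured, displayed, leak_gain).shape)
```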
  • the camera may have an adjustable focus.
  • the adjustable focus may be manually and/or automatically adjusted.
• the focal point of the camera may be adjusted, e.g., automatically and/or by a user (e.g., using an application (app)).
• the sensor(s) may be configured for height adjustment, e.g., to match an eye level of the person being captured.
  • the sensor(s) may be operatively coupled (e.g., connected) to an actuator such as a motor (e.g., a servomotor).
• the actuator may comprise a servomechanism (e.g., abbreviated herein as “servo”).
• the actuator may use a feedback control scheme to correct the action of the sensor(s).
  • the actuator may be operatively coupled to one or more controllers (e.g., a dedicated controller and/or the control system of the facility).
  • the feedback scheme may comprise error-sensing negative feedback.
  • the actuator may control the displacement of the sensor(s).
  • the actuator may comprise, or be operatively coupled to, an encoder.
  • the actuator may be operatively coupled to a position feedback mechanism, e.g., to ensure the position of the sensor(s) is at the user’s gaze.
  • the sensor(s) may be operatively coupled to one or more controllers that include a feedback control scheme.
• the one or more controllers may receive error-correction signals to help control mechanical position of the sensor(s), speed of the sensor(s) movement, attitude, or any other measurable variables related to displacement of the sensor(s).
  • the feedback control scheme may comprise a closed-loop feedback control scheme.
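A minimal sketch of such an error-sensing negative feedback loop follows, assuming a proportional gain, a speed limit, and a simulated carriage (all illustrative choices rather than a prescribed implementation).

```python
# Sketch of closed-loop, error-sensing negative feedback driving the sensor
# carriage toward the measured gaze height. Gains and limits are assumptions.

def track_gaze(carriage_m: float, read_gaze_m, step_s: float = 0.05,
               kp: float = 2.0, max_speed_m_s: float = 0.10,
               tol_m: float = 0.005, max_steps: int = 200) -> float:
    """Drive the carriage height toward the gaze height returned by read_gaze_m()."""
    for _ in range(max_steps):
        error = read_gaze_m() - carriage_m   # error sensing: target minus actual
        if abs(error) < tol_m:
            break                            # within tolerance: stop correcting
        # Negative feedback: command a velocity proportional to the error,
        # clipped to the actuator's speed limit.
        velocity = max(-max_speed_m_s, min(max_speed_m_s, kp * error))
        carriage_m += velocity * step_s      # an encoder would confirm this move
    return carriage_m

# Carriage starts at 1.10 m; gaze measured at 1.32 m.
print(f"settled at {track_gaze(1.10, lambda: 1.32):.3f} m")
```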
• the actuator may facilitate automatically positioning the sensor(s) (e.g., camera) to a center of the user’s gaze (e.g., moving up-down and/or right-left).
  • the sensor(s) may be static or movable.
• the movement may be manually controlled by the local participant who is at the same locale as the sensor(s) (e.g., in the same facility such as in the same room). In some embodiments, (e.g., manual) preferences for positioning an adjustable sensor(s) are stored, e.g., and assigned per user, per media display, and/or per locale (e.g., per conference room and/or booth).
  • the preferences may later be recalled, e.g., for automatically controlling the sensor position in response to activation of that sensor(s) (e.g., camera) by the user.
  • the preferences may later be recalled, e.g., for automatically controlling sensor(s) position in response to activation of another sensor(s) (e.g., another camera) by the same user (e.g., the user preferences may be propagated to other media displays operatively coupled to vision sensor(s)).
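One way such preferences could be keyed, stored, and recalled is sketched below; the (user, display, locale) keying and the in-memory store are assumptions for illustration.

```python
# Sketch of per-user / per-display / per-locale sensor position preferences that
# can be recalled and propagated to other media displays. Keying is assumed.

from typing import Dict, Optional, Tuple

Key = Tuple[str, str, str]  # (user_id, display_id, locale_id)

class SensorPreferences:
    def __init__(self) -> None:
        self._store: Dict[Key, Tuple[float, float]] = {}  # (height_m, lateral_m)

    def save(self, user: str, display: str, locale: str,
             height_m: float, lateral_m: float) -> None:
        self._store[(user, display, locale)] = (height_m, lateral_m)

    def recall(self, user: str, display: str, locale: str) -> Optional[Tuple[float, float]]:
        # Exact match first; fall back to the same user's setting on any display,
        # so preferences propagate to other sensor-coupled media displays.
        exact = self._store.get((user, display, locale))
        if exact is not None:
            return exact
        for (u, _display, _locale), pos in self._store.items():
            if u == user:
                return pos
        return None

prefs = SensorPreferences()
prefs.save("alice", "display-7", "room-3", height_m=1.32, lateral_m=0.0)
print(prefs.recall("alice", "display-9", "booth-1"))  # propagated: (1.32, 0.0)
```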
  • movement is controlled to follow the optimal gaze point automatically, e.g., by using image recognition software.
  • facial feature tracking based at least in part on pattern recognition, can be optionally applied to the captured images.
• the facial feature may comprise eyes, pupils, nose (e.g., bridge and/or nostrils), eyebrows, ears, distance between eyebrows, cheeks, chin, mouth, border of face, or hair line.
• the sensor(s) position adjustment may use a combination of techniques. For example, user preferences may be propagated to other media displays operatively coupled to vision sensor(s) as an initial sensor(s) position, and fine-tuned (i) using image recognition software and/or (ii) by manual user adjustment.
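A sketch of the image-recognition step using a stock face detector follows; the Haar cascade and the 40% eye-line ratio are illustrative assumptions rather than the disclosure's prescribed method.

```python
import cv2

# Sketch: detect the largest face in a captured frame and derive a target point
# near the eye line, toward which the sensor carriage could be driven.

def gaze_target_px(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found: keep the last carriage position
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face box
    return (x + w // 2, y + int(0.4 * h))  # eye line sits ~40% down the face box

cap = cv2.VideoCapture(0)  # assumes a webcam is available
ok, frame = cap.read()
if ok:
    print("target pixel:", gaze_target_px(frame))
cap.release()
```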
  • FIG. 2 shows an example of a camera system 200 for aligning a location from which images are captured with a focal point of a user’s gaze toward the media display.
  • a transparent media display 210 projects video images 215 toward a local user 220 who is disposed on a viewing side of media display 210.
  • User 220 has a gazing direction 225 when looking at media display 210 when viewing a media stream, e.g., of a remote participant of a video conference and/or other digital collaboration.
• Media display 210 is supported by and/or is disposed on a transparent panel 230, which may comprise a tintable window.
  • Sensor(s) 214 (e.g., in a camera) is disposed on a movable carriage (e.g., servo-system) 250 that is supported by mounts 280 fixed relative to panel 230 and media display 210.
  • the mount is integrated or attached to: the media display, the supportive substrate, and/or to a framing thereof.
  • Fig. 2 shows an example of at least one controller 270 that is coupled to the movable carriage 250 for commanding movements of the carriage that place image sensor 214 in alignment with gazing direction 225.
  • Controller 270 is operatively coupled to a network 290, e.g., for streaming content between local system 200 and the remote systems (e.g., media displays and controller) of the remote participant(s).
  • the facility comprises a network.
  • the network may be a communication and/or power network.
  • the network may be coupled to a control system (e.g., that may comprise distributed network of controllers and/or a hierarchical control system).
  • the display construct, the image sensor(s), and/or the tintable window may be operatively coupled to the network, e.g., and to the control system.
  • the control system may control at least one other device of the facility such as devices adjusting to the environment of the facility, geo-location related devices, health, safety, entertainment, hospitality, work, and/or educational devices.
• At least a portion of the network may (i) be the first network deployed in the facility, (ii) be disposed at an envelope of the facility, (iii) communicate power and communication on a single cable of the network, (iv) comprise electrical and optical cabling, (v) communicate two or more communication types on a single wire, and/or (vi) transmit communication and power on a single wire.
• the network may be configured to control different device types of the facility in which it is disposed.
  • the network may be configured for environmental, health, and/or safety control.
  • the local environment around local system 200 may include objects and/or surfaces perceived by user 220 during the collaboration.
  • Some of the local environment may be seen through transparent media display 210 (e.g., portions not blocked by an image of the remote participants and/or auxiliary objects presented), and some objects are between user 220 and transparent media display 210.
  • a desk or table 280 may provide a work surface for user 220 at a lower end of media display 210.
• Virtual object(s) may be added perspectively to the images being displayed by media display 210, e.g., to enhance an illusion that the remote participant(s) are in the local environment.
• the virtual objects can include a virtual extension of table 280 that appears to local user 220 to perspectively extend into media display 210.
• FIG. 3 shows an example of a front view of a digital collaboration system 300 having a transparent media display 310. Behind media display 310, image sensor(s) (e.g., in a camera) 320 is mounted on a movable carriage 330.
• Carriage 330 can be (e.g., servo and/or manually) controlled for vertical 340 and/or horizontal 341 movement with respect to gravitational center 342, e.g., to position sensor(s) 320 in a location corresponding to the local user’s gaze.
• sensor(s) (e.g., comprising a video image sensor) is located separate from and/or behind a transparent media display, with behind being a side of the display away from the user and/or opposite to the direction of media projection by the display.
• a transparent display, such as a transparent organic light emitting diode (TOLED) array, can be configured to project an image substantially unidirectionally (e.g., from a front surface). At times, some portion of the light may be projected back toward the image sensor(s).
  • the image sensor(s) may have an optical focal point such that a user located at a distance looking at the media display is in focus (e.g., at the focal point or substantially at the focal point), while the media display itself (e.g., the projecting entities of the media display) appears out of focus.
  • the user may be disposed in front of the media display.
• the projecting entities of the media display may appear to the sensor(s) out of focus because they are located away from the focal point of the sensor(s) (e.g., and closer to the sensor(s) as compared to the user).
• Any light leakage (e.g., glare) toward the sensor(s) from emitting entities of the media display (e.g., from the display pixels) may be spread over a plurality of sensors (e.g., sensing pixels) in the captured image, e.g., because of being out of focus.
  • the emitting entities may include emitting entities that are within the field of view of the image sensor.
• the emitting entities may include emitting entities that are out of the field of view of the image sensor, e.g., and adjacent to the field of view of the image sensor.
  • the brightness of any emitting entity (e.g., TOLED pixel) of the display as detected by any sensor (e.g., sensing pixel) may be (e.g., markedly) reduced (e.g., eliminated).
  • the reduction may comprise filtering (e.g., optical filtering).
  • the filtering may relate to the media projected by the projecting entities (e.g., that contribute to the glare).
  • the reduction of glare may facilitate transmission of an image of a local user captured by the local sensor(s) through the transparent display, as transmitted to a remote user.
• the image captured by the local sensor(s) that is transmitted to the remote user(s) may be crisp and/or minimally affected (e.g., unaffected) by projection of the local media display.
  • FIG. 4 depicts some example relationships between display pixels 401 and camera pixels.
  • View 400 is a view from a front side of a media display wherein an array of LED pixels 401 project an image to a local user.
• the front side of the display is the side of the display observed by the user and/or towards which the media is displayed.
  • a shaded area 402 corresponds to a region of the media display through which an image sensor behind the media display receives light being captured for a media stream to remote user(s).
  • View 430 is a view from a rear side of the media display.
  • each pixel of the camera image may capture an area within region 402 smaller than a pixel size of the media display. Without wishing to be bound to theory, this may be because of convergence of light rays directed onto the pixels of the image sensor.
  • the de-focused light from a pixel of the media display can spread over a number of camera pixels, such that a captured image may be influenced (e.g., mostly defined) by the light of the exterior scene passing through the transparent media display.
  • an integrated image sensor is disposed within, or is intimately associated with, the transparent display assembly.
• a transparent substrate or set of substrates joined together in a common construct may include light-emitting entities (e.g., pixels) for the media display and light-sensing camera pixels (also known as “sensels”) deposited on the common construct (e.g., as part of the media display construct).
• the sensels and the emitting entities of the media display can be part of a laminate or part of a common integrated glass unit (IGU).
  • Various patterns can be employed for arranging the two pixel types to optimize imaging performance and/or minimize interactions between them.
  • light emitting entities may be provided for separate primary colors (e.g., RGB sub-pixels).
  • the number of such elements, their surface areas, and/or arrangement patterns may depend upon an overall design and/or manufacturing process of the media display.
• the sensels may be arranged in a matrix (e.g., a grid of sensels).
  • the projecting entities of the media display may be arranged in a matrix (e.g., a grid of projecting entities such as an LED grid).
• the grid of the sensels may be offset from the grid of projecting entities of the media display (e.g., to ensure optimal sensing of the sensels through the emitting entity matrix of the media display).
• the degree of offset between the two grids may facilitate minimum interference and/or overlap (e.g., no overlap, or substantially no overlap) between sensels and the projecting entities of the media display.
• Each of the light-emitting entities of the media display may occupy a larger surface area as compared to each of the light-sensing sensels. In some embodiments, a sensel may have a size that is equal, or substantially equal, to a projecting entity of the media display (e.g., LED pixel).
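The offset between the two grids can be sketched as follows; the half-pitch offset, the pitch, and the grid sizes are illustrative assumptions, not dimensions from the disclosure.

```python
# Sketch of the offset-grid idea: emitter pixels on one pitch, sensels on the
# same pitch but shifted by half a pitch in x and y so the two grids do not
# overlap when viewed in 2D from the user's side.

PITCH_UM = 300.0  # assumed emitter pitch in micrometers

def emitter_positions(cols: int, rows: int):
    return [(c * PITCH_UM, r * PITCH_UM) for r in range(rows) for c in range(cols)]

def sensel_positions(cols: int, rows: int):
    off = PITCH_UM / 2.0  # half-pitch offset minimizes overlap with emitters
    return [(c * PITCH_UM + off, r * PITCH_UM + off)
            for r in range(rows) for c in range(cols)]

emitters = emitter_positions(4, 4)
sensels = sensel_positions(4, 4)
assert not set(emitters) & set(sensels)  # no sensel coincides with an emitter site
print(emitters[:2], sensels[:2])
```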
  • the light emitting entities may comprise TOLED pixels.
• sensels of a video image sensor array are disposed behind and/or between the media display pixels (e.g., in 2D from the user’s perspective).
• a single lens, or a composite lens, may be incorporated (e.g., at least with respect to the imaging sensels), to capture the requested image.
  • the glass pane of the window can be patterned and/or controlled to provide an adjustable tint.
• the pattern and/or tint may function as an iris or filter for the sensor(s) (e.g., camera), e.g., embedded within the laminate and/or IGU.
• separate groupings of sensels may be constructed at respective locations on the display construct. In some embodiments, separate sensel groups are spaced apart from one another. Electronic switching of the outputs of separate groups of sensels may be used to select an effective camera height from different respective locations on the display construct. A continuous expanse of sensels may be utilized to cover an area greater than what is used at any one time, to capture an image. Electronic switching may select between different overlapping groups of sensels to choose from different heights, e.g., at a greater resolution (see the sketch below).
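A sketch of such electronic switching follows, assuming a sensel expanse four times taller than one captured frame; the array sizes are illustrative assumptions.

```python
import numpy as np

# Sketch of electronic switching: the full sensel expanse is larger than one
# camera frame, and a band of rows is switched in to set an effective camera
# height with no mechanical motion. Overlapping bands give fine height steps.

SENSEL_ROWS, SENSEL_COLS = 4320, 1920  # full expanse deposited on the construct
FRAME_ROWS = 1080                      # rows read out for one captured frame

def read_frame(sensels: np.ndarray, center_row: int) -> np.ndarray:
    """Switch in the group of sensel rows centered on center_row."""
    top = int(np.clip(center_row - FRAME_ROWS // 2, 0, SENSEL_ROWS - FRAME_ROWS))
    return sensels[top:top + FRAME_ROWS, :]

sensels = np.random.rand(SENSEL_ROWS, SENSEL_COLS).astype(np.float32)
seated = read_frame(sensels, center_row=2900)    # lower band: seated user
standing = read_frame(sensels, center_row=1400)  # higher band: standing user
print(seated.shape, standing.shape)
```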
• a high-magnification view 460 is shown of an integrated display and sensor construct 463 integrating light-emitting regions 461 with light-sensing regions 462.
• Integrated construct 463 may include a plurality of transparent layers joined together (e.g., as a laminate) and/or include an integrated glass unit (IGU).
  • Light-emitting regions 461 and light-sensing regions 462 may be formed on different substrates or on a common substrate.
• the integrated construct may be constructed such that external light sensed by the image sensing sensels first passes through at least a portion of the media display before reaching the sensels.
• light-sensing regions 462 may be located offset from (e.g., between) light-emitting regions 461 (containing light emitting entities of the media display) when viewed in 2D from a location occupied by the user to be imaged.
• a transparent media display is used to enhance the immersive experience of a collaborative digital communication (e.g., video conference), e.g., by blending the virtual participant’s image into the local environment of the local participant(s), while stripping away incongruous elements of the remote environment of the remote participant(s) and/or auxiliary content to be presented (e.g., presentation, data sheet, article, picture, video, or any other document or exhibit).
  • the media stream from the remote participant(s) may be altered before being displayed on the local transparent media display, e.g., by having a portion of the incoming information surrounding the material to be communicated (e.g., the virtual participant’s image and/or auxiliary presentable content) removed.
  • the removal of the incongruous content may facilitate retaining at least partial transparency of the media display in the area that was dedicated for the incongruous content.
  • emitting entities in the area of the media display in which the incongruous content should have been displayed may be emitting dimmer light, or no light, e.g., to facilitate at least partial transparency of that area.
  • the at least partial transparency of that area may facilitate viewing therethrough by a local viewer to provide an illusion that the virtual remote participant’s image and/or auxiliary presentable content is disposed in the local environment.
  • a remote background around the virtual image of the remote user and/or remote presentation content is replaced with a local (e.g., actual and real) view through the transparent display of a local environment of the local participant(s) (e.g., local viewer(s)).
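One simple way such background redaction could be performed (an illustrative assumption, not the disclosure's stated algorithm) is to compare each frame against a pre-captured empty-background frame and zero the alpha of matching pixels, leaving those display areas unlit and therefore see-through.

```python
import numpy as np

# Sketch of background redaction: pixels close to a stored empty-background
# frame are treated as background and given zero alpha, so the corresponding
# area of the transparent display stays unlit (see-through to the local room).

def cutout_rgba(frame: np.ndarray, background: np.ndarray,
                threshold: float = 0.12) -> np.ndarray:
    """frame, background: float32 RGB in [0, 1]. Returns RGBA with alpha=0
    where the frame matches the stored background."""
    diff = np.linalg.norm(frame - background, axis=-1)  # per-pixel color distance
    alpha = (diff > threshold).astype(np.float32)        # 1 = keep (foreground)
    return np.dstack([frame, alpha])

h, w = 720, 1280
background = np.random.rand(h, w, 3).astype(np.float32)
frame = background.copy()
frame[200:600, 500:800] += 0.5  # synthetic "person" region differing from background
rgba = cutout_rgba(np.clip(frame, 0, 1), background)
print("foreground fraction:", float(rgba[..., 3].mean()))
```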
• the area around the virtual remote participant’s image and/or auxiliary content may provide visibility of the local environment, e.g., to enhance an illusion that the remote user is present in the local environment.
  • the virtual participant’s image may be generated at an image scale that causes the size of the image on the local media display to be at or close to actual life-size.
  • physical furnishings are deployed in the local environment in ways that provide additional cues that further enhance the illusion.
  • a table or desk in the local environment placed in front of the media display may be oriented in an alignment that would extend into a plausible juxtaposition with the remote participant.
• a virtual extension of the virtual perspective add-on overlay object (e.g., a plant, or furniture such as a table) may be displayed. The virtual perspective object may provide an illusion of extension between the local participant(s) and the remote participant(s) and/or virtual remote auxiliary content.
  • each combined field of view for each respective participant can include the same matching desk or table on both sides of a video conference for creating a convincing telepresence illusion.
  • virtual overlays are added to the displayed media stream that are configured to imitate the local environment (e.g., a ledge, a plant, or any other object).
  • the virtual overlay object may match the aesthetics of the local environment.
  • the virtual overlay may be added automatically and/or per user’s request.
• the virtual overlay may be personalized and/or chosen by a user such as the local participant(s) (e.g., using an app).
  • An overlay may be made to appear as an extension of a local furnishing (e.g., a virtual extension of a real local table or desk that is located in front of the local media display), or may represent a separate object (e.g., furnishing) having properties otherwise consistent with the local environment (e.g., aesthetic of the environment, usage of the environment, and/or purpose of the environment).
  • the virtual overlay may be a perspective overlay. For example, a virtual overlay may be made to appear closer in the projected image to the viewing user (e.g., in front of the remote user in the projected image), thereby providing a virtual transition leading to the remote virtual image of the user and/or auxiliary content for presentation.
  • a virtual object represented by the overlay may depict an object that spatially (e.g., perspectively) appears to be disposed between the local viewer(s) and the portion of the projected media stream displayed on the local media display that corresponds to the cut-out image of the remote participant and/or auxiliary content devoid of remote background.
• An overlay may be added to (e.g., merged with) a media stream, e.g., so that a virtual object is configured to flank a depiction of the virtual image of the remote participant and/or remote auxiliary content.
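A sketch of merging such an overlay with the cutout stream using standard "over" alpha compositing follows; the layer order and the RGBA convention are illustrative assumptions.

```python
import numpy as np

# Sketch: composite a perspective overlay (e.g., a virtual ledge) over the
# remote-user cutout so the overlay appears in front of the remote user.

def over(top_rgba: np.ndarray, bottom_rgba: np.ndarray) -> np.ndarray:
    """Standard 'over' compositing of two straight-alpha RGBA images."""
    ta, ba = top_rgba[..., 3:4], bottom_rgba[..., 3:4]
    rgb = top_rgba[..., :3] * ta + bottom_rgba[..., :3] * ba * (1 - ta)
    alpha = ta + ba * (1 - ta)
    safe = np.where(alpha > 0, alpha, 1.0)  # avoid division by zero
    return np.dstack([rgb / safe, alpha])

h, w = 720, 1280
cutout = np.zeros((h, w, 4), np.float32)       # remote cutout, mostly alpha=0
cutout[200:650, 500:800] = (0.8, 0.6, 0.5, 1)  # opaque remote-user region
ledge = np.zeros((h, w, 4), np.float32)
ledge[600:720, :] = (0.4, 0.25, 0.1, 1.0)      # virtual ledge along the bottom edge

composited = over(ledge, cutout)  # ledge in front: the user appears behind it
print(composited.shape, float(composited[..., 3].mean()))
```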
  • Fig. 5 shows an example media display 500 projecting a virtual image 510 of a remote user.
• a dashed line 520 shows a cutout profile bordering image 510 that delineates between a foreground region to be reproduced that includes the remote user and a background region for which no image is to be projected (e.g., unless an overlay is added).
  • a table 530 is located as a real local furnishing of a local environment in front of media display 500.
• Area 540 of the media display represents a background that is redacted from the remote virtual image, and is left at least partially or entirely transparent, so that the real local environment can be viewed therethrough.
  • Fig. 6 depicts an example of a video conference in progress in a framing setup 600, which video conference is between a first local user 610 and a second remote user 620 presenting auxiliary content (e.g., data sheet) 660.
  • a media display construct 630 disposed in front of local user 610 is projecting an image of user 620 along with a table overlay (e.g., a virtual table) 640 and a planter overlay (e.g., virtual planter) 650.
  • a real ledge or table 645 is present in the local environment of user 610.
  • Overlay 640 may depict a furnishing consistent with, and/or having an appearance of being a (e.g., perspective) extension of ledge 645.
  • Fig. 7 depicts an example of a video conference 700 in progress between a first, local user 710 and a second, remote user 720.
  • a media display construct 730 projects an image of user 720 along with a virtual table overlay 740, with the image of user 720 being cut-out and/or placed so that it does not overlap with overlay 740.
  • User image 720 does not extend to the bottom edge of media display 730, but instead a lower edge of the cut-out coincides with an edge 750 of overlay 740.
• this may enhance the illusion that remote user 720 appears farther away from local user 710 than the table or ledge represented by overlay 740.
  • a background of remote user 720 is redacted such that user 710 can see the real local surrounding through portion 760 that forms the local background of the virtual image of remote user 720.
  • shared auxiliary content is displayed to, and may optionally be manipulated by, participant(s) to a digital collaboration (e.g., simultaneously and/or in real time).
  • the right to manipulate the content can be restricted, e.g., by the presenter of the auxiliary content, according to a hierarchy of the participant(s) in the organization, and/or according to a hierarchy of the participants in the meeting.
  • a meeting organizer may have content manipulation rights, whereas a nonorganizer may not.
  • a meeting presenter may have content manipulation rights of his presented content, whereas a non-presenter may not.
  • a manager participant may have content manipulation rights of his presented content, whereas a participant at a non-managerial position may not.
  • the content manipulation rights may be prescribed manually (e.g., by the meeting organizer and/or presenter), e.g., before the meeting, during the meeting and/or in real time as the content is presented.
• the manipulation right prescription can be visible and/or manipulable via an app.
  • the manipulation right prescription may be presented on the media display, e.g., during presentation (e.g., in a dropdown menu and/or screen).
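The rights hierarchy sketched in the preceding bullets (organizer, presenter of the content, manual grants) could be encoded as a simple predicate evaluated before an edit is applied. The sketch below is illustrative only; the role names, rule order, and data shapes are assumptions, not the patent's prescribed scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    is_organizer: bool = False
    presented_content: set = field(default_factory=set)  # content IDs this user presents

def may_manipulate(user: Participant, content_id: str,
                   manual_grants=frozenset()) -> bool:
    """Return True if `user` may manipulate `content_id`.

    Illustrative rule order: explicit manual grant (e.g., prescribed by the
    organizer or presenter), then organizer status, then presenter-of-this-content.
    """
    if (user.name, content_id) in manual_grants:
        return True
    if user.is_organizer:
        return True
    return content_id in user.presented_content

alice = Participant("alice", is_organizer=True)
bob = Participant("bob", presented_content={"deck-1"})
carol = Participant("carol")
assert may_manipulate(alice, "deck-1")
assert may_manipulate(bob, "deck-1") and not may_manipulate(bob, "doc-2")
assert may_manipulate(carol, "deck-1", manual_grants={("carol", "deck-1")})
```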
  • the app may be executable on (i) a transitory processor such as of a smartphone, laptop, tablet, or (ii) other computing device of a participant.
• Auxiliary content may include text, graphic presentations, graphs, drawings, paintings, and/or a whiteboard capability. In some embodiments, a transparent display includes at least one region that has a touch screen functionality.
  • a support app may be used that communicates with the media displays (e.g., with the controllers or image processors of the media displays). The support app can be configured to handle the auxiliary content (e.g., controlling access, creating and editing text, graphics, or other content).
• the support app may react to inputs generated by (e.g., each of) the participants (e.g., conveying content edits and/or modifying how the content is displayed).
• the support app may relay the manipulation (e.g., revisions and/or comments) (i) to a central data source or (ii) directly to (e.g., each) processor associated with the media display participating in the digital communication.
  • the support app may provide functionality for defining (e.g., selecting from a menu) virtual elements to be displayed such as overlays (e.g., of furnishings, plants, or any other virtual objects). Selections defining a virtual environment may be made before, during, and/or after the digital communication (e.g., video conference) has launched.
  • Configurations for particular media display systems may be stored for use in automatically configuring calls involving the stations.
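The relay behavior attributed to the support app above, forwarding a manipulation either (i) to a central data source or (ii) directly to each display's processor, resembles a simple fan-out pattern. The class and method names below are invented for illustration.

```python
from typing import Callable, List, Optional, Protocol

class DisplayProcessor(Protocol):
    def apply_edit(self, edit: dict) -> None: ...

class SupportAppRelay:
    """Fan out content manipulations (revisions/comments) from one participant."""

    def __init__(self, central_store: Optional[Callable[[dict], None]] = None):
        self.central_store = central_store            # option (i): central data source
        self.processors: List[DisplayProcessor] = []  # option (ii): per-display processors

    def register(self, proc: DisplayProcessor) -> None:
        self.processors.append(proc)

    def relay(self, edit: dict) -> None:
        # Route the manipulation to the central source, to each display, or both.
        if self.central_store is not None:
            self.central_store(edit)
        for proc in self.processors:
            proc.apply_edit(edit)

class LoggingProcessor:
    def __init__(self):
        self.applied = []
    def apply_edit(self, edit: dict) -> None:
        self.applied.append(edit)

relay = SupportAppRelay()
local_display, remote_display = LoggingProcessor(), LoggingProcessor()
relay.register(local_display)
relay.register(remote_display)
relay.relay({"content_id": "doc-1", "op": "annotate", "text": "revise section 2"})
assert local_display.applied == remote_display.applied
```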
• Fig. 6 depicts an example of a virtual document 660 displayed on media display 630 and projected to user 610.
  • a remote media display at a remote location of user 620 could likewise project an image of virtual document, e.g., similar to 660.
• Auxiliary content, the media display, the virtual overlays, or any combination thereof are controllable (e.g., manipulatable) using a digital twin of the enclosure in which the media display is disposed (e.g., in a touchless manner).
  • the digital twin may include a database in a local server and/or in a cloud server, that stores content and/or rendering information that may be used to generate a representation of the auxiliary content to be shown on (e.g., each of) the media displays of the participants to a digital collaboration, and/or manipulation toolkit that can be utilized during the digital collaborative communication.
• a support app and/or a digital twin is activated to configure details of the session. For example, network access information (e.g., addresses), media display and media streaming capabilities, image placement, elements of a virtual environment (e.g., overlays), and/or any pre-defined auxiliary content may optionally be defined using the support app.
• the participants may take their places and launch their conference session via a network or networks (e.g., transport media and servers) linking their media displays (e.g., transparent displays, image sensors, controllers, and/or processors).
• media streams between the participants' media displays may be initiated such that a "cut-out" representation of remote participant(s) is projected on each transparent media display. For example, a portion of a media stream surrounding an image of the remote participant may be suppressed from being displayed, which suppression enables viewing at least a portion of the local environment through a portion of the local media display corresponding to the media stream portion that is suppressed. If selected (e.g., manually and/or automatically), appropriate overlays are merged with the media stream to respective media displays.
• the redacted (e.g., cut-out) representation of a participant may be captured using image sensor(s) that is located at a capture location that corresponds to a gazing region of the corresponding user directed towards the corresponding media display.
  • the image sensor(s) may capture images through at least a portion of the transparent media display.
• the capture location(s) may be fixed or adjustable (e.g., manually and/or automatically adjustable). When an image sensor has an adjustable capture location, the sensing location may be adjusted, before or during a video conference session, according to a direction in which the imaged user gazes towards the transparent media display.
  • the capture location may be adjusted to focus on a central, or on a substantially central, position such as (i) between pupils of the imaged user, (ii) between their brows, (iii) at the end of a nose bridge of the user, and/or (iv) any other capture location to focus, e.g., as disclosed herein.
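As a sketch of the capture-location selection just described, the helper below picks a camera aim point from detected 2D facial landmarks, preferring the mid-pupil point and falling back to the brows and then the nose bridge. The landmark names and fallback order are assumptions consistent with the text, not specified values.

```python
def capture_target(landmarks: dict) -> tuple:
    """Choose a camera capture point from 2D facial landmarks (pixel coords).

    Preference order follows the text: midpoint between the pupils, then
    between the brows, then the end of the nose bridge.
    """
    def midpoint(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    if "left_pupil" in landmarks and "right_pupil" in landmarks:
        return midpoint(landmarks["left_pupil"], landmarks["right_pupil"])
    if "left_brow" in landmarks and "right_brow" in landmarks:
        return midpoint(landmarks["left_brow"], landmarks["right_brow"])
    return landmarks["nose_bridge_end"]

# e.g., pupils detected at (300, 210) and (360, 212) -> aim at (330.0, 211.0)
assert capture_target({"left_pupil": (300, 210), "right_pupil": (360, 212)}) == (330.0, 211.0)
```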
• a conference session may include auxiliary content.
• the touchscreen portions of the media display(s), a support app, and/or a virtual twin (e.g., data server) may be used to handle the auxiliary content.
  • the touchscreen portions of the media display(s), support app, and/or virtual twin may be used to display and/or adjust virtual overlays before and/or during a conference session, e.g., if requested to enhance the integration of the immersive digital experience, for aesthetic considerations, for branding considerations, or just for fun.
  • Fig. 8 shows an example of operations that may be performed in connection with a collaborative digital communication (e.g., video conference) session between remote participants
  • a virtual environment may be defined (e.g., retrieved or selected) and optional auxiliary content may be set up and/or retrieved.
  • video conference links associated with the participants are initiated at each of the corresponding media display systems.
• images from respective media streams may be processed and displayed so that (i) if at least one of the media displays of the collaborative digital experience is a transparent display, background-redacted (e.g., cut-out) images of the virtual image of the participant(s) are displayed, (ii) any auxiliary content is displayed, and (iii) any selected overlays are displayed, on the respective media display.
  • the positions and/or focus of any adjustable cameras may be adjusted in an operation 805.
• Camera adjustment may be vertical and/or horizontal, and may be manual and/or automatic. In an operation 806, a touchscreen, support app, and/or virtual twin may be used to interact with the video conference streams (e.g., to display and/or manipulate (e.g., adjust) overlays or auxiliary content).
• the network links may be closed in operation 807. Preferences of participants, virtual overlays, camera, and/or media display settings may be stored (e.g., on the network).
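The Fig. 8 flow can be summarized as a session skeleton: define the virtual environment, open links, render redacted streams with overlays, then close links and persist preferences. Everything below (class names, the mapping of steps to 801-807) is an illustrative assumption, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Station:
    name: str
    is_transparent: bool = True
    rendered: list = field(default_factory=list)

    def render(self, redact_background: bool, overlays: list) -> None:
        mode = "cut-out" if redact_background else "full-frame"
        self.rendered.append((mode, tuple(overlays)))

def run_session(stations: list, overlays: list) -> dict:
    """Skeleton of the Fig. 8 operations (801-807); names are invented."""
    links = [f"link:{s.name}" for s in stations]              # 802: initiate links
    for s in stations:                                        # 803/804: render streams
        s.render(redact_background=s.is_transparent, overlays=overlays)
    # 805: camera position/focus adjustment would occur here (manual/auto).
    # 806: touchscreen/app/digital-twin interaction with streams would occur here.
    links.clear()                                             # 807: close network links
    return {s.name: s.rendered[-1] for s in stations}         # persist preferences

prefs = run_session([Station("portal-A"), Station("pod-B")], ["virtual table overlay"])
assert prefs["portal-A"][0] == "cut-out"
```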
  • a digital twin (e.g., virtual twin) is used.
• the digital twin may provide a model of a facility (e.g., comprising a building or buildings), including the structure of the facility, various (e.g., network-connected) devices in the facility, and network components in or coupled to the facility. In some embodiments, the digital twin includes representations of one or more transparent media display systems along with predetermined overlay(s).
  • the digital twin may include a database for storing auxiliary content data, user preferences, media display preferences, camera preferences, and/or various definitions.
• the virtual model may comprise an electronic file associated with the facility, device(s), and/or network(s) such as a Building Information Model (BIM) (e.g., an Autodesk Revit® file or similar facility-related file).
• a control interface to the digital twin can be configured to permit authorized users to initiate changes in the operation of various target devices (e.g., including media display(s)), e.g., since the digital twin links up each represented target element with (e.g., all) the needed information to select and/or control that target device (e.g., media display).
• the target device may comprise a media display system. Users may initiate changes in how auxiliary content is displayed and/or changes to the auxiliary content itself. Via the media display system (e.g., using a touchscreen and/or via remote communication comprising gesture or sound recognition), a user may control any other device operatively coupled to the network, e.g., through the digital twin.
  • dynamic elements in the digital twin include target (e.g., device) settings.
  • the target setting may comprise (e.g., existing and/or predetermined): tint values, temperature settings, and/or light switch settings for the facility.
  • the target settings may comprise available actions in media displays, such as controlling auxiliary content and/or overlays.
  • the available actions may comprise menu items and/or hotspots in displayed content.
• the digital twin may include virtual representation of the target, of movable objects (e.g., chairs or doors), and/or of occupants (actual images from a camera or from stored avatars). In some embodiments, the dynamic elements can be targets (e.g., devices) that are newly plugged into the network, and/or disappear from the network (e.g., due to a malfunction or relocation).
  • the digital twin can reside in any circuitry (e.g., processor) operatively coupled to the network.
• the circuitry in which the digital twin resides may be in the facility, outside of the facility, and/or in the cloud. In some embodiments, a two-way link is maintained between the digital twin and a real circuitry.
  • the real circuitry may be part of the control system (e.g., of the facility).
  • the real circuitry may be included in the master controller, network controller, floor controller, local controller, or in any other node in a processing system (e.g., in the facility or outside of the facility).
• the two-way link can be used by the real circuitry to inform the digital twin of changes in the dynamic and/or static elements, e.g., so that the 3D representation of the enclosure can be updated (e.g., in real time).
  • the two-way link may be used by the digital twin to inform the real circuitry of manipulative (e.g., control) actions entered by a user on a mobile circuitry.
  • the mobile circuitry can be a remote controller (e.g., comprising a handheld pointer, manual input buttons, or touchscreen) that may execute the support app.
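A minimal sketch of the two-way link follows: the real circuitry pushes element changes into the twin so its 3D representation stays current, while user actions entered through the twin (e.g., from a mobile circuitry running the support app) are queued for the real circuitry to execute. All class, method, and field names are invented for illustration.

```python
class DigitalTwin:
    """Minimal stand-in for a facility digital twin."""
    def __init__(self):
        self.elements = {}          # model elements keyed by ID
        self.pending_actions = []   # user-entered control actions

    # Real circuitry -> twin: keep the 3D representation current.
    def update_element(self, element_id: str, state: dict) -> None:
        self.elements.setdefault(element_id, {}).update(state)

    # User (mobile circuitry/support app) -> twin -> real circuitry.
    def queue_action(self, element_id: str, command: str) -> None:
        self.pending_actions.append({"target": element_id, "command": command})

class RealCircuitry:
    """Controller-side endpoint of the two-way link."""
    def __init__(self, twin: DigitalTwin):
        self.twin = twin
        self.executed = []

    def report_change(self, element_id: str, state: dict) -> None:
        self.twin.update_element(element_id, state)  # e.g., display moved, door opened

    def drain_actions(self) -> None:
        while self.twin.pending_actions:
            self.executed.append(self.twin.pending_actions.pop(0))

twin = DigitalTwin()
ctrl = RealCircuitry(twin)
ctrl.report_change("display-905", {"power": "on"})
twin.queue_action("display-905", "show_overlay:virtual_table")
ctrl.drain_actions()
assert ctrl.executed[0]["target"] == "display-905"
```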
• Fig. 9 shows an example of a control system in which a real, physical enclosure (e.g., room or building) 900 includes a controller network for managing interactive network devices under control of a processor 901 (e.g., a master controller).
  • the structure and contents of building 900 are represented in a 3D model digital twin 902 as part of a modeling and/or simulation system executed by a computing asset.
• the computing asset may be co-located with, or remote from, enclosure 900 and processor (e.g., master controller) 901.
• a network link 903 in enclosure 900 connects processor 901 with a plurality of network nodes including an interactive target 905 such as a media display. Interactive target 905 is represented as a virtual object 906 in digital twin 902.
  • a network link 904 connects processor 901 with digital twin 902. In some embodiments, the digital twin resides in processor 901.
  • a user located in enclosure 900 carries a handheld control 907 that may have a circuitry (e.g., processor) for executing a support app and a pointing capability (e.g., to couple with the target 905).
  • the location of handheld control 907 may be tracked, for example, via a network link with digital twin 902 (not shown).
  • the link may include some transport media contained within network 903.
  • Handheld controller 907 is represented as a virtual handheld controller 908 within digital twin 902. Based at least in part on the tracked location and pointing capability of handheld controller 907, when the user initiates a pointing event (e.g., aiming at a particular target and pressing an action button on the handheld controller) it is transmitted to digital twin 902.
• digital twin 902 may identify an intended action directed to a target (e.g., represented as a digital ray 909 from the tracked location in digital twin 902). Digital ray 909 intersects with virtual device 906 at a point of intersection 910. A resulting interpretation of actions made by the user in the digital twin 902 is reported by digital twin 902 to processor 901 via network link 904. In response, processor 901 relays a control message to interactive device 905 to initiate a commanded action, e.g., in accordance with a gesture (or other input action) made by the user using handheld controller 907.
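The digital-ray interaction of Fig. 9 reduces to a ray-geometry test in the twin's coordinate frame. The sketch below, with assumed geometry conventions (the target modeled as an axis-aligned square patch lying in a plane), shows one way the point of intersection (cf. 910) could be computed.

```python
import numpy as np

def ray_hits_target(origin, direction, target_center, target_normal,
                    target_half_extent) -> bool:
    """Test whether a pointing ray intersects a rectangular virtual target.

    The target (e.g., virtual device 906) is modeled as an axis-aligned square
    of half-width `target_half_extent` lying in the plane with the given
    center and unit normal. All geometry is in the twin's coordinate frame.
    """
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    center, normal = np.asarray(target_center, float), np.asarray(target_normal, float)
    denom = direction @ normal
    if abs(denom) < 1e-9:                       # ray parallel to target plane
        return False
    t = ((center - origin) @ normal) / denom
    if t <= 0:                                  # target is behind the controller
        return False
    hit = origin + t * direction                # point of intersection (cf. 910)
    return bool(np.all(np.abs(hit - center) <= target_half_extent))

# Controller at the room origin pointing along +x at a wall display 3 m away.
assert ray_hits_target([0, 0, 0], [1, 0, 0], [3, 0, 0], [1, 0, 0], 0.5)
```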
• a video camera is placed behind a transparent display for capturing images of a local user.
  • An immersive experience can be obtained when an image of a remote participant of a video conference is blended with a present (local) environment (real and/or augmented) using a transparent media display (e.g., TOLED).
  • the transparent display construct is coupled to a structure (e.g., a supportive structure that can be a fixture or a non-fixture).
  • the structure may comprise a window, a wall, or a board.
  • the display construct may be coupled to the structure, e.g., with a fastener.
• the distance (e.g., between the display construct and the structure) may be at most about 0.5 meters (m), 0.4m, 0.3m, 0.2m, 0.1m, 0.05m, 0.025m, or 0.01m.
• Examples of fasteners, media display, display construct, supportive structure, control system and network, can be found in International Patent Application Serial No. PCT/US20/53641, which is incorporated herein by reference in its entirety.
• a display construct that is coupled with a viewing (e.g., a tintable viewing) window.
  • the viewing window may include an integrated glass unit.
  • the display construct may include one or more glass panes.
  • a window surface in a facility is utilized to display the various media using the glass display construct.
  • the display may be utilized for (e.g., at least partial) viewing an environment external to the window (e.g., outdoor environment), e.g., when the display is not operating.
  • the display may be used to display media (e.g., as disclosed herein), to augment the external view with (e.g., optical, real, and/or virtual) overlays, augmented reality, and/or lighting (e.g., the display may act as a light source).
  • the media may be used for entertainment and non-entertainment purposes.
  • the display may be used for medical, security, educational, informative, monetary, hospitality, and/or other purposes.
  • the media may be used for work (e.g., data analysis, drafting, and/or video conferencing).
  • the media may be manipulated (e.g., by utilizing the display construct, any control tools, gesture control, and/or related apps such as disclosed herein).
  • Utilizing the display construct can be direct or indirect. Indirect utilization of the media may be using an input device such as via a mobile circuitry (e.g., controller) such as an electronic mouse, a stylus, or a keyboard.
  • the input device may be communicatively (e.g., wired and/or wirelessly) coupled to the media.
  • Direct utilization may be by using the display construct as a touch screen using a user (e.g., finger) or a directing device (e.g., an electronic pen or stylus).
• the directing device may be made of, and/or coated with, a low abrasive material (e.g., a polymer).
  • the low abrasive material may be configured to facilitate (e.g., repeatedly) contacting the display construct with minimal damage (e.g., scratching) to the display construct.
• the low abrasive material may comprise a polymer or resin (e.g., plastic).
  • the directing device may be passive or active.
• the active directing device may operatively couple to the display construct and/or network.
  • the active directing device may comprise a circuitry.
  • the active directing device may comprise a remote controller.
  • the directing device may facilitate direction of operations related to media presented by the display construct.
• the directing device may facilitate (e.g., real time and/or in situ) interaction with the media presented by the display construct.
  • Examples of directing devices, control system and network can be found in International Patent Application Serial No. PCT/US20/53641 which is incorporated herein by reference in its entirety.
• Examples of digital twin, gesture control, controlling circuitry (e.g., VR devices), service devices, target devices, control system and network, can be found in International Patent Application Serial No. PCT/US21/27418, which is incorporated herein by reference in its entirety.
• Embodiments described herein relate to vision windows with a tandem (e.g., transparent) display construct.
  • the vision window is a tintable window such as an electrochromic window.
• the electrochromic window may comprise a solid state and/or inorganic electrochromic (EC) device.
• the vision window may be in the form of an integrated glass unit (IGU).
  • the IGU can tint (e.g., darken) a room in which it is disposed and/or provide a tinted (e.g., darker) background as compared to a non-tinted IGU.
• the tinted IGU can provide a background preferable (e.g., necessary) for acceptable (e.g., good) contrast on the (e.g., transparent) display construct. In another example, windows with (e.g., transparent) display constructs can replace televisions (abbreviated herein as "TVs") in commercial and residential applications.
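A rough worked example of why a tinted background helps display contrast, using a first-order additive luminance model (the luminance values and the model itself are illustrative assumptions): the scene behind the window contributes its luminance scaled by the window's Tvis, and deep tint shrinks that term.

```python
def display_contrast_ratio(display_lum_cdm2: float,
                           outdoor_lum_cdm2: float,
                           window_tvis: float) -> float:
    """Approximate contrast ratio of a transparent display over a window.

    Background seen through the window adds outdoor_lum * Tvis behind the
    emitted pixels; contrast = (display + background) / background. This
    simple additive model ignores reflections and display haze.
    """
    background = outdoor_lum_cdm2 * window_tvis
    return (display_lum_cdm2 + background) / background

# Hypothetical numbers: 300 cd/m^2 display over a 5000 cd/m^2 daylight scene.
print(display_contrast_ratio(300, 5000, 0.60))  # clear window: ~1.1:1
print(display_contrast_ratio(300, 5000, 0.01))  # deeply tinted: ~7:1
```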
• the (e.g., transparent) display construct and the tintable window can provide a visual privacy glass function, e.g., because the display can augment the privacy provided by the tintable window (e.g., EC window).
• the display may be integrated as a display construct with window panel(s) (e.g., frame(s)).
• Examples of display constructs can be found in International Patent Application Serial No. PCT/US20/53641, which is incorporated herein by reference in its entirety.
  • a display construct may include one or more glass panes.
• the display (e.g., display matrix) may comprise a light emitting diode (LED).
• the LED may comprise an organic material (e.g., an organic light emitting diode, abbreviated herein as "OLED").
• the OLED may comprise a transparent organic light emitting diode display (abbreviated herein as "TOLED"), which TOLED is at least partially transparent.
• the display may have at its fundamental length scale 2000, 3000, 4000, 5000, 6000, 7000, or 8000 pixels.
• the display may have at its fundamental length scale any number of pixels between the aforementioned number of pixels (e.g., from about 2000 pixels to about 4000 pixels, from about 4000 pixels to about 8000 pixels, or from about 2000 pixels to about 8000 pixels).
  • a fundamental length scale may comprise a diameter of a bounding circle, a length, a width, or a height.
  • the fundamental length scale may be abbreviated herein as “FLS.”
  • the display construct may comprise a high resolution display.
• the display construct may have a resolution of at least about 550, 576, 680, 720, 768, 1024, or 1080 pixels (e.g., in at least one dimension).
• the first number of pixels may designate the height of the display and the second number of pixels may designate the length of the display.
  • the display may be a high resolution display having a resolution of 1920 x 1080, 3840 x 2160, 4096 x 2160, or 7680 x 4320.
  • the display may be a standard definition display, enhanced definition display, high definition display, or an ultra-high definition display.
  • the display may be rectangular.
  • the image projected by the display matrix may be refreshed at a frequency (e.g., at a refresh rate) of at least about 20 Hz, 30 Hz, 60 Hz, 70 Hz, 75 Hz, 80 Hz, 100 Hz, or 120 Hertz (Hz).
  • the FLS of the display construct may be at least 20”, 25”, 30”, 35”, 40”, 45”, 50”, 55”, 60”, 65”, 80”, or 90 inches (“).
  • the FLS of the display construct can be of any value between the aforementioned values (e.g., from about 20” to about 55”, from about 55” to about 100”, or from about 20” to about 100”).
  • a window surface in a facility is utilized to display the various media using the glass display construct.
• the display may be utilized for (e.g., at least partial) viewing an environment external to the window (e.g., outdoor environment), e.g., when the display is not operating.
  • the display may be used to display media (e.g., as disclosed herein), to augment the external view with (e.g., optical) overlays, augmented reality, and/or lighting (e.g., the display may act as a light source).
  • the media may be used for entertainment and non-entertainment purposes.
  • the media may be used for work (e.g., data analysis, data processing, data manipulation, drafting, compilation, and/or video conferencing).
  • the media may be manipulated (e.g., at least in part by utilizing the display construct).
  • Utilizing the display construct can be direct or indirect. Indirect utilization of the media may be using an input device such as an electronic mouse, or a keyboard.
  • the input device may be communicatively (e.g., wired and/or wirelessly) coupled to the media.
• Direct utilization may be by using the display construct as a touch screen using a user (e.g., finger) or a contacting device (e.g., an electronic pen or stylus).
• Fig. 10A shows an example of a window 1002 framed in a window frame 1003, and a fastener structure 1004 comprising a first hinge 1005a and a second hinge 1005b, which hinges facilitate rotating display construct 1001 about the hinge axis, e.g., in a direction of arrow 1011.
  • the window may be a smart window such as an electrochromic (EC) window.
  • the window may be in the form of an EC IGU.
• mounted to the window frame is one or more display constructs (e.g., transparent display) (e.g., 1001) that is transparent at least in part. In one embodiment, the one or more display constructs (e.g., transparent display) comprises T-OLED technology, but it should be understood that the present invention should not be limited by or to such technology.
• the fastener structure (also referred to herein as a "fastener") comprises a bracket. In one embodiment, the fastener structure comprises an L-bracket.
• the L-bracket comprises a length that approximates or equals a length of a side of the window (e.g., and in the example shown in Fig. 10A, also the length of the fastener 1004).
• the fundamental length scale (e.g., length) of a window is at most about 60 feet ('), 50', 40', 30', 25', 20', 15', 10', 5', or 1'.
• the FLS of the window can be of any value between the aforementioned values (e.g., from 1' to 60', from 1' to 30', from 30' to 60', or from 10' to 40').
  • the fundamental length scale (e.g., length) of a window is at least about 60’, 80’, or 100’.
• Fig. 10B shows an example of various windows in a facade 1020 of a building, which facade comprises windows 1022, 1023, and 1021, and display constructs 1, 2, and 3.
• display construct 1 is transparent at least in part and is disposed over window 1023 (e.g., display construct 1 is superpositioned over window 1023) such that the entirety of window 1023 is covered by the display construct, and a user can view through the display construct 1 and the window 1023 the external environment (e.g., flowers, grass, and trees).
  • Display construct 1 is coupled to the window with a fastener that facilitates rotation of the display construct about an axis parallel to the window bottom horizontal edge, which rotation is in the direction of arrow 1027.
• display constructs 2 and 3 are transparent at least in part and are disposed over window 1021 such that the entirety of window 1021 is covered by the two display constructs, each covering (e.g., extending to) about half of the surface area of window 1021, and a user can view through the display constructs 2 and 3 and the window 1021 the external environment (e.g., flowers, grass, and trees).
  • Display construct 2 is coupled to the window 1021 with a fastener that facilitates rotation of the display construct about an axis parallel to the window left vertical edge, which rotation is in the direction of arrow 1026.
  • Display construct 3 is coupled to the window with a fastener that facilitates rotation of the display construct about an axis parallel to the window 1021 right vertical edge, which rotation is in the direction of arrow 1025.
  • the display construct comprises a hardened transparent material such as plastic or glass.
  • the glass may be in the form of one or more glass panes.
  • the display construct may include a display matrix (e.g., an array of lights) disposed between two glass panes.
  • the array of lights may include an array of colored lights. For example, an array of red, green, and blue colored lights. For example, an array of cyan, magenta, and yellow colored lights.
  • the array of lights may include light colors used in electronic screen display.
  • the array of lights may comprise an array of LEDs (e.g., OLEDs, e.g., TOLEDs).
  • the matrix display (e.g., array of lights) may be at least partially transparent (e.g., to an average human eye).
• the transparent OLED may facilitate transmission of a substantial portion (e.g., greater than about 30%, 40%, 50%, 60%, 80%, 90% or 95%) of the intensity and/or wavelengths that an average human eye senses.
  • the matrix display may form minimal disturbance to a user looking through the array.
  • the array of lights may form minimal disturbance to a user looking through a window on which the array is disposed.
  • the display matrix (e.g., array of lights) may be maximally transparent. At least one glass pane of the display construct may be of a regular glass thickness.
• the regular glass may have a thickness of at least about 1 millimeter (mm), 2mm, 3mm, 4mm, 5mm, or 6mm.
• the regular glass may have a thickness of a value between any of the aforementioned values (e.g., from 1mm to 6mm, from 1mm to 3mm, from 3mm to about 4mm, or from 4mm to 6mm).
  • At least one glass pane of the display construct may be of a thin glass thickness.
• the thin glass may have a thickness of at most about 0.4 millimeters (mm), 0.5mm, 0.6mm, 0.7mm, 0.8mm, or 0.9mm.
• the thin glass may have a thickness of a value between any of the aforementioned values (e.g., from 0.4mm to 0.9mm, from 0.4mm to 0.7mm, or from 0.5mm to 0.9mm).
  • the glass of the display construct may be at least transmissive (e.g., in the visible spectrum). For example, the glass may be at least about 80%, 85%, 90%, 95%, or 99% transmissive.
  • the glass may have a transmissivity percentage value between any of the aforementioned percentages (e.g., from about 80% to about 99%).
  • the display construct may comprise one or more panes (e.g., glass panes).
  • the display construct may comprise a plurality (e.g., two) of panes.
  • the glass panes may have (e.g., substantially) the same thickness, or different thickness.
  • the front facing pane may be thicker than the back facing pane.
  • the back facing pane may be thicker than the front facing pane.
• Front may be in a direction of a prospective viewer (e.g., in front of display construct 1001, looking at display construct 1001).
  • Back may be in the direction of a (e.g., tintable) window (e.g., 1002).
  • One glass may be thicker relative to another glass.
• the thicker glass may be at least about 1.25*, 1.5*, 2*, 2.5*, 3*, 3.5*, or 4* thicker than the thinner glass.
  • the symbol “*” designates the mathematical operation of “times.”
• the transmissivity of the display construct (including the one or more panes and the display matrix (e.g., light-array or LCD)) may be of at least about 20%, 30%, 35%, 40%, 45%, 50%, 60%, 70%, 80%, or 90%.
  • the display construct may have a transmissivity percentage value between any of the aforementioned percentages (e.g., from about 20% to about 90%, from about 20% to about 50%, from about 20% to about 40%, from about 30% to about 40%, from about 40% to about 80%, or from about 50% to about 90%).
• a higher transmissivity percentage refers to a higher intensity and/or broader spectrum of light that passes through a material (e.g., glass).
  • the transmissivity may be of visible light.
  • the transmissivity may be measured as visible transmittance (abbreviated herein as “Tvis”) referring to the amount of light in the visible portion of the spectrum that passes through a material.
  • the transmissivity may be relative to the intensity of incoming light.
  • the display construct may transmit at least about 80%, 85%, 90%, 95%, or 99% of the visible spectrum of light (e.g., wavelength spectrum) therethrough.
  • the display construct may transmit a percentage value between any of the aforementioned percentages (e.g., from about 80% to about 99%).
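Treating each pane and the display matrix as independent attenuators (a first-order approximation that ignores inter-reflections), the stack's overall visible transmittance is roughly the product of the per-layer values. The layer values in this sketch are hypothetical, chosen only to land inside the ranges given above.

```python
def combined_tvis(layer_tvis: list) -> float:
    """First-order visible transmittance (Tvis) of a stack of layers.

    Treats each layer as an independent attenuator (ignores inter-reflections),
    so the stack Tvis is the product of the per-layer values.
    """
    result = 1.0
    for t in layer_tvis:
        result *= t
    return result

# Hypothetical stack: 90%-transmissive front pane, 45%-transmissive TOLED
# matrix, 90%-transmissive back pane -> roughly 36% overall.
print(f"{combined_tvis([0.90, 0.45, 0.90]):.2%}")  # ~36.45%
```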
  • a liquid crystal display is utilized.
• Fig. 11 shows a schematic example of a display construct assembly (e.g., laminate) that includes a thicker glass pane 1105, a first adhesive layer 1104, a display matrix 1103, a second adhesive layer 1102, and a thinner glass pane 1101, which matrix is connected via wiring 1111 to a circuitry 1112 that controls at least an aspect of the display construct, which display construct is coupled to a fastener 1113.
  • gesture command is used for controlling a mobile circuitry or other interface that controls a video conference session, the real or virtual environments, and/or auxiliary content.
  • the mobile circuitry may be communicatively coupled to the network that is communicatively coupled to a digital twin of the enclosure in which the target is disposed.
  • a gesture recognition module may be employed for analyzing the mobile circuitry and/or sensor (e.g., camera) data.
• a user may be positioned within a field of view of a camera so that movements of the user can be captured which are carried out according to a requested control action to be taken in connection with controllable targets (e.g., devices) such as tintable windows.
  • movements of the user can be captured by the mobile device manipulated by the user (e.g., moved by the user) that are carried out according to a requested control action to be taken in connection with controllable targets (e.g., devices) such as tintable windows.
  • Examples of digital twin, gesture control, controlling circuitry (e.g., VR devices) service devices, target devices, control system and network can be found in International Patent Application Serial No. PCT/US21/27418, which is incorporated herein by reference in its entirety.
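One plausible shape for the gesture-recognition module's output handling is a lookup from a recognized gesture to a command for the pointed-at target. The gesture vocabulary, target-kind prefixes, and command names below are invented for illustration; a real module would supply `gesture`, and the digital twin would resolve `pointed_target` from the user's pointing direction.

```python
# Hypothetical gesture vocabulary: gesture -> (expected target kind, command).
GESTURE_COMMANDS = {
    "swipe_up": ("tintable_window", "tint_decrease"),
    "swipe_down": ("tintable_window", "tint_increase"),
    "pinch": ("media_display", "zoom_auxiliary_content"),
}

def interpret_gesture(gesture: str, pointed_target: str):
    """Map a recognized gesture to a control action for the pointed-at target.

    Returns a command dict, or None if the gesture is unknown or does not
    apply to the kind of target the user is pointing at.
    """
    expected_kind, command = GESTURE_COMMANDS.get(gesture, (None, None))
    if command is None or not pointed_target.startswith(expected_kind):
        return None
    return {"target": pointed_target, "command": command}

assert interpret_gesture("swipe_down", "tintable_window:1200a") == {
    "target": "tintable_window:1200a", "command": "tint_increase"}
assert interpret_gesture("pinch", "tintable_window:1200a") is None
```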
• Fig. 12 shows an example of a user interacting with a device 1205 for controlling a status of a target, which status is the optical state of electrochromic windows 1200a-1200d.
  • the device 1205 is a wall device as described above.
  • the wall device 1205 is or includes a smart device such as an electronic tablet or similar device.
• Device 1205 may be a device configured to control the electrochromic windows 1200a-1200d, including but not limited to a smartphone, tablet, laptop, PC, etc.
• the device 1205 may run an application/program that is configured to control the electrochromic windows. In some embodiments, the device 1205 communicates with an access point 1210, for example through a wired connection or a wireless connection (e.g., WiFi, Bluetooth, Bluetooth low energy, ZigBee, WiMax, etc.).
  • the wireless connection can allow at least one apparatus (e.g., target apparatus) to connect to the network, internet, and/or communicate with one another wirelessly within an area (e.g., within a range).
  • the access point 1210 may be a networking hardware device that allows a wireless technology (e.g., Wi-Fi) compliant device to connect to a wired network.
  • the device 1205 may communicate with a controller (e.g., of a control system such as a window controller, network controller, and/or master controller) through a connection scheme.
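As an illustration of a device-to-controller connection scheme, the sketch below sends a JSON tint command over TCP. The message schema, host, port, and command names are assumptions made for this example; a real control system would define its own protocol.

```python
import json
import socket

def send_tint_command(controller_host: str, controller_port: int,
                      window_id: str, tint_level: int) -> None:
    """Send a JSON tint command to a window controller over TCP.

    The message schema ({"target", "command", "value"}) is invented for
    illustration only.
    """
    message = {"target": window_id, "command": "set_tint", "value": tint_level}
    with socket.create_connection((controller_host, controller_port), timeout=5) as sock:
        sock.sendall(json.dumps(message).encode("utf-8") + b"\n")

# e.g., send_tint_command("192.0.2.10", 5050, "window-1200a", 3)
```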
  • Embodiments of the invention may be scalable to adapt an immersive experience according to a number of participants in a video conference.
• Media display systems and associated furnishings can be tailored to varying collaboration modalities to accommodate group sizes and/or different types of meetings. In an office setting, a plurality of conferencing units or stations having a variety of adaptations for differently sized groups of participants may be deployed in a space-efficient manner.
  • an individual portal is constructed with room for a single local participant.
  • An individual portal may be free-standing in an open space of a room for quick communications and/or to facilitate a few local participants sharing a conference with remote parties.
  • the individual portal may be constructed with isolation walls or panels around at least one side of the media display to provide audio and video privacy and/or to reduce the possibility of spreading contagions within an office.
• small group nooks (e.g., pods) may be provided, e.g., for small groups of local participants.
  • More than one media display may be deployed in a pod to facilitate the participation of multiple remote participants (e.g., each being shown life-size on a respective media display).
  • a modality is provided in which a greater number of transparent media display constructs are deployed for large group zones or huddle spaces.
  • transparent media displays can be incorporated into freestanding panels in building interiors or into a supportive structure such as an architectural (e.g., externally bordering) glass.
  • conferencing stations can be adapted for particular functions.
  • a layout of media displays and/or the associated furnishing can be configured for supporting reception services or for acting as distribution (e.g., postal, inventory, sales, merchandise) hubs.
• Fig. 13 shows an example floorplan 1300 of an office setting (e.g., an office suite).
• Floorplan 1300 includes various combinations and arrangements of digital collaboration units (e.g., freestanding, individual portals, small group pods, and/or large open areas fitted with an array of media displays).
  • Floorplan 1300 includes a pair of freestanding, individual portals 1310 (other examples illustrated in Figs. 14-16 and 26). Freestanding, individual portals 1310 may stand alone (e.g., only one digital collaboration unit) and/or multiple (two or more) portals may be disposed side-by-side (e.g., two portals illustrated in Fig. 13).
  • Fig. 13 shows examples of small group pods combined (e.g., sharing some walls) into space-efficient groupings 1320 and 1330 (other example illustrated in Fig. 24).
• Small group pods 1320, 1330 may have one, two, three, or more polygonal-shaped (e.g., hexagon-shaped) digital collaboration units with connecting walls between each adjacent unit. At least a portion of the (e.g., entire) connecting walls may be opaque. At least a portion of the (e.g., entire) connecting walls may be transparent.
• the connecting walls may be variable between opaque and at least partially translucent (e.g., a tintable window).
  • a (large) open area such as 1340 may be fitted with an array of media displays to provide immersive conferencing between groups of local and remote participants (example illustrated in Fig. 17).
• Fig. 14 shows a portion of a floorplan section 1400 in greater detail with side-by-side freestanding, individual portals 1410 and 1420 having transparent supportive structures such as 1462.
• Portal 1410 includes a physical work surface (e.g., a physical ledge) 1411 on which a mobile object can be placed (e.g., laptop or cellphone).
• the physical work surface 1411 may be (i) movable relative to the transparent supportive structures 1462 (e.g., in a direction indicated by arrows 1470) and/or (ii) movable relative to a gravitational center G.
  • a position of the physical work surface 1411 may be automatically adjusted based on a height and/or position of a local user 1430, which may be based on bodily features of local user 1430 and/or historic preferences of local user 1430.
  • a position of the physical work surface 1411 may be manually adjustable by local user 1430.
• a position of the physical work surface 1411 may be automatically adjustable, e.g., with a manual override.
  • the mobile platform can include a wireless charger.
  • a local user 1430 utilizes a transparent media display 1440 in portal 1410 to view a media stream including a remote user 1450 and auxiliary content 1453.
• the transparent media display 1440 may be movable relative to the supportive structures 1462 and/or framing 1421, for example, as indicated by arrows 1476, and/or movable relative to a gravitational center G.
  • a position of the transparent media display 1440 may be automatically adjusted based at least in part on a height and/or position of local user 1430, which may be based at least in part on bodily features (e.g., a nose, eyebrows, eyes, pupils, a head, a chin, lips, a nose bridge, or ears) of local user 1430 and/or historic preferences of local user 1430.
  • a position of transparent media display 1440 may be manually adjustable by a local user 1430.
• a position of the transparent media display 1440 may be automatically adjustable, e.g., with a manual override.
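Automatic position adjustment based on a user's height can be as simple as mapping a detected eye height to a clamped actuator setpoint. The offset and travel limits below are illustrative defaults, not values specified in the text.

```python
def display_height_setpoint(eye_height_m: float,
                            preferred_offset_m: float = -0.05,
                            travel_range_m: tuple = (0.9, 1.6)) -> float:
    """Map a detected eye height to a display-center setpoint (meters).

    Centers the display slightly below eye level (hypothetical offset), then
    clamps to the mechanism's travel range; a manual override would simply
    replace the returned setpoint.
    """
    lo, hi = travel_range_m
    return min(hi, max(lo, eye_height_m + preferred_offset_m))

# A seated user with eyes detected at ~1.20 m -> display centered at 1.15 m.
assert display_height_setpoint(1.20) == 1.15
```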
  • Cabling coupled to the transparent media display 1440 may move with the display 1440.
  • a transparent media display 1440 and physical work surface 1411 may be secured to each other to move in unison (e.g., in a coupled and/or concerted movement).
  • a transparent media display 1440 and physical work surface 1411 may each engage a mechanism that moves them in unison.
• a transparent media display 1440 and/or physical work surface 1411 may each be moveable without movement of the other (e.g., in a non-coupled, non-concerted movement, and/or individualized movement).
• the remote background is redacted from an image stream of the remote participant and is not projected on media display 1440, thus facilitating viewing through the redacted area 1452 of the remote media stream.
• Media display 1440 is a transparent media display that projects, in addition to the redacted remote media stream, icons 1451, 1454, and 1413 that facilitate control of various aspects associated with the digital communication, and virtual overlay 1412.
• Icon 1451 facilitates control of the video camera capturing user 1430 (e.g., adjustment of the camera's focus, height, and/or its usage).
• icons 1413 can facilitate various aspects of the communication such as capturing a screenshot, adjusting volume, and commenting.
• icons 1454 can facilitate annotation and/or other manipulation of items presented during the digital interaction such as documents 1453.
• a media display 1460 of portal 1420 can be rolled into the video conferencing session (e.g., as a screen extension and/or a second screen), e.g., so that the remote user and the auxiliary content can be shown at a large (e.g., actual) size simultaneously.
• the second media display 1460 includes auxiliary content 1483, notes, and annotations (e.g., by the local and/or remote user).
• Media display 1460 excludes any background, and thus the local background 1461 can be viewed through transparent display 1460.
• Transparent displays 1440 and 1460 are bordered by a line of light (e.g., fluorescent or LED light) such as 1465, and by framing such as 1421 that holds the transparent displays in conjunction with the transparent supportive structure such as 1462.
• the line of light 1465 may be (i) movable relative to the transparent supportive structures 1462 (e.g., in a direction indicated by arrows 1472), (ii) movable relative to the framing 1421, and/or (iii) movable relative to a gravitational center G.
• a position of the line of light 1465 may be automatically adjusted based at least in part on a height and/or position of a local user 1430, which may be based at least in part on bodily features of local user 1430 and/or historic preferences of local user 1430.
• a position of the line of light 1465 may be manually adjustable, e.g., by local user 1430.
• a position of the line of light 1465 may be automatically adjustable, e.g., with a manual override.
• a transparent media display 1440 and line of light 1465 may be secured to each other to move in unison (e.g., in a concerted, coupled, and/or coordinated movement).
• a transparent media display 1440 and line of light 1465 may each engage a mechanism (e.g., comprising an actuator) that moves them in unison.
• a transparent media display 1440, physical work surface 1411, and line of light 1465 may be secured to each other to move in unison.
• a transparent media display 1440, physical work surface 1411, and line of light 1465 may each engage a mechanism that moves them in unison.
• a transparent media display 1440 and/or physical work surface 1411 and/or line of light 1465 may each be moveable without movement of the others (e.g., in a non-concerted, non-coupled, non-coordinated, and/or individualized movement).
• Portals 1410 and 1420 also include panel caps such as 1422, through which wiring can pass and/or in which local controllers can reside. The wires can also run through the panel framing such as 1421.
  • the media display 1460 is pivotable about a hinge.
  • the hinge has a vertical pivot axis (e.g., about arrow 1474), which hinge is secured to panel framing 1421.
• a hinge securing a media display to the framing may have a pivot axis that is vertical, horizontal, or at any other angle.
  • the hinge may be configured such that the media display may pivot from engagement with framing of the digital collaboration unit (such as 1421) (i) towards a direction of an expected user, or (ii) away from an expected position of the user.
• the local controller (e.g., of the media display(s)) may reside in the panel caps and/or in the portal framing.
• Examples of panel caps, controllers, wiring, and wiring guides can be found in International Patent Application Serial No. PCT/US20/53641, which is incorporated herein by reference in its entirety.
• Examples of wireless chargers, controllers, mobile circuitry, networks, framing systems, and devices (e.g., display constructs and tintable windows) can be found in the patent applications incorporated herein by reference.
  • Fig. 15 shows an example of a personal portal 1500 with enhanced privacy.
• a transparent media display 1510 and a video image sensor 1520 (e.g., behind display 1510) are arranged for utilization by a user within a single seat 1530.
  • the media display 1510 may be movable relative to a body and/or framing of the personal portal 1500 (e.g., in directions of arrows 1592) and/or movable relative to a gravitational center G. Cabling coupled to the media display 1510 may move with the media display 1510.
  • a position of the media display 1510 may be automatically adjusted based at least in part on a height and/or position of a local user, which may be based at least in part on bodily features (e.g., a nose, eyebrows, eyes, pupils, a head, a chin, lips, a nose bridge, or ears) of a local user and/or historic preferences of a local user.
  • a position of the media display 1510 may be manually adjustable by a local user.
  • a position of the transparent media display 1510 may be automatically adjustable, e.g., with a manual override. Cabling coupled to the transparent media display 1510 may move with the display 1510.
• Privacy panels 1540 and 1550 may comprise sound dampening materials, e.g., to provide a quiet space for conducting a conference and/or to limit propagation of sound outside portal 1500. Privacy panels may also comprise one or more door(s) (not shown) between privacy panel 1540 and privacy panel 1550, which door(s) may comprise sound dampening materials. The door(s) may pivot, accordion, and/or swivel to allow for ingress and egress of the personal portal 1500. For enhancing an immersive experience, a physical work surface (e.g., a table ledge) 1560 in front of media display 1510 can be duplicated at the remote location(s) (e.g., when collaborating with users having a similarly constructed portal).
• the physical work surface 1560 may be movable relative to a body and/or framing of the personal portal 1500 (e.g., in directions of arrows 1590) and/or movable relative to a gravitational center G.
  • a position of the physical work surface 1560 may be automatically adjusted based at least in part on a height and/or position of a local user, which may be based at least in part on bodily features of a local user and/or historic preferences of a local user.
  • a position of the physical work surface 1560 may be manually adjustable by a local user.
  • a position of the physical work surface 1560 may be automatically adjustable, e.g., with a manual override.
  • a transparent media display 1510 and physical work surface 1560 may be secured to each other to move in unison.
  • a transparent media display 1510 and physical work surface 1560 may each engage a mechanism that moves them in unison (e.g., in concert, together, and/or in tandem).
• a transparent media display 1510 and/or physical work surface 1560 may be moveable without movement of the other (e.g., separately, and/or in a non-concerted action).
  • the physical work surface 1560 may comprise fixed accessories (e.g., a wireless charging station) 1593 coupled thereto.
  • the transparent media display 1510 may comprise fixed accessories (a wireless charging station, not shown) coupled thereto.
  • the fixed accessories may be embedded in the physical work surface, panei(s) and/or framing.
  • the fixed accessories coupled to the physical work surface 1560 and/or the transparent media display 1510 may be movable (e.g., up-down) relative to a body and/or framing of the personal portal 1500 in unison with, or separately from, the physical work surface 1560 and/or the transparent media display 1510. Cabling coupled to the fixed accessories may move with the respective fixed accessory.
• Media experience adjusters 1570 (e.g., lighting and/or speaker) may be provided in the personal portal.
• the media experience adjusters 1570 may be (i) movable relative to the privacy panels 1540 (e.g., in a direction indicated by arrows 1594), (ii) movable relative to the body of the personal portal, (iii) movable relative to framing of the personal portal 1500, and/or (iv) movable relative to a gravitational center.
  • the media experience adjusters 1570 may be stationary.
  • a position of the lighting 1570 may be automatically adjusted based at least in part on a height and/or position of a local user, which may be based at least in part on bodily features of a local user and/or historic preferences of a local user.
  • a position of the lighting 1570 may be manually adjustable by a local user.
  • a position of the lighting 1570 may be automatically adjustable, e.g., with a manual override.
  • a transparent media display 1510 and lighting 1570 may be secured to each other to move in unison.
  • a transparent media display 1510 and lighting 1570 may each engage a mechanism that moves them in unison.
  • a transparent media display 1510, physical work surface 1560 and lighting 1570 may be secured to each other to move in unison.
  • a transparent media display 1510, physical work surface 1560 and lighting 1570 may each engage a mechanism that moves them in unison.
  • a transparent media display 1510 and/or physical work surface 1560 and/or lighting 1570 may each be moveable without movement of the others.
  • a loudspeaker 1580 may provide sound output, and/or a personal headphone can be provided with audio content (e.g., using a Bluetooth connection).
  • Camera 1520 may have a fixed focus (e.g., set to avoid image degradation from viewing through the pixels of media display 1510), or may have an adjustable focus.
  • Camera 1520 may be horizontally and/or vertically adjustable (e.g., by the user).
  • Camera 1520 may have a wide field-of-view to capture table ledge 1560.
• the personal portal may include wiring of the network (e.g., power and/or communication wiring).
  • Personal portal 1500 may be operatively coupled to the network (e.g., external network and/or local network of the facility).
• At least one of the components of the digital communication unit may be movable. In some embodiments, at least one of the components of the digital communication unit may be stationary (e.g., not movable).
• the at least one component of the digital communication unit may comprise (a) a media display (e.g., display construct), (b) a physical work surface (e.g., physical ledge), (c) lighting, (d) a speaker, (e) a sensor (e.g., a video image sensor such as a camera), (f) a fixed accessory (e.g., a charging station such as a wireless charging station), or (g) another media experience adjuster.
• Fig. 16 shows an example of a group pod 1600 with space for accommodating local users such as 1610 and 1620.
  • Pod 1600 includes at least two transparent media displays such as 1630 and 1640 for displaying media streams from respective remote users (e.g., at different remote locations).
• each participant having a transparent display, such as participants 1610 and 1620, may experience all participants as though they shared a local environment, as the remote background of remote participants is redacted, thus allowing the local environment to show through the redacted remote background portion of the media stream, as in 1634.
  • the media display 1630 displays a remote participant 1635, local camera controls 1632, lighting panel 1633, dropdown and/or informative menu 1636 that includes chat, participants data, and timing information.
  • Media display 1630 also displays ledge perspective overlay 1611 and icons 1612 that facilitate voice and streaming control.
• Media display 1640 displays similar features.
  • Group pod 1600 includes a physical ledge 1621 on which objects such as public items (e.g., plant 1637), and personal items (e.g., cup 1631) can be placed.
• Group pod 1600 includes transparent supportive panels such as 1625. In other embodiments, at least one of the transparent supportive panels can be substituted by a non-transparent (e.g., opaque) supportive panel.
• the supportive panel can comprise gypsum, cardboard, cork, plaster, a polymer (e.g., plastic), a ceramic, a composite material, a metal (e.g., elemental metal and/or metal alloy), or glass.
• the supportive panel can comprise a glossy or matte exposed surface.
  • the exposed surface of at least a portion of the supportive structure can be planar or rough.
  • At least a portion of the exposed surface of the supportive structure may be dispersive, transmissive, or reflective.
  • the physical ledge may comprise a (e.g., wireless) charging station.
  • Group pod 1600 may comprise wiring (e.g., in its walls, framing, and/or framing caps).
  • Fig. 17 shows an example of a large group huddle space 1700 that may achieve an immersive experience for local and remote participants by employing transparent media displays so that (1) remote users and/or remote auxiliary content are shown as cutouts that integrate with a local environment in which their remote background is redacted, and/or (2) camera(s) imaging the local and/or remote participants can obtain media streams in which the imaged participants are directing their gaze toward the camera.
  • Each image from a remote location may be shown on an individual transparent media display.
• An image from a remote location may be shown on multiple displays as duplicate images and/or on all transparent media displays of a display matrix as a single image (e.g., using a video wall functionality).
  • the remote users may be shown as cutouts (e.g., without their native background), e.g., regardless of when the remote user is being shown on an individual transparent media display or being shown on a display matrix using the video wall functionality (e.g., being shown on multiple displays of a display matrix as a single collective image).
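The video wall functionality (one collective image across a display matrix) amounts to tiling each frame across the matrix. A minimal sketch follows, assuming the frame divides evenly by the matrix dimensions and ignoring bezel compensation; the function name is invented.

```python
import numpy as np

def tile_for_video_wall(frame: np.ndarray, rows: int, cols: int) -> list:
    """Split one frame into per-display tiles for a rows x cols display matrix.

    Returns tiles[r][c], the sub-image for the display at row r, column c.
    A production pipeline would also handle bezel compensation and scaling.
    """
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols
    return [[frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for c in range(cols)] for r in range(rows)]

# A 2x2 matrix showing one 480x640 cut-out frame as a single collective image.
tiles = tile_for_video_wall(np.zeros((480, 640, 3), dtype=np.uint8), 2, 2)
assert tiles[0][0].shape == (240, 320, 3)
```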
• a transparent media display is combined with a tintable window (e.g., an electrochromic window). In some embodiments, a dynamic state of an electrochromic window is controlled by altering a voltage signal to an electrochromic device (ECD) used to provide tinting or coloring.
• An electrochromic window can be manufactured, configured, or otherwise provided as an insulated glass unit (IGU). IGUs may serve as the fundamental constructs for holding electrochromic panes (also referred to as "lites") when provided for installation in a building. An IGU lite or pane may be a single substrate or a multi-substrate construct, such as a laminate of two substrates.
• IGUs, especially those having double- or triple-pane configurations, can provide a number of advantages over single-pane configurations; for example, multi-pane configurations can provide enhanced thermal insulation, noise insulation, environmental protection, and/or durability when compared with single-pane configurations.
• a multi-pane configuration can also provide increased protection for an ECD, for example, because the electrochromic films, as well as associated layers and conductive interconnects, can be formed on an interior surface of the multi-pane IGU and be protected by an inert gas fill in the interior volume of the IGU.
• Certain disclosed embodiments provide a network infrastructure in the enclosure (e.g., a facility such as a building).
  • the network infrastructure is available for various purposes such as for providing communication and/or power services.
  • the communication services may comprise high bandwidth (e.g., wireless and/or wired) communications services.
  • the communication services can be to occupants of a facility and/or users outside the facility (e.g., building).
  • the network infrastructure may work in concert with, or as a partial replacement of, the infrastructure of one or more cellular carriers.
  • the network infrastructure can be provided in a facility that includes electrically switchable windows. Examples of components of the network infrastructure include a high speed backhaul.
• the network infrastructure may include at least one cable, switch, physical antenna, transceiver, sensor, transmitter, receiver, radio, processor, and/or controller (that may comprise a processor).
  • the network infrastructure may be operatively coupled to, and/or include, a wireless network.
  • the network infrastructure may comprise wiring.
• One or more sensors can be deployed (e.g., installed) in an environment as part of installing the network and/or after installing the network.
  • the network may be a local network.
• the network may comprise a cable configured to transmit power and communication in a single cable.
  • the communication can be one or more types of communication.
  • the communication can comprise cellular communication abiding by at least a second generation (2G), third generation (3G), fourth generation (4G) or fifth generation (5G) cellular communication protocol.
  • the communication may comprise media communication facilitating stills, music, or moving picture streams (e.g., movies or videos).
  • the communication may comprise data communication (e.g., sensor data).
  • the communication may comprise control communication, e.g., to control the one or more nodes operatively coupled to the networks.
  • the network may comprise a first (e.g., cabling) network installed in the facility.
• the network may comprise a (e.g., cabling) network installed in an envelope of the facility (e.g., in an envelope of an enclosure of the facility, such as an envelope of a building included in the facility).
• a network infrastructure supports a control system for one or more windows such as tintable (e.g., electrochromic) windows.
  • the control system may comprise one or more controllers operatively coupled (e.g., directly or indirectly) to one or more windows.
• tintable windows (also referred to herein as “optically switchable windows” or “smart windows”)
• switchable optical devices comprising a liquid crystal device, an electrochromic device, a suspended particle device (SPD), a NanoChromics display (NCD), or an organic electroluminescent display (OELD).
  • the display element may be attached to a part of a transparent body (such as the windows).
• the tintable window may be disposed in a (non-transitory) facility such as a building, and/or in a transitory facility (e.g., vehicle) such as a car, RV, bus, train, airplane, helicopter, ship, or boat.
• a tintable window exhibits a (e.g., controllable and/or reversible) change in at least one optical property of the window, e.g., when a stimulus is applied.
  • the change may be a continuous change.
• a change may be to discrete tint levels (e.g., to at least about 2, 4, 8, 16, or 32 tint levels).
  • the optical property may comprise hue, or transmissivity.
  • the hue may comprise color.
  • the transmissivity may be of one or more wavelengths.
  • the wavelengths may comprise ultraviolet, visible, or infrared wavelengths.
  • the stimulus can include an optical, electrical and/or magnetic stimulus.
  • the stimulus can include an applied voltage and/or current.
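For illustration, here is a minimal sketch of driving discrete tint levels with an applied voltage. The level-to-volts table, class, and method names are hypothetical assumptions; real ECD drive waveforms are considerably more involved than a single DC setpoint:

```python
# Hypothetical tint-level table; actual drive parameters depend on the ECD.
TINT_LEVEL_VOLTS = {0: 0.0, 1: 0.6, 2: 1.2, 3: 1.8}  # tint level -> volts

class WindowDriver:
    """Minimal stand-in for hardware that applies a potential to an EC stack."""
    def apply_volts(self, volts: float) -> None:
        print(f"applying {volts:.1f} V across the EC stack")

def set_tint(driver: WindowDriver, level: int) -> None:
    """Drive the window to one of the discrete tint levels."""
    if level not in TINT_LEVEL_VOLTS:
        raise ValueError(f"unsupported tint level: {level}")
    driver.apply_volts(TINT_LEVEL_VOLTS[level])

set_tint(WindowDriver(), 2)  # e.g., request a mid-range tint
```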
• One or more tintable windows can be used to control lighting and/or glare conditions, e.g., by regulating the transmission of solar energy propagating through them.
• One or more tintable windows can be used to control a temperature within a building, e.g., by regulating the transmission of solar energy propagating through the window.
  • Control of the solar energy may control heat load imposed on the interior of the facility (e.g., building).
  • the control may be manual and/or automatic.
  • the control may be used for maintaining one or more requested (e.g., environmental) conditions, e.g., occupant comfort.
• the control may include reducing energy consumption of heating, ventilation, air conditioning, and/or lighting systems. At least two of heating, ventilation, and air conditioning may be induced by separate systems.
• tintable windows may be responsive to (e.g., and communicatively coupled to) one or more environmental sensors and/or user control. Tintable windows may comprise (e.g., may be) electrochromic windows. The windows may be located anywhere from the interior to the exterior of a structure (e.g., facility, e.g., building). However, this need not be the case.
• Tintable windows may operate using liquid crystal devices, suspended particle devices, microelectromechanical systems (MEMS) devices (such as microshutters), or any technology known now, or later developed, that is configured to control light transmission through a window.
• one or more tintable windows can be located within the interior of a building, e.g., between a conference room and a hallway. In some cases, one or more tintable windows can be used in automobiles, trains, aircraft, and other vehicles, e.g., in lieu of a passive and/or non-tinting window.
• the tintable window comprises an electrochromic device (referred to herein as an “EC device”, abbreviated herein as “ECD” or “EC”).
  • An EC device may comprise at least one coating that includes at least one layer.
• the at least one layer can comprise an electrochromic material.
• the electrochromic material exhibits a change from one optical state to another, e.g., when an electric potential is applied across the EC device.
• the transition of the electrochromic layer from one optical state to another optical state can be caused, e.g., by reversible, semi-reversible, or irreversible ion insertion into the electrochromic material (e.g., by way of intercalation) and a corresponding injection of charge-balancing electrons.
• the transition of the electrochromic layer from one optical state to another optical state can be caused, e.g., by a reversible ion insertion into the electrochromic material (e.g., by way of intercalation) and a corresponding injection of charge-balancing electrons.
  • Reversible may be for the expected lifetime of the ECD.
  • Semi-reversible refers to a measurable (e.g., noticeable) degradation in the reversibility of the tint of the window over one or more tinting cycles.
• a fraction of the ions responsible for the optical transition is irreversibly bound up in the electrochromic material (e.g., and thus the induced (altered) tint state of the window is not reversible to its original tinting state).
• at least some (e.g., all) of the irreversibly bound ions can be used to compensate for “blind charge” in the material (e.g., ECD).
• suitable ions include cations.
• the cations may include lithium ions (Li+) and/or hydrogen ions (H+) (i.e., protons).
• other ions can be suitable. Intercalation of the cations may be into an (e.g., metal) oxide.
  • a change in the intercalation state of the ions (e.g., cations) into the oxide may induce a visible change in a tint (e.g., color) of the oxide.
  • the oxide may transition from a colorless to a colored state.
  • intercalation of lithium ions into tungsten oxide may cause the tungsten oxide to change from a transparent state to a colored (e.g., blue) state.
• EC device coatings as described herein are located within the viewable portion of the tintable window such that the tinting of the EC device coating can be used to control the optical state of the tintable window.
• Fig. 18 shows an example of a schematic cross-section of an electrochromic device 1800 in accordance with some embodiments.
• the EC device coating is attached to a substrate 1802 and includes a transparent conductive layer (TCL) 1804, an electrochromic layer (EC) 1806 (sometimes also referred to as a cathodically coloring layer or a cathodically tinting layer), an ion conducting layer or region (IC) 1808, a counter electrode layer (CE) 1810 (sometimes also referred to as an anodically coloring layer or an anodically tinting layer), and a second TCL 1814.
• Elements 1804, 1806, 1808, 1810, and 1814 are collectively referred to as an electrochromic stack 1820.
• a voltage source 1816 operable to apply an electric potential across the electrochromic stack 1820 effects the transition of the electrochromic coating from, e.g., a clear state to a tinted state.
• the order of layers is reversed with respect to the substrate. That is, the layers are in the following order: substrate, TCL, counter electrode layer, ion conducting layer, electrochromic material layer, TCL.
  • the ion conductor region may form from a portion of the EC layer (e.g., 1806) and/or from a portion of the CE layer (e.g., 1810).
  • the ion conductor region may form where the EC layer and the CE layer meet, for example through heating and/or other processing steps.
• an EC device coating may include one or more additional layers, such as one or more passive layers. Passive layers can be used to improve certain optical properties, to provide moisture resistance, and/or to provide scratch resistance. These and/or other passive layers can serve to hermetically seal the EC stack 1820.
• Various layers, including transparent conducting layers (such as 1804 and 1814), can be treated with anti-reflective and/or protective layers (e.g., oxide and/or nitride layers).
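As a compact restatement of the layer order described above (a sketch only; the element numbers are those of Fig. 18, and the class and method names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ECStack:
    """Layer order of an EC device coating, outermost substrate first."""
    layers: list = field(default_factory=lambda: [
        ("substrate", 1802),
        ("transparent conductive layer (TCL)", 1804),
        ("electrochromic layer (EC)", 1806),
        ("ion conducting layer (IC)", 1808),
        ("counter electrode layer (CE)", 1810),
        ("transparent conductive layer (TCL)", 1814),
    ])

    def reversed_order(self) -> list:
        """Variant noted above: CE and EC layer positions exchanged about the IC layer."""
        order = self.layers[:]
        order[2], order[4] = order[4], order[2]
        return order
```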
  • an IGU includes two (or more) substantially transparent substrates.
• the IGU may include two panes of glass. At least one substrate of the IGU can include an electrochromic device disposed thereon. The one or more panes of the IGU may have a separator disposed between them.
  • An IGU can be a hermetically sealed construct, e.g., having an interior region that is isolated from the ambient environment.
  • a “window assembly” may include an IGU.
  • a “window assembly” may include a (e.g., stand-alone) laminate.
  • a “window assembly” may include one or more electrical leads, e.g., for connecting the IGUs and/or laminates.
  • the electrical leads may operatively couple (e.g., connect) one or more electrochromic devices to a voltage source, switches and the like, and may include a frame that supports the IGU or laminate.
• a window assembly may include a window controller, and/or components of a window controller (e.g., a dock).
• Fig. 19 shows an example implementation of an IGU 1900 that includes a first pane 1904 having a first surface S1 and a second surface S2.
• the first surface S1 of the first pane 1904 faces an exterior environment, such as an outdoors or outside environment.
• the IGU 1900 also includes a second pane 1906 having a first surface S3 and a second surface S4.
• the second surface S4 of the second pane 1906 faces an interior environment, such as an inside environment of a home, building or vehicle, or a room or compartment within a home, building or vehicle.
• each of the first and/or the second panes 1904 and 1906 are transparent and/or translucent to light, e.g., in the visible spectrum.
• the (e.g., each of the) first and/or second panes 1904 and 1906 can be formed of a glass material (e.g., an architectural glass or other shatter-resistant glass material such as, for example, a silicon oxide (SOx)-based glass material).
  • the (e.g., each of the) first and/or second panes 1904 and 1906 may be a soda-lime glass substrate or float glass substrate.
• Such glass substrates can be composed of, for example, approximately 75% silica (SiO₂) as well as Na₂O, CaO, and several minor additives.
  • the (e.g., each of the) first and/or the second panes 1904 and 1906 can be formed of any material having suitable optical, electrical, thermal, and mechanical properties.
• other suitable substrates that can be used as one or both of the first and the second panes 1904 and 1906 can include other glass materials as well as plastic, semi-plastic, and thermoplastic materials (for example, poly(methyl methacrylate), polystyrene, polycarbonate, allyl
• the (e.g., each of the) first and/or the second panes 1904 and 1906 can be strengthened, for example, by tempering, heating, or chemically strengthening.
  • first and second panes 1904 and 1906 are spaced apart from one another by a spacer 1918, which is typically a frame structure, to form an interior volume.
• the interior volume is filled with argon (Ar) or another gas, such as another noble gas (for example, krypton (Kr) or xenon (Xe)), another (non-noble) gas, or a mixture of gases (for example, air).
  • the interior volume 1908 can be evacuated of air or other gas.
  • Spacer 1918 generally determines the height “C” of the interior volume 1908 (e.g., the spacing between the first and the second panes 1904 and 1906).
• the thickness (and/or relative thickness) of the ECD, sealant 1920/1922, and bus bars 1926/1928 may not be to scale. These components are generally thin and are exaggerated here, e.g., for ease of illustration only.
  • the spacing “C” between the first and the second panes 1904 and 1906 is in the range of approximately 6 mm to approximately 30 mm.
  • the width “D” of spacer 1918 can be in the range of approximately 5 mm to approximately 15 mm (although other widths are possible and may be desirable).
  • Spacer 1918 may be a frame structure formed around all sides of the IGU 1900 (for example, top, bottom, left and right sides of the IGU 1900).
  • spacer 1918 can be formed of a foam or plastic material.
• spacer 1918 can be formed of metal or other conductive material, for example, a metal tube or channel structure having at least 3 sides: two sides for sealing to each of the substrates and one side to support and separate the lites and serve as a surface on which to apply a sealant 1924.
• a first primary seal 1920 adheres and hermetically seals spacer 1918 and the second surface S2 of the first pane 1904.
  • a second primary seal 1922 adheres and hermetically seals spacer 1918 and the first surface S3 of the second pane 1906.
• each of the primary seals 1920 and 1922 can be formed of an adhesive sealant such as, for example, polyisobutylene (PIB). In some implementations, IGU 1900 further includes secondary seal 1924 that hermetically seals a border around the entire IGU 1900 outside of spacer 1918. To this end, spacer 1918 can be inset from the edges of the first and the second panes 1904 and 1906 by a distance that can be in the range of approximately four (4) millimeters (mm) to approximately eight (8) mm (although other distances are possible and may be desirable). In some implementations, secondary seal 1924 can be formed of an adhesive sealant such as, for example, a polymeric material that resists water and that adds structural support to the assembly, such as silicone, polyurethane, and similar structural sealants that form a water-tight seal.
  • the ECD coating on surface S2 of substrate 1904 extends about its entire perimeter to and under spacer 1918.
• This configuration is functionally desirable as it protects the edge of the ECD within the primary sealant 1920, and aesthetically desirable because within the inner perimeter of spacer 1918 there is a monolithic ECD without any bus bars or scribe lines.
• an ECD 1910 is formed on the second surface S2 of the first pane 1904.
  • the ECD 1910 includes an electrochromic (“EC”) stack 1912, which itself may include one or more layers.
  • the EC stack 1912 can include an electrochromic layer, an ion-conducting layer, and a counter electrode layer.
  • the electrochromic layer may be formed of one or more inorganic solid materials.
• the electrochromic layer can include or be formed of one or more of a number of electrochromic materials, including electrochemically-cathodic or electrochemically-anodic materials.
  • EC stack 1912 may be between first and second conducting (or “conductive”) layers.
  • the ECD 1910 can include a first transparent conductive oxide (TCO) layer 1914 adjacent a first surface of the EC stack 1912 and a second TCO layer 1916 adjacent a second surface of the EC stack 1912.
• Examples of similar EC devices and smart windows can be found in U.S. Patent No. 8,764,950, titled ELECTROCHROMIC DEVICES, by Wang et al., issued July 1, 2014, and U.S. Patent No. 9,261,751, titled ELECTROCHROMIC DEVICES, by Pradhan et al., issued February 16, 2016, each of which is incorporated herein by reference in its entirety.
  • the EC stack 1912 also can include one or more additional layers such as one or more passive layers.
• passive layers can be used to improve certain optical properties, to provide moisture resistance, or to provide scratch resistance. These or other passive layers also can serve to hermetically seal the EC stack 1912.
• the media display may be disposed upon second pane 1906 (e.g., with video images projected away from second pane 1906). In other embodiments, the media display is attached to (e.g., fastened or adhered to) the IGU.
• Fig. 19 shows an example of image sensor 1908 mounted in the interior volume of the IGU between first and second panes 1904 and 1906. Such a location for sensor 1908 is unobtrusive and well protected from any harsh environmental conditions (e.g., humidity and/or debris such as dust).
• Sensor 1908 may be fixed, or be operatively coupled to an actuator (e.g., a servo-mechanism).
  • the actuator may be provided within the interior volume, e.g., for actively controlling an image capturing location.
  • a network infrastructure is provided in the enclosure (e.g., a facility such as a building).
  • the network may comprise one or more levels of encryption.
  • the network may be communicatively coupled to the cloud and/or to one or more servers external to the facility.
• the network may support at least fourth-generation (4G) or fifth-generation (5G) wireless communication.
  • the network may support cellular signals external and/or internal to the facility.
  • the downlink communication network speeds may have a peak data rate of at least about 5 Gigabits per second (Gb/s), 10 Gb/s, or 20 Gb/s.
• the uplink communication network speeds may have a peak data rate of at least about 2 Gb/s, 5 Gb/s, or 10 Gb/s.
• the network infrastructure may include at least one cable, switch, (e.g., physical) antenna, transceiver, sensor, transmitter, receiver, radio, processor, and/or controller (that may comprise a processor).
  • the network infrastructure may comprise wiring (e.g., comprising an optical fiber, twisted cable, or coaxial cable).
• One or more devices (e.g., sensors and/or emitters) may be communicatively coupled to the network.
  • the network may comprise a power and/or communication network.
  • the device can be self-discovered on the network, e.g., once it couples (e.g., on its attempt to couple) to the network.
• the network structure may comprise a peer-to-peer network structure or a client-server network structure.
  • the network may or may not have a central coordination entity (e.g., server(s) or another stable host).
• Examples of network, facility, control system, and devices can be found in International Patent Application Serial No. PCT/US21/17948, filed February 12, 2021, titled “DATA AND POWER NETWORK OF A FACILITY,” which is incorporated herein by reference in its entirety.
  • a building management system is a computer-based control system.
  • the BMS can be installed in a facility to monitor and otherwise control (e.g., regulate, manipulate, restrict, direct, monitor, adjust, modulate, vary, alter, restrain, check, guide, or manage) the facility.
  • the BMS may control one or more devices communicatively coupled to the network.
  • the one or more devices may include mechanical and/or electrical equipment such as ventilation, lighting, power systems, elevators, fire systems, and/or security systems.
• Controllers (e.g., nodes and/or processors) may be included in, or operatively coupled to, the BMS.
  • a BMS may include hardware.
  • the hardware may include interconnections by communication channels to one or more processors (e.g., and associated software), e.g., for maintaining one or more conditions in the facility.
• the one or more conditions in the facility may be according to preference(s) set by a user (e.g., an occupant, a facility owner, and/or a facility manager).
  • a BMS may be implemented using a local area network, such as Ethernet.
  • the software can utilize, e.g., internet protocols and/or open standards.
  • One example is software from Tridium, Inc. (of Richmond, Va.).
  • One communication protocol that can be used with a BMS is BACnet (building automation and control networks).
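The disclosure does not tie the BMS to a particular wire format; as a hedged sketch, the following serializes a generic device command as JSON (the field names and device ID are hypothetical), where a production system might instead use a standard stack such as BACnet:

```python
import json

def make_bms_command(device_id: str, prop: str, value) -> bytes:
    """Serialize a command for a networked BMS device.

    The JSON message here is a placeholder for whichever wire format
    the installation actually uses (e.g., a BACnet write request).
    """
    return json.dumps({
        "device": device_id,   # addressable node, e.g., an IP-addressable controller
        "property": prop,      # e.g., "tint_level" or "setpoint_c"
        "value": value,
    }).encode("utf-8")

msg = make_bms_command("window-3F-017", "tint_level", 3)  # illustrative IDs
```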
  • a node can be any addressable circuitry.
  • a node can be a circuitry that has an Internet Protocol (IP) address.
  • IP Internet Protocol
  • a BMS may be implemented in a facility, e.g., a multi-story building.
  • the BMS may function (e.g., also) to control one or more characteristics of an environment of the facility.
  • the one or more characteristics may comprise: temperature, carbon dioxide levels, gas flow, various volatile organic compounds (VOCs), and/or humidity in a building.
  • VOCs volatile organic compounds
  • a core function of a BMS may be to maintain a comfortable environment for occupants of the environment, e.g., while minimizing heating and cooling costs and/or demand.
  • a BMS can be used to control one or more of the various systems.
• a BMS may be used to optimize the synergy between various systems. For example, the BMS may be used to conserve energy and lower building operation costs.
  • a plurality of devices may be operatively (e.g., communicatively) coupled to the control system.
  • the plurality of devices may be disposed in a facility (e.g., including a building and/or room).
  • the control system may comprise the hierarchy of controllers.
  • the devices may comprise an emitter, a sensor, or a window (e.g., IGU).
  • the device may be any device as disclosed herein.
• At least two of the plurality of devices may be of the same type. For example, two or more IGUs may be coupled to the control system.
  • At least two of the plurality of devices may be of different types.
  • a sensor and an emitter may be coupled to the control system.
  • the plurality of devices may comprise at least 20, 50, 100, 500, 1000, 2500, 5000, 7500, 10000, 50000, 100000, or 500000 devices.
  • the plurality of devices may be of any number between the aforementioned numbers (e.g., from 20 devices to 500000 devices, from 20 devices to 50 devices, from 50 devices to 500 devices, from 500 devices to 2500 devices, from 1000 devices to 5000 devices, from 5000 devices to 10000 devices, from 10000 devices to 100000 devices, or from 100000 devices to 500000 devices).
  • the number of windows in a floor may be at least 5, 10, 15, 20, 25, 30, 40, or 50.
  • the number of windows in a floor can be any number between the aforementioned numbers (e.g., from 5 to 50, from 5 to 25, or from 25 to 50).
• the devices may be in a facility comprising a multi-story building. At least a portion of the floors of the multi-story building may have devices controlled by the control system (e.g., at least a portion of the floors of the multi-story building may be controlled by the control system).
  • the facility comprises a multi-story building.
  • the multi-story building may have at least 2, 8, 10, 25, 50, 80, 100, 120, 140, or 180 floors, e.g., that are controlled by the control system and/or comprise the network infrastructure.
  • the number of floors controlled by the control system and/or comprising the network infrastructure may be any number between the aforementioned numbers (e.g., from 2 to 50, from 25 to 100, or from 80 to 160).
• the floor may be of an area of at least about 150 m², 250 m², 500 m², 1000 m², 1500 m², or 2000 square meters (m²).
• the floor may have an area between any of the aforementioned floor area values (e.g., from about 150 m² to about 2000 m², from about 150 m² to about 500 m², from about 250 m² to about 1000 m², or from about 1000 m² to about 2000 m²).
  • the building may comprise an area of at least about 1000 square feet (sqft), 2000 sqft, 5000 sqft, 10000 sqft, 100000 sqft, 150000 sqft, 200000 sqft, or 500000 sqft.
  • the building may comprise an area between any of the above mentioned areas (e.g., from about 1000 sqft to about 5000 sqft, from about 5000 sqft to about 500000 sqft, or from about 1000 sqft to about 500000 sqft).
• the building may comprise an area of at least about 100 m², 200 m², 500 m², 1000 m², 5000 m², 10000 m², 25000 m², or 50000 m².
• the building may comprise an area between any of the above-mentioned areas (e.g., from about 100 m² to about 1000 m², from about 500 m² to about 25000 m², or from about 100 m² to about 50000 m²).
  • the facility may comprise a commercial or a residential building.
  • the commercial building may include tenant(s) and/or owner(s).
  • the residential facility may comprise a multi or a single family building.
  • the residential facility may comprise an apartment complex.
  • the residential facility may comprise a single family home.
  • the residential facility may comprise multifamily homes (e.g., apartments).
  • the residential facility may comprise townhouses.
  • the facility may comprise residential and commercial portions.
• the facility may comprise at least about 1, 2, 5, 10, 50, 100, 150, 200, 250, 300, 350, 400, 420, 450, 500, or 550 windows (e.g., tintable windows).
• the windows may be divided into zones, e.g., based at least in part on the location, façade, floor, ownership, utilization of the enclosure (e.g., room) in which they are disposed, any other assignment metric, random assignment, or any combination thereof.
  • Allocation of windows to the zone may be static or dynamic (e.g., based on a heuristic). There may be at least about 2, 5, 10, 12, 15, 30, 40, or 48 windows per zone.
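As an illustration of a static zone assignment, the small sketch below groups windows by a (façade, floor) key; the grouping key, field names, and data are hypothetical examples of the assignment metrics listed above:

```python
from collections import defaultdict

def assign_zones(windows: dict) -> dict:
    """Group window IDs into zones by (facade, floor).

    `windows` maps a window ID to its metadata; the key used here is one
    example assignment metric and could be swapped for ownership, room
    utilization, or a dynamic heuristic.
    """
    zones = defaultdict(list)
    for win_id, meta in windows.items():
        zones[(meta["facade"], meta["floor"])].append(win_id)
    return dict(zones)

zones = assign_zones({
    "w1": {"facade": "south", "floor": 3},
    "w2": {"facade": "south", "floor": 3},
    "w3": {"facade": "north", "floor": 3},
})
# {('south', 3): ['w1', 'w2'], ('north', 3): ['w3']}
```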
  • a window controller is integrated with a BMS.
• the window controller can be configured to control one or more tintable windows (e.g., electrochromic windows).
• the one or more electrochromic windows include at least one all solid state and inorganic electrochromic device, but may include more than one electrochromic device, e.g., where each lite or pane of an IGU is tintable.
• the one or more electrochromic windows include (e.g., only) all solid state and inorganic electrochromic devices. In one embodiment, the electrochromic windows are multistate electrochromic windows. Examples of tintable windows can be found in U.S. Patent Application Serial No. 12/851,514, filed August 5, 2010, titled "MULTIPANE ELECTROCHROMIC WINDOWS," which is incorporated herein by reference in its entirety. In some embodiments, one or more devices such as sensors, emitters, and/or actuators are operatively coupled to at least one controller and/or processor.
  • a controller may comprise a processing unit (e.g., CPU or GPU).
  • a controller may receive an input (e.g., from at least one device or projected media).
  • the controller may comprise circuitry, electrical wiring, optical wiring, socket, and/or outlet.
• a controller may receive an input and/or deliver an output.
• a controller may comprise multiple (e.g., sub-) controllers.
• An operation (e.g., as disclosed herein) may be performed by a single controller or by a plurality of controllers. At least two operations may each be performed by a different controller. At least two operations may be performed by the same controller.
  • a device and/or media may be controlled by a single controller or by a plurality of controllers.
  • At least two devices and/or media may be controlled by a different controller. At least two devices and/or media may be controlled by the same controller.
  • the controller may be a part of a control system.
• the control system may comprise a master controller, floor (e.g., comprising network) controller, or a local controller.
  • the local controller may be a target controller.
  • the local controller may be a window controller (e.g., controlling an optically switchable window), enclosure controller, or component controller.
• the controller may be a part of a hierarchical control system. The hierarchical control system may comprise a main controller that directs one or more controllers, e.g., floor controllers, local controllers (e.g., window controllers), enclosure controllers, and/or component controllers.
  • the target may comprise a device or a media.
• the device may comprise an electrochromic window, a sensor, an emitter, an antenna, a receiver, a transceiver, or an actuator.
  • the network infrastructure is operatively coupled to one or more controllers.
  • a controller may control one or more devices (e.g., be directly coupled to the devices).
  • a controller may be disposed proximal to the one or more devices it is controlling.
• a controller may control an optically switchable device (e.g., IGU), an antenna, a sensor, and/or an output device (e.g., a light source, sound source, smell source, gas source, HVAC outlet, or heater).
  • a floor controller may direct one or more window controllers, one or more enclosure controllers, one or more component controllers, or any combination thereof.
• the floor controller may comprise a network controller.
  • the floor (e.g., comprising network) controller may control a plurality of local (e.g., comprising window) controllers.
  • a plurality of local controllers may be disposed in a portion of a facility (e.g., in a portion of a building).
  • the portion of the facility may be a floor of a facility.
  • a floor controller may be assigned to a floor.
  • a floor may comprise a plurality of floor controllers, e.g., depending on the floor size and/or the number of local controllers coupled to the floor controller.
  • a floor controller may be assigned to a portion of a floor.
  • a floor controller may be assigned to a portion of the local controllers disposed in the facility.
• a floor controller may be assigned to a portion of the floors of a facility.
  • a master controller may be coupled to one or more floor controllers.
  • the floor controller may be disposed in the facility.
  • the master controller may be disposed in the facility, or external to the facility.
  • the master controller may be disposed in the cloud.
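A minimal sketch of such a master/floor/local hierarchy follows; the Python class and method names are hypothetical, since the disclosure does not prescribe an implementation:

```python
class LocalController:
    """Controls a single target, e.g., a window (IGU), sensor, or emitter."""
    def __init__(self, target: str):
        self.target = target

    def command(self, instruction: str) -> None:
        print(f"{self.target}: {instruction}")

class FloorController:
    """Directs the local controllers on (a portion of) one floor."""
    def __init__(self, local_controllers: list):
        self.local_controllers = local_controllers

    def broadcast(self, instruction: str) -> None:
        for lc in self.local_controllers:
            lc.command(instruction)

class MasterController:
    """Top of the hierarchy; may run in the facility or in the cloud."""
    def __init__(self, floor_controllers: list):
        self.floor_controllers = floor_controllers

    def broadcast(self, instruction: str) -> None:
        for fc in self.floor_controllers:
            fc.broadcast(instruction)

master = MasterController([FloorController([LocalController("window-2F-001")])])
master.broadcast("tint_level=2")
```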
  • a controller may be a part of, or be operatively coupled to, a building management system.
  • a controller may receive one or more inputs.
  • a controller may generate one or more outputs.
• the controller may be a single-input single-output controller (SISO) or a multiple-input multiple-output controller (MIMO).
  • a controller may interpret an input signal received.
  • a controller may acquire data from the one or more components (e.g., sensors). Acquire may comprise receive or extract. The data may comprise measurement, estimation, determination, generation, or any combination thereof.
• a controller may comprise feedback control. A controller may comprise feed-forward control. Control may comprise on-off control, proportional control, proportional-integral (PI) control, or proportional-integral-derivative (PID) control. Control may comprise open loop control or closed loop control. A controller may comprise closed loop control. A controller may comprise open loop control.
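For concreteness, here is a minimal sketch of the PID mode named above; the gains, setpoint, and temperature-regulation use are illustrative assumptions, not taken from the disclosure:

```python
class PID:
    """Textbook discrete PID loop (one of the control modes listed above)."""
    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement: float, dt: float) -> float:
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g., regulate room temperature toward 21 °C from periodic sensor readings
pid = PID(kp=1.0, ki=0.1, kd=0.05, setpoint=21.0)
heater_output = pid.update(measurement=19.4, dt=60.0)
```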
• a controller may comprise a user interface. A user interface may comprise (or be operatively coupled to) a keyboard, keypad, mouse, touch screen, microphone, speech recognition package, camera, imaging system, or any combination thereof.
  • Outputs may include a display (e.g., screen), speaker, or printer.
  • a local controller controls one or more devices and/or media (e.g., media projection).
  • a local controller can control one or more IGUs, one or more sensors, one or more output devices (e.g., one or more emitters), one or more media, or any combination thereof.
  • a BMS includes a multipurpose controller.
  • a BMS can provide, for example, enhanced: 1) environmental control, 2) energy savings, 3) security, 4) flexibility in control options, 5) improved reliability and usable life of other systems (e.g., due to decreased reliance thereon and/or reduced maintenance thereof), 6) information availability and/or diagnostics, 7) higher productivity from personnel in the building (e.g., staff), and various combinations thereof.
• These enhancements may derive from automatically controlling any of the devices. In some embodiments, a BMS may not be present. In some embodiments, a BMS may be present without communicating with a master network controller.
  • a BMS may communicate with a portion of the levels in the hierarchy of controllers.
• the BMS may communicate (e.g., at a high level) with a master network controller.
  • a BMS may not communicate with a portion of the levels in the hierarchy of controllers of the control system.
• the BMS may not communicate with the local controller and/or intermediate controller. In certain embodiments, maintenance on the BMS would not interrupt control of the devices communicatively coupled to the control system. In some embodiments, the BMS comprises at least one controller that may or may not be part of the hierarchical control system.
• Fig. 20 shows an example of a control system architecture 2000 disposed at least partly in an enclosure (e.g., building) 2001.
• Control system architecture 2000 comprises a master controller 2008 that controls floor controllers (e.g., network controllers) 2006, which in turn control local controllers 2004.
  • a master controller 2008 is operatively coupled (e.g., wirelessly and/or wired) to a building management system (BMS) 2024 and to a database 2020.
• Arrows in Fig. 20 represent communication pathways.
• a controller may be operatively coupled (e.g., directly/indirectly and/or wired and/or wirelessly) to an external source 2010.
• Master controller 2008 may control floor controllers that include network controllers 2006, which may in turn control local controllers such as window controllers 2004.
• Floor controllers 2006 may also include network controllers (NC).
• the local controllers (e.g., 2004) may control one or more targets such as IGUs, one or more sensors, one or more output devices (e.g., one or more emitters), media, or any combination thereof.
  • the external source may comprise a network.
• the external source may comprise one or more sensors or output devices.
  • the external source may comprise a cloud-based application and/or database.
  • the communication may be wired and/or wireless.
  • the external source may be disposed external to the facility.
  • the external source may comprise one or more sensors and/or antennas disposed, e.g., on a wall or on a ceiling of the facility.
• the communication may be monodirectional or bidirectional. In the example shown in Fig. 20, all communication arrows are meant to be bidirectional.
  • a controller or other network device includes a sensor or sensor ensemble.
  • a plurality of sensors or a sensor ensemble may be organized into a sensor module.
• a sensor ensemble may comprise a circuit board, such as a printed circuit board, e.g., in which a number of sensors are adhered or affixed to the circuit board. Sensor(s) can be removed from a sensor module. For example, a sensor may be plugged into, and/or unplugged from, the circuit board. Sensor(s) may be individually activated and/or deactivated (e.g., using a switch).
  • the circuit board may comprise a polymer.
  • the circuit board may be transparent or non-transparent.
  • the circuit board may comprise metal (e.g., elemental metal and/or metal alloy).
  • the circuit board may comprise a conductor.
  • the circuit board may comprise an insulator.
  • the circuit board may comprise any geometric shape (e.g., rectangle or ellipse).
• the circuit board may be configured (e.g., may be of a shape) to allow the ensemble to be disposed in a frame portion such as a mullion (e.g., of a window).
  • the circuit board may be configured (e.g., may be of a shape) to allow the ensemble to be disposed in a frame (e.g., door frame and/or window frame).
  • the frame may comprise one or more holes, e.g., to allow the sensor(s) to obtain (e.g., accurate) readings.
  • the circuit board may be enclosed in a wrapping.
  • the wrapping may comprise flexible or rigid portions.
  • the wrapping may be flexible.
• the wrapping may be rigid (e.g., be composed of a hardened polymer, of glass, or of a metal (e.g., comprising elemental metal or metal alloy)).
  • the wrapping may comprise a composite material.
  • the wrapping may comprise carbon fibers, glass fibers, and/or polymeric fibers.
  • the wrapping may have one or more holes, e.g., to allow the sensor(s) to obtain (e.g., accurate) readings.
  • the circuit board may include an electrical connectivity port (e.g., socket).
  • the circuit board may be connected to a power source (e.g., to electricity).
  • the power source may comprise renewable and/or non-renewable power source.
• Fig. 21 shows an example of diagram 2100 having an example of an ensemble of sensors organized into a sensor module. Sensors 2110A, 2110B, 2110C, and 2110D are shown as included in sensor ensemble 2105.
• An ensemble of sensors organized into a sensor module may include at least 1, 2, 4, 5, 8, 10, 20, 50, or 500 sensors.
  • the sensor module may include a number of sensors in a range between any of the aforementioned values (e.g., from about 1 to about 1000, from about 1 to about 500, or from about 500 to about 1000).
• Sensors of a sensor module may comprise sensors configured and/or designed for sensing a parameter comprising: temperature, humidity, carbon dioxide, particulate matter (e.g., between 2.5 µm and 10 µm), total volatile organic compounds (e.g., via a change in a voltage potential brought about by surface adsorption of volatile organic compounds), ambient light, audio noise level, pressure (e.g., gas and/or liquid), acceleration, time, radar, lidar, radio signals (e.g., ultra-wideband radio signals), passive infrared, glass breakage, or movement detectors.
• the sensor ensemble (e.g., 2105) may comprise non-sensor devices, such as buzzers and light emitting diodes. Examples of sensor ensembles and their uses can be found in U.S. Patent Application Serial No. 16/447,169, filed June 20, 2019, titled “SENSING AND COMMUNICATIONS UNIT FOR OPTICALLY SWITCHABLE WINDOW SYSTEMS,” which is incorporated herein by reference in its entirety.
  • an increase in the number and/or types of sensors may be used to increase a probability that one or more measured property is accurate and/or that a particular event measured by one or more sensor has occurred.
  • sensors of sensor ensemble may cooperate with one another.
  • a radar sensor of sensor ensemble may determine presence of a number of individuals in an enclosure.
• a processor may determine that detection of presence of a number of individuals in an enclosure is positively correlated with an increase in carbon dioxide concentration.
• the processor (e.g., using processor-accessible memory) may determine that an increase in detected infrared energy is positively correlated with an increase in temperature as detected by a temperature sensor.
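A toy sketch of such a correlation check between two co-located sensor streams follows; the readings and threshold are illustrative only, and the snippet requires Python 3.10+ for statistics.correlation:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical time-aligned readings from two sensors of one ensemble.
occupancy = [0, 2, 4, 6, 6, 3, 1]              # headcount, e.g., from radar
co2_ppm   = [410, 450, 520, 610, 640, 540, 460]  # from a CO2 sensor

r = correlation(occupancy, co2_ppm)  # Pearson correlation coefficient
if r > 0.8:  # threshold chosen for illustration
    print(f"occupancy and CO2 strongly correlated (r={r:.2f})")
```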
• a sensor ensemble may comprise a network interface (e.g., 650).
  • the network interface may additionally communicate with a controller.
  • a sensor ensemble may utilize a remote processor (e.g., 2154) utilizing a wireless and/or wired communications link.
• a sensor ensemble may utilize at least one processor (e.g., processor 2152), which may represent a cloud-based processor coupled to a sensor ensemble via the cloud.
• processors may be located in the same building, in a different building, in a building owned by the same or different entity, a facility owned by the manufacturer of the window/controller/sensor ensemble, or at any other location.
  • sensor ensemble 2105 is not required to comprise a separate processor and network interface. These entities may be separate entities and may be operatively coupled to ensemble 2105.
• the dotted lines in the example shown in Fig. 21 designate optional features. In some embodiments, onboard processing and/or memory of one or more ensembles of sensors may be used to support other functions (e.g., via allocation of ensemble(s) memory and/or processing power to the network infrastructure of a building).
  • sensor data is exchanged among various network devices and controllers.
  • the sensor data may also be accessible to remote users (e.g., inside or outside the same building) for retrieval using personal electronic devices, for example.
• Applications executing on remote devices that access sensor data may also provide commands for controllable functions, such as tint commands for a window controller.
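As a hedged sketch of a remote application issuing such a command, assuming a hypothetical HTTP endpoint and payload schema (the actual API is defined by whichever control system is deployed):

```python
import json
import urllib.request

def send_tint_command(base_url: str, window_id: str, level: int) -> None:
    """POST a tint command to a window controller's network endpoint.

    base_url, the /tint path, and the payload fields are assumptions for
    illustration, not an API defined by the disclosure.
    """
    payload = json.dumps({"window": window_id, "tint_level": level}).encode()
    req = urllib.request.Request(
        f"{base_url}/tint", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        print(resp.status)  # e.g., 200 on success
```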
• An example window controller(s) is described in International Patent Application Serial No. PCT/US16/58872, filed October 26, 2016, titled “CONTROLLERS FOR OPTICALLY-SWITCHABLE DEVICES,” and in U.S. Patent Application Serial No. 15/334,832, filed October 26, 2016, titled “CONTROLLERS FOR OPTICALLY-SWITCHABLE DEVICES,” each of which is incorporated herein by reference in its entirety.
  • the methods, systems and/or the apparatus described herein may comprise a control system.
  • the control system can be in communication with any of the apparatuses (e.g., sensors) described herein.
  • the sensors may be of the same type or of different types, e.g., as described herein.
  • the control system may be in communication with the first sensor and/or with the second sensor.
• a plurality of devices (e.g., sensors and/or emitters) may be organized into an ensemble.
  • the ensemble may comprise at least two devices of the same type.
  • the ensemble may comprise at least two devices of a different type.
  • the devices in the ensemble may be operatively coupled to the same electrical board.
  • the electrical board may comprise circuitry.
  • the electrical board may comprise, or be operatively coupled to a controller (e.g., a local controller).
  • the control system may control the one or more devices (e.g., sensors).
• the control system may control one or more components of a building management system (e.g., lighting, security, and/or air conditioning systems).
  • the controller may regulate at least one (e.g., environmental) characteristic of the enclosure.
  • the control system may regulate the enclosure environment using any component of the building management system.
  • the control system may regulate the energy supplied by a heating element and/or by a cooling element.
  • the control system may regulate velocity of an air flowing through a vent to and/or from the enclosure.
  • the control system may comprise a processor.
  • the processor may be a processing unit.
  • the controller may comprise a processing unit.
  • the processing unit may be central.
  • the processing unit may comprise a central processing unit (abbreviated herein as “CPU”).
  • the processing unit may be a graphic processing unit (abbreviated herein as “GPU”).
  • the controlier(s) or control mechanisms may be programmed to implement one or more methods of the disclosure.
  • the processor may be programmed to implement methods of the disclosure.
  • the controller may control at least one component of the forming systems and/or apparatuses disclosed herein. Examples of a digital architectural element can be found in International Patent Application Serial No. PCT/US20/70123 that is incorporated herein by reference in its entirety.
• Fig. 22 shows a schematic example of a computer system 2200 that is programmed or otherwise configured to perform one or more operations of any of the methods provided herein.
• the computer system can control (e.g., direct, monitor, and/or regulate) various features of the methods, apparatuses, and systems of the present disclosure, such as, for example, heating, cooling, lighting, and/or venting of an enclosure, or any combination thereof.
  • the computer system can be part of, or be in communication with, any sensor or sensor ensemble disclosed herein.
  • the computer may be coupled to one or more mechanisms disclosed herein, and/or any parts thereof.
• the computer may be coupled to one or more sensors, valves, switches, lights, windows (e.g., IGUs), motors, pumps, optical
  • the computer system can include a processing unit (e.g., 2206) (also “processor,” “computer” and “computer processor” used herein).
  • the computer system may include memory or memory location (e.g., 2202) (e.g., random-access memory, read-only memory, flash memory), electronic storage unit (e.g., 2204) (e.g., hard disk), communication interface (e.g., 2203) (e.g., network adapter) for communicating with one or more other systems, and peripheral devices (e.g., 2205), such as cache, other memory, data storage and/or electronic display adapters.
  • the memory 2202, storage unit 2204, interface 2203, and peripheral devices 2205 are in communication with the processing unit 2206 through a communication bus (solid lines), such as a motherboard.
  • the storage unit can be a data storage unit (or data repository) for storing data.
  • the computer system can be operatively coupled to a computer network (“network”) (e.g., 2201) with the aid of the communication interface.
• the network can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet.
  • the network is a telecommunication and/or data network.
  • the network can include one or more computer servers, which can enable distributed computing, such as cloud computing.
• the network, in some cases with the aid of the computer system, can implement a peer-to-peer network, which may enable devices coupled to the computer system to behave as a client or a server.
  • the processing unit can execute a sequence of machine-readable instructions, which can be embodied in a program or software.
  • the instructions may be stored in a memory location, such as the memory 2202.
  • the instructions can be directed to the processing unit, which can subsequently program or otherwise configure the processing unit to implement methods of the present disclosure. Examples of operations performed by the processing unit can include fetch, decode, execute, and write back.
  • the processing unit may interpret and/or execute instructions.
  • the processor may include a microprocessor, a data processor, a central processing unit (CPU), a graphical processing unit (GPU), a system-on-chip (SOC), a co-processor, a network processor, an application specific integrated circuit (ASIC), an application specific instruction-set processor (ASIPs), a controller, a programmable logic device (PLD), a chipset, a field programmable gate array (FPGA), or any combination thereof.
  • the processing unit can be part of a circuit, such as an integrated circuit.
  • One or more other components of the system 2200 can be included in the circuit.
  • the storage unit can store files, such as drivers, libraries and saved programs.
• the storage unit can store user data (e.g., user preferences and user programs). In some cases, the computer system can include one or more additional data storage units that are external to the computer system, such as located on a remote server that is in communication with the computer system through an intranet or the Internet.
  • the computer system can communicate with one or more remote computer systems through a network. For instance, the computer system can communicate with a remote computer system of a user (e.g., operator).
• Examples of remote computer systems include personal computers (e.g., portable PC), slate or tablet PCs (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, smart phones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants.
  • Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system, such as, for example, on the memory 2202 or electronic storage unit 2204.
  • the machine executable or machine-readable code can be provided in the form of software.
• the processor 2206 can execute the code.
• the code can be retrieved from the storage unit and stored on the memory for ready access by the processor. In some situations, the electronic storage unit can be precluded, and machine-executable instructions are stored on memory.
• the code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or can be compiled during runtime.
• the code can be supplied in a programming language that can be selected to enable the code to execute in a pre-compiled or as-compiled fashion. In some embodiments, the processor comprises a code.
  • the code can be program instructions.
• the program instructions may cause the at least one processor (e.g., computer) to direct a feed-forward and/or feedback control loop. In some embodiments, the program instructions cause the at least one processor to direct a closed loop and/or open loop control scheme.
  • the control may be based at least in part on one or more sensor readings (e.g., sensor data).
  • One controller may direct a plurality of operations.
  • At least two operations may be directed by different controllers.
  • a different controller may direct at least two of operations (a), (b) and (c).
  • different controllers may direct at least two of operations (a), (b) and (c).
• a non-transitory computer-readable medium may cause each of at least two of operations (a), (b), and (c) to be directed by a different computer.
• different non-transitory computer-readable media may each cause a different computer to direct at least two of operations (a), (b), and (c).
  • the controller and/or computer readable media may direct any of the apparatuses or components thereof disclosed herein.
  • the controller and/or computer readable media may direct any operations of the methods disclosed herein.
• the controller may be operatively (communicatively) coupled to control logic (e.g., code embedded in software) in which its operation(s) are embodied.
• one or more of the functions described herein may be implemented in hardware, digital electronic circuitry, analog electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents thereof, or in any combination thereof.
  • Certain implementations of the subject matter described in this document also can be implemented as one or more controllers, computer programs, or physical structures, for example, one or more modules of computer program instructions, encoded on a computer storage media for execution by, or to control the operation of window controllers, network controllers, and/or antenna controllers.
• Any disclosed implementations presented as or for electrochromic windows can be more generally implemented as or for switchable optical devices (including windows, mirrors, etc.).
  • the network infrastructure is configured to operatively couple to one or more (e.g., an array of) chargers.
  • the charger can be disposed in the interior of the framing or framing cap portion.
• the chargers may be wireless chargers in the sense that they do not require wiring into the device they are charging (e.g., transitory circuitry such as a mobile phone, pad, laptop, a tag (e.g., an ID tag), or any other charge-requiring device such as one comprising a transitory processor).
  • the charging devices may electrically charge the transitory circuitry.
  • the charging device may be disposed in a transom (also known as “horizontal mullion”).
• the charging device may be disposed in any real asset that is operatively (e.g., electronically) coupled to the network (e.g., local network of a facility), and that is configured to facilitate wireless charging, e.g., on at least one of its surfaces.
• the charging device may comprise electromagnetic induction charging for transitory circuitry (e.g., a mobile device).
• the transitory circuitry (e.g., mobile device) to be wirelessly charged is configured for (e.g., enables) wireless charging.
  • the wireless charging may or may not require contact of the charging device with the device to be charged.
  • the wireless charging device does not require connection of electrical wiring between the charging device and the device to be charged (e.g., the mobile circuitry).
  • the wireless charging may facilitate interaction of facility occupants with fixtures and/or real assets (e.g., furniture) of the facility.
• the charging stations may be installed as part of the network, e.g., during construction of the facility. If the local network of the facility is the initial network installed in the facility, the facility may be opened to occupants with such wireless charging devices on its first day of opening. Usage of charging stations may reduce the number of required outlets in the facility and/or free outlets for usage other than mobile device charging, thus potentially increasing the aesthetics of the facility fixtures, allowing more design flexibility of the facility interior, and increasing the usage of fixtures and/or real assets of the facility.
  • the infrastructure installed in the building may include the wireless charging stations (e.g., as part of the framing system or not).
  • developers can offer a state of the art building with mobile device wireless charging integrated into the building from day one (1) in convenient, non-obstructive locations and/or
  • occupants will have more and/or easier access locations to charge their mobile devices (e.g., without wires getting in their way and/or taking up much needed outlets).
  • Developers may wish to create connected spaces that are built to the requests of occupants to increasingly utilize mobile devices, and/or allow seamless and easy charging.
  • Wireless charging may require a user to place the mobile device on the wireless charging stations without more.
• a real asset and/or a fixture may comprise a material that facilitates wireless charging therethrough (e.g., a material having no or reduced blockage of the electromagnetic field).
  • the real asset and/or fixture may have a portion having a material that facilitates wireless charging.
  • a transom may be made of metal (e.g., aluminum).
  • a portion may be, e.g., a break portion.
  • a material that facilitates wireless charging may be, e.g., a non-metallic material.
  • the material may constitute an electrical break area that is configured to facilitate wireless charging (e.g., electromagnetic induction) technology.
  • the real asset and/or fixture may have at least one portion of a material that is configured for reduced blockage of (e.g., does not block) the electromagnetic field from penetrating therethrough from the charging device to the charged device.
  • the wireless charging station is in a framing portion supporting one or more display constructs.
  • the user may place his mobile device on a transom (e.g., near the wireless charging device) while watching the media, and the mobile device of the user may be (e.g., seamlessly) charged during that time.
  • the wireless charging may require placement of the mobile device adjacent to (e.g., on top of, beneath, or to the side of) the charging device.
  • Fig. 23 shows an example of a charging station embedded in a fixture of a facility.
  • Display construct 2331 (also indicated as #1) is disposed in a framing system having a mullion 2330 and a transom 2332.
  • the framing system holds the display construct 2331 and windows such as 2322 and 2323.
  • the transom 2332 includes a charging device in its interior in the area 2350, which charging device is coupled to the network (e.g., the same network to which the display construct 2331 is coupled).
  • Transom 2332 includes a wireless charging station in the exterior of transom 2332, in area 2350.
  • the area of the charging station may extend beyond 2350, e.g., depending on the charging capability (e.g., range) of the wireless charging device.
  • a user watching media displayed by display construct 2331 may rest the device to be charged (e.g., mobile device) on the transom while watching the displayed media, thus allowing seamless charging of the device to be charged (e.g., provided the device to be charged is configured for wireless charging).
  • the device to be charged can be wirelessly charged regardless of the user using the display construct.
  • At least one of the windows (e.g., 2322 and 2323) may or may not be tintable windows.
  • At least one of the windows (e.g., 2322 and 2323) may or may not be smart windows such as electrochromic windows.
  • the local network is operatively coupled to the wireless charging device.
  • the wireless charging may comprise inductive charging.
  • the wireless charging may be cordless charging.
  • the wireless charging may facilitate contactless (e.g., cordless) charging between the charging device and the device to be charged.
  • the wireless charging may be devoid of a requirement to make electrical contact with the charging device or any intermediary thereto (e.g., a docking station or a plug).
  • the wireless charging may facilitate wireless transfer of electrical power.
  • the wireless charging may utilize electromagnetic induction to provide electricity to devices to be charged, e.g., portable (e.g., transitory) devices.
  • the transitory device may comprise vehicles, power tools, electric dental equipment (e.g., toothbrushes), or other medical devices.
  • the portable device may or may not require precise alignment with the charging device (e.g., charging pad).
  • the wireless charging may transfer energy through inductive coupling.
  • the wireless charging may include passing an alternating current through an induction coil in the charging device (e.g., charging pad).
  • the wireless charging may include generating a magnetic field.
  • the magnetic field may fluctuate in strength (e.g., when an amplitude of the alternating electric current is fluctuating). This changing magnetic field may create an alternating electric current in an induction coil of the portable device (e.g., mobile device).
  • the alternating current in the induction coil may pass through a rectifier, e.g., to convert it to a direct current.
  • the direct current may charge a battery and/or provide operating power of the portable device (e.g., transitory circuitry).
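
As an illustrative aside (textbook electromagnetics, not a recitation of the original disclosure), Faraday's law of induction describes why a fluctuating, rather than static, magnetic field is what drives current in the receiver coil: the electromotive force induced in a coil of $N$ turns is

$$ \mathcal{E} \;=\; -\,N\,\frac{d\Phi_B}{dt}, $$

where $\Phi_B$ is the magnetic flux through the coil.
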
  • the wireless charging device (e.g., also referred to herein as a wireless charger or inductive charger) utilizes resonant inductive coupling.
  • the charging device may comprise a capacitor coupled, e.g., to one or more (e.g., to each of the) induction coils.
  • the addition of the capacitor may create two low current circuits with a (e.g., specific) resonance frequency.
  • the frequency of the alternating current may be matched with the resonance frequency.
  • the frequency may be chosen, e.g., depending on the distance requested for peak efficiency. For example, depending on the distance between the charging device and the designated placement of the device to be charged. For example, depending on the material(s) disposed between the charging device and the designated placement of the device to be charged.
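
As a further illustrative aside (standard LC-circuit theory, stated here as an assumption rather than as part of the original text), a coil of inductance $L$ paired with a capacitance $C$ resonates at

$$ f_0 \;=\; \frac{1}{2\pi\sqrt{LC}}, $$

so choosing $C$ sets the resonance frequency, and driving the transmitter at (or near) the receiver's $f_0$ maximizes power transfer at the intended working distance.
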
  • the charging device may comprise a movable transmission coil.
  • the charging device and/or device to be charged may comprise a receiver coil such as silver-plated copper or aluminum (e.g., to minimize weight and/or decrease resistance such as due to the skin effect).
  • the wireless charging device is a high power charging device.
  • the wireless charging device is a low power charging device.
  • the low power charging device may be configured to charge smali electronic devices such as celi phones, handheld devices, computers (e.g., iaptops).
  • the iow power charging device may be configured to charge at power ieveis of at most about 50watts (W), 100W, 150W, 200W, 250W, 300W, 350W, 400W, 450W, or 500W.
  • the iow power charging device may be configured to charge at power ieveis between any of the aforementioned power ieveis (e.g., from about 50 W to about 100 W, from about 1 QQW to about 500W, or from about SOW to about 5GGW).
  • the high power charging device may be configured to charge at power Ieveis of at least about 700W, one (1) kilowatt (KW), 10KW, 11KW, 100KW, 200KW, 300KW, or 500KW.
  • the high power charging device may be configured to charge at power Ieveis between any of the aforementioned power ieveis (e.g., from about 700 Wto about 500KW, from about 700Wto about 10KV, or from about 1KWto about 5QQKW).
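
A minimal sketch of the low/high classification recited above, in Python; the function name and structure are illustrative assumptions, and only the wattage thresholds come from the text:

```python
def classify_charger(power_watts: float) -> str:
    """Classify a wireless charging device by its charging power level."""
    if power_watts <= 500:     # low power: up to about 500 W
        return "low power"
    if power_watts >= 700:     # high power: about 700 W up to about 500 KW
        return "high power"
    return "unclassified"      # the recited ranges leave a gap between ~500 W and ~700 W

print(classify_charger(15))    # "low power"  (e.g., a phone charging pad)
print(classify_charger(11e3))  # "high power" (e.g., an 11 KW charger)
```
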
  • the wireless charging stations may provide advantages over wired charging stations. For example, in wireless charging there is a lower risk of electrical faults such as due to corrosion, electrocution, and wiring tangling. For example, in wireless charging there is an absence of wear and tear damage of electrical connectors, sockets, and/or wiring, e.g., since no wiring and connection is required between the charging device and the device to be charged. For example, wireless charging offers an increased usage convenience and/or facility aesthetics. For example, wireless charging offers convenient frequent charging. The wireless charging may allow for dynamic charging, e.g., charging the mobile device while it is in motion (e.g., depending on the capacity of the charging device).
  • wireless charging may reduce the infection risk, e.g., by eliminating a requirement to connect to electricity outlets and/or wiring used by multiple users.
  • the charging speed can be of wireless power transfer (WPT) class 1, 2, or 3, e.g., as defined by the Society of Automotive Engineers (SAE) International.
  • the wireless charging may be at a distance of at most about 1 centimeter (cm), 2cm, 4cm, 5cm, 8cm, 10cm, 25cm, 50cm, 75cm, 100cm, 250cm, 500cm, 750cm, 900cm, or 1000cm from the charging device.
  • the wireless charging may be at a distance from the charging device between any of the aforementioned values (e.g., from about 1cm to about 10cm, from about 1cm to about 50cm, from about 1cm to about 100cm, or from about 1cm to about 1000cm).
  • the wireless charging may be at a distance of at most about 1 inch (”), …, or 12” from the charging device.
  • the wireless charging may be at a distance from the charging device between any of the aforementioned values (e.g., from about 1” to about 12”).
  • the wireless charging may be at a distance of at most about 5 feet (’), 10’, 20’, 30’, 40’, or 50’ from the charging device.
  • the wireless charging may be at a distance from the charging device between any of the aforementioned values (e.g., from about 5’ to about 50’).
  • the charging device may abide by at least one standard (e.g., protocol) accepted in the jurisdiction.
  • the standard may comprise the Qi or Power Matters Alliance (PMA) standard.
  • the standard may comprise the SAE J1773 (Magne Charge), SAE J2954, AirFuel Alliance, Alliance for Wireless Power (A4WP, or Rezence), or ISO 15118 standard.
  • the standard may define a frequency and/or a connection protocol.
  • the charging device may be configured to comply with a plurality of (e.g., all) standards accepted in the jurisdiction.
  • the standard may be an open interface standard.
  • the standard may be a wireless power transfer standard.
  • the standard may be a Wireless Power Consortium standard.
  • the standard may be an Institute of Electrical and Electronics Engineers standard.
  • the standard may be an AirFuel alliance standard (e.g., combining A4WP and PMA).
  • the standard may be a road vehicle standard.
  • the charging device is operatively coupled to the network and/or control system of the facility.
  • the charging device may be controlled by the control system.
  • the control system may schedule shutting off or on the charging device.
  • the control system may control the operating mode of the charging device.
  • the control system controlling the charging device may be integrated with, or separate from, the control system of the facility.
  • the charging device may be additionally or alternatively manually controlled (e.g., by a user), e.g., through an application module.
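
A minimal sketch of scheduled on/off control of a charging device by a control system; every name here (ChargerController, should_be_on, the ID string) is invented for illustration and is not part of the original disclosure:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ChargerController:
    """Toy model of a control-system node managing one wireless charger."""
    charger_id: str
    on_at: time           # scheduled power-on (e.g., start of business hours)
    off_at: time          # scheduled power-off (e.g., end of business hours)
    mode: str = "normal"  # operating mode (e.g., "normal", "off")

    def should_be_on(self, now: time) -> bool:
        # The control system shuts the charger off outside its scheduled window.
        return self.on_at <= now < self.off_at

ctrl = ChargerController("transom-2350", time(7, 0), time(19, 0))
print(ctrl.should_be_on(time(12, 30)))  # True  -> charger enabled
print(ctrl.should_be_on(time(23, 0)))   # False -> charger disabled
```

A manual override (e.g., from the application module's GUI) could simply bypass should_be_on().
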
  • the application module of the charging device may comprise a graphic user interface (GUI).
  • the application module may be configured to receive user input.
  • the application module may be configured for installation on the device to be charged.
  • the application module may be configured for installation on a device coupled to the network of the facility.
  • the charging device may be discoverable by the network.
  • the network may be operatively (e.g., communicatively) coupled to a Building Information Modeling (BIM) file (e.g., Revit file) of the facility.
  • Location and/or status of the charging device(s) coupled to the network may be updated (e.g., intermittently or in real time) to the network, e.g., and to the BIM file.
  • the application module may indicate the location, operational mode, and/or status of the charging device.
  • the GUI may depict a location, operational mode, and/or status of the charging device in the BIM file of the facility.
  • the GUI may indicate the location of the user and/or device to be charged, which location is with respect to the facility (e.g., to an enclosure such as a room of the facility), for example, as depicted in the BIM file.
  • the GUI may show a simplified version (e.g., with a lower and/or select level of detail) relative to the details available in the BIM file.
  • the application module may show fixtures and select devices (e.g., one or more charging devices and/or one or more media displays) of the facility.
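
A sketch of the kind of status record such an application module might publish to the network and mirror into the BIM file; the schema and all field names are assumptions for illustration, not part of the original disclosure:

```python
import json

charger_status = {
    "device_id": "charger-2350",        # hypothetical identifier
    "location": {"floor": 3, "enclosure": "room-A", "fixture": "transom"},
    "operational_mode": "normal",       # e.g., "normal", "off", "fault"
    "status": "charging",               # e.g., "idle", "charging"
    "updated": "2022-04-14T10:32:00Z",  # intermittent or real-time updates
}
print(json.dumps(charger_status, indent=2))  # as pushed to the network/BIM
```
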
  • dynamic mobility may comprise, e.g., vertical movement relative to a body of a digital collaboration unit, to a framing of a digital collaboration unit, and/or to a gravitational center.
  • the portions of a digital collaboration unit may comprise a physical work surface, media display, lighting, transparent panel, cabling and/or a fixed accessory.
  • the digital collaboration unit may comprise a free-standing collaboration unit, a digital collaboration unit coupled to an exterior fixture (e.g., wall or window), small group pods, or digital collaboration units disposed in large open areas in the form of a matrix (e.g., a display matrix such as one functioning as a video wall).
  • At least one, some and/or all portions of a digital collaboration unit that have dynamic mobility may be mounted to a body (e.g., panel) and/or framing of a digital collaboration unit.
  • a media display with dynamic mobility may be coupled to a ceiling of a facility, with an ability to slide or pivot (e.g., vertically) into position relative to a digital collaboration unit.
  • a dynamically moveable media display coupled to a ceiling may comprise guide railings and/or magnetic support.
  • the movement may be actuated by a physical force comprising magnetic or electrical force.
  • the movement may be actuated by an actuator (e.g., motor).
  • the movement may be actuated by an attractive force (e.g., a magnetic force).
  • At least one, some and/or all portions of a digital collaboration unit that have dynamic mobility may have movement that is coupled (e.g., move together in unison).
  • the dynamic mobility may be controlled (e.g., using at least one controller such as any of the controller or control systems disclosed herein).
  • Some and/or all portions of a digital collaboration unit that have dynamic mobility may engage a mechanism to cause movement of multiple portions in unison (e.g., a coupled, and/or concerted movement).
  • At least one, some and/or all portions of a digital collaboration unit that have dynamic mobility may have independent movement (e.g., not coupled, non-concerted, and/or separate).
  • At least one, some and/or all portions of a digital collaboration unit that have dynamic mobility may have automatic movement (e.g., using sensors that sense physical characteristics (e.g., bodily features of a local user) and/or historic preferences of a local user to determine adjustment); a sketch of such automatic adjustment is given below.
  • the physical bodily features of a local user may comprise a nose, eyebrows, eyes, pupils, a head, a chin, lips, a nose bridge, or ears.
  • Some and/or all portions of a digital collaboration unit that have dynamic mobility may have manual movement.
  • Some and/or all portions of a digital collaboration unit that have dynamic mobility may have automatic movement, e.g., with manual override.
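
A minimal sketch of automatic position adjustment with manual override, assuming a sensor-derived eye-height estimate for the local user; all names and the offset value are invented for illustration:

```python
from typing import Optional

DISPLAY_OFFSET_M = -0.10  # assumed: display center sits slightly below eye level

def target_display_height(eye_height_m: float,
                          manual_override_m: Optional[float] = None) -> float:
    """Return a target mounting height (meters) for a movable portion."""
    if manual_override_m is not None:        # manual movement overrides automatic movement
        return manual_override_m
    return eye_height_m + DISPLAY_OFFSET_M   # sensor-driven automatic adjustment

print(target_display_height(1.60))        # automatic adjustment -> 1.5
print(target_display_height(1.60, 1.35))  # manual override      -> 1.35
```
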
  • a virtual ledge may appear in a transparent display and retain its position to, in effect, move with the local media display and/or physical work surface.
  • a digital collaboration unit includes multiple image sensors (e.g., video cameras) mounted on at least two remote portions of the digital collaboration unit and/or a media display.
  • the at least two remote portions may include corners on opposite sides of a body and/or any framing and/or a transparent panel.
  • the multiple image sensors may be employed to produce a stereo image of a local user of the digital collaboration unit.
  • the stereo image may be streamed to a remote user.
  • the multiple image sensors may have dynamic mobility, providing movement (e.g., automatic and/or manual) that may adjust for physical characteristics and/or historic preferences of a local user.
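
As an illustration of producing a stereo (depth) image from two such image sensors, a minimal OpenCV sketch; the file names and block-matching parameters are assumptions, and the original does not specify any particular stereo algorithm:

```python
import cv2

# Frames from two image sensors mounted on remote portions of the unit
# (e.g., opposite corners); the file names are placeholders.
left = cv2.imread("left_camera.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_camera.png", cv2.IMREAD_GRAYSCALE)

# Block matching across the two views yields a disparity map, from which
# depth follows given the sensors' baseline and focal length.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

out = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", out)  # e.g., to be streamed to the remote user
```
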
  • a digital collaboration unit includes a movable wall that allows expansion, contraction, opening and/or confining the unit.
  • a movable partition may comprise, e.g., a panel such as a wall and/or a door.
  • a moveable partition may be partially or fully opaque.
  • a movable partition may comprise tintable glass.
  • a movable partition may be mounted (e.g., via hinge) to a body and/or framing of a digital collaboration unit, e.g., to allow for movement (e.g., pivoting about a hinge).
  • a movable partition may be mounted to a body and/or framing of the digital collaboration unit, e.g., to allow for (e.g., accordion style) expanding and contracting the unit.
  • a moveable partition may be partially or fully opaque and/or comprise tintable glass, e.g., to limit viewing into the digital collaboration unit from outside of the digital collaboration unit and/or from an adjacent digital collaboration unit (e.g., as part of a group pod).
  • a moveable partition may comprise a split door with or without a media display on (e.g., a top portion of) the moveable partition (e.g., wall).
  • a moveable partition may comprise, e.g., a wall and/or supportive panels.
  • Fig. 24 shows an example of digital collaboration units (e.g., group pods) 2400.
  • a body of a digital collaboration unit 2400 comprises supportive panels 2410, which may be transparent and/or partially opaque and/or comprise tintable windows.
  • Adjacent digital collaboration units 2400 have a connecting (common) wall 2415, which may be transparent and/or partially or fully opaque and/or comprise tintable windows.
  • a media display 2420 may be movable relative to a body 2405 and/or framing of the digital coiiaboration unit 2400 (e.g., in direction of arrows 2425) and/or movable relative to a gravitational center G.
  • Cabling coupled to the media display 2420 may move with the media display 2420.
  • a position of the transparent media display 2420 may be automatically adjusted based at least in part on a height and/or position of a local user 2430, which may be based at least in part on bodily features (e.g., a nose, eyebrows, eyes, pupils, a head, a chin, lips, a nose bridge, or ears) of a local user such as 2430 and/or historic preferences of the local user such as 2430.
  • a position of the transparent media display 2420 may be manually adjustable by a local user such as 2430.
  • Supportive panels 2410 may comprise sound dampening materials, e.g., to provide a quiet space for conducting a conference and/or to limit propagation of sound outside digital collaboration unit 2400.
  • Supportive panels may comprise a moveable partition (e.g., wall and/or door) 2435 mounted (e.g., with a hinge) for pivoting (e.g., in the direction of arrow 2440 and vice versa, between a first position, shown in solid lines in Fig. 24, and a second position, shown in dashed lines) to a supportive panel 2410 and/or connecting wall 2415, which moveable wall 2435 may comprise sound dampening materials.
  • the moveable wall 2435 may pivot to allow for ingress and egress of the digital collaboration unit 2400.
  • a physical work surface (e.g., a table ledge) 2445 in front of media display 2420 can be duplicated at the remote location(s) (e.g., when collaborating with users having a similarly constructed portal).
  • the physical work surface may enhance an immersive experience and/or enhance convenience generally.
  • the physical work surface 2445 may be movable relative to a body and/or framing (e.g., supportive panels 2410) of the digital collaboration unit 2400 (e.g., in direction of arrows 2450) and/or movable relative to a gravitational center G.
  • a position of the physical work surface 2445 may be automatically adjusted based at least in part on a height and/or position of a local user 2430, which may be based at least in part on bodily features of a local user 2430 and/or historic preferences of a local user 2430.
  • a position of the physical work surface 2445 may be manually adjustable by a local user 2430.
  • a position of the physical work surface 2445 may be automatically adjustable, e.g., with a manual override.
  • a transparent media display 2420 and physical work surface 2445 may be secured to each other to move in unison. At least one of transparent media display 2420 and physical work surface 2445 may (e.g., each) engage a mechanism that moves them, e.g., separately or in unison.
  • a transparent media display 2420 and/or physical work surface 2445 may each be moveable without movement of the other (e.g., separate movement and/or non-coordinated movement).
  • the physical work surface 2445 may comprise fixed accessories (not shown) coupled thereto.
  • the transparent media display 2420 may comprise fixed accessories (not shown) coupled thereto.
  • Fixed accessories coupled to the physical work surface 2445 and/or the transparent media display 2420 may be movable (e.g., up-down) relative to a supportive panel 2410, relative to framing of the digital collaboration unit 2400 (e.g., in unison with the physical work surface 2445), relative to the transparent media display 2420, and/or relative to the gravitational center G. Cabling coupled to the fixed accessories may move in association with the respective fixed accessory.
  • Lighting 2455 may be provided, e.g., to help ensure a good quality media stream is obtained by one or more image sensors (e.g., cameras) 2460.
  • the lighting 2455 may be movable relative to the supportive panels 2410 (e.g., in a direction indicated by arrows 2465), movable relative to framing of the digital collaboration unit 2400, and/or movable relative to a gravitational center G.
  • a position of the lighting 2455 may be automatically adjusted based at least in part on a height and/or position of local user such as 2430, which may be based at least in part on bodily features of local user such as 2430 and/or historic preferences of local user such as 2430.
  • a position of the lighting 2455 may be manually adjustable by local user such as 2430.
  • a position of the lighting 2455 may be automatically adjustable, e.g., with a manual override.
  • a transparent media display 2420 and lighting 2455 may be secured to each other to move in unison.
  • a transparent media display 2420 and lighting 2455 may each engage a mechanism that moves them in unison.
  • a transparent media display 2420, physical work surface 2445 and lighting 2455 may be secured to each other to facilitate their concerted (e.g., unison) movement.
  • a transparent media display 2420, physical work surface 2445 and lighting 2455 may each engage a mechanism that moves them in unison or separately.
  • At least two of: (i) transparent media display 2420, (ii) physical work surface 2445, and (iii) lighting 2455, may be moveable separately (e.g., in a non-coordinated fashion, in a non-unified fashion).
  • a loudspeaker may provide sound output, and/or a personal headphone or earphone (not shown) can be provided with audio content (e.g., using a Bluetooth connection, or wired).
  • the multiple image sensors 2460 may be employed to produce a stereo image of a local user 2430 of the digital collaboration unit 2400.
  • the sensors 2460 may be configured to generate the stereo image.
  • the stereo image may be streamed to a remote user, e.g., as part of a digital collaboration session.
  • the multiple image sensors 2460 may be movable, e.g., (i) relative to the supportive panels 2410 (e.g., in a direction indicated by arrows 2470), (ii) relative to framing of the digital collaboration unit 2400, and/or (iii) relative to a gravitational center G.
  • the sensors may be stationary.
  • a position of the image sensors 2460 may be automatically adjusted based at least in part on a height and/or position of a local user 2430, which may be based at least in part on bodily features of local user 2430 and/or historic preferences of local user 2430.
  • a position of the image sensors 2460 may be manually adjustable by local user 2430.
  • a position of the image sensors 2460 may be automatically adjustable, e.g., with a manual override.
  • a transparent media display 2420 and image sensors 2460 may be secured to each other to move in unison (e.g., in concert).
  • a transparent media display 2420 and image sensors 2460 may (e.g., each) engage a mechanism that moves them, e.g., separately or in unison.
  • At least two of transparent media display 2420, physical work surface 2445, lighting 2455, and image sensors 2460 may be operatively coupled (e.g., secured to each other and/or electronically coupled) to move in unison.
  • At least two of transparent media display 2420, physical work surface 2445, lighting 2455, and image sensors 2460 may engage a mechanism that moves them in unison. At least two of transparent media display 2420, physical work surface 2445, lighting 2455, and image sensors 2460 may be separately moveable (e.g., movable without movement of the other(s)). Wiring of the network (e.g., power and/or communication) may run through the body 2405 of digital collaboration unit 2400, such as supportive panel 2410, and connect to the network via a connector (e.g., disposed on the floor, wall, or ceiling of the facility). Digital collaboration unit 2400 may be operatively coupled to the network (e.g., external network and/or local network of the facility).
  • one or more digital collaboration units are operatively coupled to at least one controller such as part of a control system (e.g., forming a smart building platform).
  • the control system may comprise various applications (apps) (e.g., apps for operating media displays, apps for personalized health management, apps for edge computing, and/or apps for communication within or outside of a facility).
  • the apps may run on an operating system.
  • a control system may comprise operating system distributed containers (e.g., building information management connections and files, digital security to prevent unauthorized access, and/or artificial intelligence/machine learning capabilities).
  • the at least one controller may be operatively coupled to the network.
  • the operating system may run on a network.
  • the at least one controller may operatively couple to the network having network nodes and/or connections (e.g., power supply, data storage, data transfer (wired and/or wireless), and/or computing capability/processors).
  • the network may be operatively coupled to various nodes such as one or more devices (e.g., sensors, emitters, or transceivers).
  • the transceivers may comprise radios (e.g., UWB radios).
  • the network may be a secured network (e.g., offer digital security).
  • the network may be operatively coupled to an artificial intelligence (e.g., a machine learning) module (e.g., computational scheme).
  • the network may offer cellular communication (e.g., abiding by at least a 4th generation or a 5th generation cellular communication protocol).
  • Fig. 25 shows an example of a control system, associated network, and associated devices, that may be used in connection with a collaborative digital communication (e.g., video conference) session between local and remote participants, e.g., using digital collaboration units as discussed herein.
  • the control system is operatively coupled to a network portion 2530 having network nodes and/or connections for communication and/or power supply.
  • Power 2531 (e.g., 24 volt DC) may be supplied over the network portion 2530.
  • Communication data 2532 is transferred over the network (e.g., via Gbps linear coax Ethernet and/or fiber optics).
  • the network portion 2530 is operatively coupled to controllers and/or processors.
  • the control system may comprise, or be operatively coupled to, a facility operating system (OS) portion 2520.
  • OS facility operating system
  • various sensors and/or other network nodes 2521 comprise part of the facility operating system 2520.
  • Digital security 2522 (e.g., TLS v1.2, 128AES-256SHA crypto) may comprise part of the facility operating system 2520.
  • the digital security 2522 may limit access to the network while allowing for edge computing.
  • Artificial intelligence (AI)/machine learning (RNN, MRT, AER, TLSG, RTLS) 2523 may comprise part of the facility operating system 2520.
  • the control system may comprise applications (apps) 2510 that run on the facility operating system (e.g., applications for operating media displays 2511, applications for personalized health management 2512, applications allowing for edge computing 2513, and/or applications for communication within or outside of a facility 2514).
  • Fig. 26 shows an example of a digital collaboration unit 2600 having a supportive structure (e.g., comprising a window, a wall, framing, a body, a transparent panel, an opaque panel, or a base) 2604.
  • the digital collaboration unit 2600 comprises a physical work surface (e.g., a physical ledge) 2602 that can be moveable (e.g., in the direction of arrows 2603) relative to the supportive structure 2604, and/or to a fixture such as a floor, a ceiling, and/or a wall.
  • the physical work surface 2602 is supported by the supportive structure 2604.
  • the digital collaboration unit 2600 comprises a (e.g., transparent) display 2601, which may be in communication with a network and used for conducting immersive video interactions with remote users.
  • Fig. 26 shows an example of a digital collaboration unit 2650 having a supportive structure (e.g., comprising a window, a wall, framing, a body, a transparent panel, an opaque panel, or a base) 2665.
  • the digital collaboration unit 2650 comprises a physical work surface (e.g., a physical ledge) 2653.
  • the physical work surface 2653 is supported by the supportive structure 2665.
  • the digital collaboration unit 2650 comprises a (e.g., transparent) display 2651, which may be in communication with a network and used for conducting immersive video interactions with remote users.
  • the display 2651 may be moveable (e.g., in the direction of arrows 2652) relative to the supportive structure 2665.
  • the (e.g., transparent) display may be supported by a window, a wall, framing, a body, a transparent panel, a base and/or a ceiling.
  • Fig. 27 shows an example of operations that may be performed in connection with a collaborative digital communication session (e.g., to facilitate a video conference session, or collaborative online streaming such as movie streaming) between remote participants.
  • a user enters a digital collaboration unit 2701.
  • One or more media display(s), one or more physical work surface(s), lighting, cabling, fixed accessories, and/or other portions of the digital collaboration unit may be adjusted (e.g., moved) 2702.
  • the adjustment(s) may be automatic, manual, or a combination of automatic and manual.
  • Digital collaboration with a remote user is conducted 2703.
  • Fig. 28 shows an example of operations that may be performed in connection with a collaborative digital communication (e.g., video conference) session between remote participants.
  • a user enters a digital collaboration unit 2801.
  • One or more portions of the digital collaboration unit may be adjusted (e.g., moved) to hinder viewing onto the media display(s) and/or onto the user disposed in the digital collaboration unit 2802.
  • Digital collaboration with a remote user is conducted 2803.
  • a digital collaboration unit comprises item(s): (i) a display, (ii) a physical working area, (iii) lighting, and/or (iv) any other fixed accessory.
  • a digital collaboration unit may comprise a display (e.g., media display such as a display construct) that is stationary or (e.g., vertically) movable.
  • a digital collaboration unit may comprise a physical working area (e.g., a physical ledge) that is stationary or (e.g., vertically) movable.
  • a digital collaboration unit may comprise lighting that is stationary or (e.g., vertically) movable.
  • a digital collaboration unit may comprise any other fixed accessory that is stationary or (e.g., vertically) movable.
  • a digital collaboration unit may comprise items (i) a display, (ii) a physical working area, (iii) lighting, or (iv) any other fixed accessory, in which at least two of the items (i), (ii), (iii), and (iv) are stationary or movable. At least two of the items includes all of the items. Movable can be controllably movable (e.g., automatically with the aid of sensor(s), or manually with the aid of a user input). When at least two of the items (i), (ii), (iii), and (iv) are movable in the digital collaboration unit, they can be movable together (e.g., in concert, in tandem, as a coordinated movement) or separately (e.g., individually, in a non-coordinated fashion, not in tandem); an illustrative sketch of such coordinated versus separate control follows this list.
  • the movement is facilitated (e.g., controlled) by at least one controller.
  • the at least one controller can be part of, or operatively coupled to, any of the control systems disclosed herein such as a control system that controls the facility in which the digital collaboration unit is disposed.
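
A sketch of coordinated versus separate movement of items (i)-(iv) under one controller; all class and method names are invented for illustration and are not part of the original disclosure:

```python
class MovablePortion:
    """One movable item, e.g., a display, work surface, lighting, or accessory."""
    def __init__(self, name: str, height_m: float):
        self.name, self.height_m = name, height_m

    def move_by(self, delta_m: float) -> None:
        self.height_m += delta_m

class UnitController:
    """Toy controller for the movable portions of a digital collaboration unit."""
    def __init__(self, portions):
        self.portions = {p.name: p for p in portions}

    def move_in_unison(self, delta_m: float) -> None:
        for p in self.portions.values():  # coordinated (concerted) movement
            p.move_by(delta_m)

    def move_one(self, name: str, delta_m: float) -> None:
        self.portions[name].move_by(delta_m)  # independent movement

ctrl = UnitController([MovablePortion("display", 1.5),
                       MovablePortion("work_surface", 0.9),
                       MovablePortion("lighting", 2.1),
                       MovablePortion("image_sensors", 1.6)])
ctrl.move_in_unison(0.05)         # e.g., raise all portions for a taller user
ctrl.move_one("lighting", -0.10)  # e.g., lower only the lighting
```
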

Abstract

An immersive digital experience for video conferencing simulates the co-presence of a virtual participant in a local environment. Such simulation can include (i) using a transparent media display in which a portion of the pixels project the body image of the virtual participant while keeping at least a portion of the background transparent (e.g., to visible light), (ii) disposing one or more sensors (e.g., a camera) behind the transparent media display at the eye level of the participant, and/or (iii) using virtual overlays (e.g., plants, memorabilia, and/or furniture) added to the virtual image (e.g., that are compatible with the local environment), e.g., to provide a sense of depth extending from the overlays to the virtual participant projection and to the background showing through the transparent media display.
PCT/US2022/024812 2021-04-15 2022-04-14 Immersive collaboration of remote participants via media displays WO2022221532A1 (fr)

Applications Claiming Priority (26)

Application Number Priority Date Filing Date Title
PCT/US2021/027418 WO2021211798A1 (fr) 2020-04-16 2021-04-15 Interaction between an enclosure and one or more occupants
USPCT/US2021/027418 2021-04-15
US202163181648P 2021-04-29 2021-04-29
US63/181,648 2021-04-29
US17/300,303 US20210383804A1 (en) 2016-04-26 2021-04-29 Immersive collaboration of remote participants via media displays
US17/300,303 2021-04-29
US17/313,760 US20230103284A9 (en) 2016-04-26 2021-05-06 Immersive collaboration of remote participants via media displays
US17/313,760 2021-05-06
US17/338,562 2021-06-03
US17/338,562 US11231633B2 (en) 2017-04-26 2021-06-03 Displays for tintable windows
US202163211400P 2021-06-16 2021-06-16
US63/211,400 2021-06-16
US202163212483P 2021-06-18 2021-06-18
US63/212,483 2021-06-18
US202163246770P 2021-09-21 2021-09-21
US63/246,770 2021-09-21
US202163247684P 2021-09-23 2021-09-23
US63/247,684 2021-09-23
USPCT/US2021/052597 2021-09-29
PCT/US2021/052595 WO2022072461A2 (fr) 2020-09-30 2021-09-29 Configuration associated with a media display of a facility
USPCT/US2021/052587 2021-09-29
PCT/US2021/052587 WO2022072454A1 (fr) 2020-09-30 2021-09-29 Display construct for media projection and wireless charging
PCT/US2021/052597 WO2022072462A1 (fr) 2020-09-30 2021-09-29 Display construct and framing for media projection
USPCT/US2021/052595 2021-09-29
US202163255679P 2021-10-14 2021-10-14
US63/255,679 2021-10-14

Publications (1)

Publication Number Publication Date
WO2022221532A1 (fr)

Family

ID=83640808

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/024812 WO2022221532A1 (fr) 2021-04-15 2022-04-14 Immersive collaboration of remote participants via media displays

Country Status (1)

Country Link
WO (1) WO2022221532A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115859335A (zh) * 2023-02-03 2023-03-28 合肥科颖医药科技有限公司 Remote information data access method and system based on remote control technology

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120092921A (ko) * 2011-02-14 2012-08-22 김영대 Method and apparatus for virtual classroom lecturing
US20170085834A1 (en) * 2015-09-23 2017-03-23 Samsung Electronics Co., Ltd. Video telephony system, image display apparatus, driving method of image display apparatus, method for generating realistic image, and non-transitory computer readable recording medium
US20170264865A1 (en) * 2015-05-29 2017-09-14 Boe Technology Group Co., Ltd. Display device and video communication terminal
US20200045261A1 (en) * 2018-08-06 2020-02-06 Microsoft Technology Licensing, Llc Gaze-correct video conferencing systems and methods
US20210021788A1 (en) * 2014-09-25 2021-01-21 Steve H. McNelley Immersive communication terminals
US20210383804A1 (en) * 2016-04-26 2021-12-09 View, Inc. Immersive collaboration of remote participants via media displays
US20210390953A1 (en) * 2016-04-26 2021-12-16 View, Inc. Immersive collaboration of remote participants via media displays

Similar Documents

Publication Publication Date Title
US20210390953A1 (en) Immersive collaboration of remote participants via media displays
US20210383804A1 (en) Immersive collaboration of remote participants via media displays
WO2021211798A1 (fr) Interaction between an enclosure and one or more occupants
US20220179275A1 (en) Building network
US20200150508A1 (en) Building network
KR20230162150A (ko) Tandem vision window and media display
CA3066285A1 (fr) Edge network for building services
US11868019B2 (en) Tandem vision window and media display
US11892738B2 (en) Tandem vision window and media display
US11747698B2 (en) Tandem vision window and media display
WO2022221532A1 (fr) Immersive collaboration of remote participants via media displays
US20230132451A1 (en) Interaction between an enclosure and one or more occupants
TW202217421A (zh) Configuration associated with media display of a facility
WO2023003877A1 (fr) Control of one or more devices in a vehicle
TW202246865A (zh) Immersive collaboration of remote participants via media displays
US20240085754A1 (en) Display construct for media projection and wireless charging
US20240135930A1 (en) Behavior recognition in an enclosure
WO2022178150A1 (fr) Behavior recognition in an enclosure
US20230350260A1 (en) Configuration associated with media display of a facility
TW202142939A (zh) Tandem vision window and media display
US20230324952A1 (en) Display construct and framing for media projection
WO2023034839A1 (fr) Occupant-centric predictive control of devices in facilities
CN114730117A (zh) Tandem vision window and media display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22788935

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18555129

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22788935

Country of ref document: EP

Kind code of ref document: A1