WO2023133151A1 - Systems and methods for visual scene monitoring - Google Patents

Systems and methods for visual scene monitoring

Info

Publication number
WO2023133151A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
experience
sensors
sensor data
data
Application number
PCT/US2023/010131
Other languages
French (fr)
Inventor
Aaron Chandler JEROMIN
Akiva Meir Krauthamer
Jeremy Seth Scheinberg
Elam Kevin HERTZLER
Original Assignee
Universal City Studios LLC
Priority claimed from US 18/090,932 (see US20230213925A1)
Application filed by Universal City Studios LLC
Publication of WO2023133151A1


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63G: MERRY-GO-ROUNDS; SWINGS; ROCKING-HORSES; CHUTES; SWITCHBACKS; SIMILAR DEVICES FOR PUBLIC AMUSEMENT
    • A63G 31/00: Amusement arrangements

Definitions

  • Amusement parks and other entertainment venues contain experiences (e.g., ride vehicle experiences, animated figures (e.g., robotic characters), scenes, attractions and so on) to entertain park guests.
  • the components within the experience may benefit from more robust monitoring and maintenance to ensure the safety of the park guests and optimal or improved operation of the experience.
  • a method of monitoring an amusement park experience may include receiving, via multiple sensors, multiple layers of first sensor data indicative of characteristics of the experience.
  • the method may generate a profile of the experience based on the first sensor data, wherein the profile includes a baseline and a threshold indicating an acceptable range of the characteristics.
  • the method may receive second sensor data and third sensor data via the multiple sensors.
  • the method may determine, in response to identifying characteristics of the second sensor data that deviate from the baseline but do not exceed the threshold, that the experience is operating properly.
  • the method may also, in response to identifying characteristics of the third sensor data that deviate from the baseline and exceed the threshold, perform a corrective action.
  • a system may include a network and one or more communication hubs communicatively coupled with one another via the network.
  • the system may also include a ride vehicle including one or more ride vehicle sensors, where the one or more ride vehicle sensors are communicatively coupled to at least one of the one or more communication hubs.
  • the system may include an animated figure, the animated figure including an actuator and an actuator sensor, wherein the actuator sensor is communicatively coupled to at least one of the one or more communication hubs.
  • the system may also include a controller that may receive data from the one or more ride vehicle sensors and data from the actuator sensor via the one or more communication hubs.
  • the controller may adjust the actuator and control the ride vehicle based on a discrepancy between a baseline relative value and a currently detected relative value of the data from the one or more ride vehicle sensors and the data from the actuator sensor.
  • one or more tangible, non-transitory, computer-readable media comprises instructions that, when executed by at least one processor, cause the at least one processor to receive, via multiple sensors, first sensor data indicative of characteristics of an amusement park experience.
  • the processor may generate a profile of the amusement park experience based on the first sensor data, and the profile may include a baseline and a threshold indicating an acceptable range of the characteristics.
  • the processor may receive second sensor data and third sensor data via the multiple sensors and determine, in response to identifying characteristics of the second sensor data that deviate from the baseline but do not exceed the threshold, that the experience is operating properly. However, in response to identifying characteristics of the third sensor data that deviate from the baseline and exceed the threshold, the processor may perform a corrective action.
  • FIG. 1 is a block diagram of an anomaly detection and monitoring system, in accordance with an aspect of the present disclosure
  • FIG. 2 is a flowchart of a method for detecting an anomaly related to an experience and performing a corrective action utilizing the system of FIG. 1, in accordance with an aspect of the present disclosure
  • FIG. 3 illustrates an experience in which the system of FIG. 1 may be utilized, in accordance with an aspect of the present disclosure
  • FIG. 4 is a block diagram illustrating a system of unlinked localized sensor networks, in accordance with an aspect of the present disclosure.
  • FIG. 5 is a block diagram illustrating a network of interconnected localized sensors and devices, in accordance with an aspect of the present disclosure.
  • Such conventional systems may fail to detect certain maintenance and/or user experience issues due to human error, or may fail to detect maintenance and/or user experience issues due to a lack of automated monitoring (e.g., via sensors, cameras, and so on).
  • a system utilizing a multifaceted, layered, globally-connected network of sensors, multidimensional models, and machine learning engines may effectuate a more robust, accurate, responsive, and intelligent monitoring and maintenance system for amusement park attractions.
  • present embodiments are generally directed to a method and system for automated monitoring, anomaly detection, correction, mitigation, and/or predictive maintenance in theme park experiences. Specifically, present embodiments are directed to collecting, via a network of communicatively coupled sensors, data on a variety of aspects of and components within an experience, using machine learning to detect and/or predict quantitative and qualitative issues, and dynamically addressing the anomaly and/or triggering an alert regarding the anomaly.
  • Visual aspects, control features, audio, aspects of the animated figures, and monitoring and maintenance components and/or aspects are prepared in accordance with present techniques to provide multifaceted and layered monitoring and maintenance of the experience. Multifaceted and layered monitoring and maintenance provides robust and intelligent monitoring of the experience.
  • Layering is achieved at least in part by utilizing a number of data sources or streams (e.g., audiovisual data, infrared data, social media data and so on) to detect, predict, and/or correct for subtle anomalies that may nevertheless impact the quality of a guest’s enjoyment of the experience.
  • control instructions from an artificial intelligence (AI) or machine learning engine may coordinate sensor data, equipment performance and the like to detect and address (e.g., correct or mitigate the effects of) an anomaly.
  • numerous and variable iterative routines are included to gradually improve aspects of the monitoring and maintenance system.
  • Procedures, in accordance with the present disclosure, for monitoring an amusement park experience include various different steps and procedural aspects. Some of these steps or procedures may be performed in parallel or in varying different orders. Some steps may be processor-based operations and may involve controlled equipment (e.g., actuators). Further, some procedures may be iteratively performed to achieve a desired outcome. Accordingly, while various different procedural steps may be discussed in a particular order herein, the procedural steps may not necessarily be performed in the order of introduction, as set forth by the present disclosure. While some specific steps of an operation may necessarily occur before other specific steps (e.g., as dictated by logic), the listing of certain orders of operation is primarily provided to facilitate discussion.
  • indicating that a first step or a beginning step includes a particular operation is not intended to limit the scope of the disclosure to such initial steps. Rather, it should be understood that additional steps may be performed, certain steps may be omitted, referenced steps may be performed in an alternative order or in parallel where appropriate, and so forth. However, disclosed orders of operation may be limiting when indicated as such.
  • FIG. 1 is a block diagram of an anomaly detection and monitoring system (e.g., the system) 100, in accordance with the present disclosure.
  • the system 100 includes an electronic device 102, which may be in the form of any suitable electronic computing device, such as a computer, laptop, personal computer, server, mobile device, smartphone, tablet, wearable device, and so on.
  • the electronic device 102 may include a controller 104 that includes one or more processors 106 and one or more memory and/or storage devices 108.
  • the one or more processors 106 may execute software programs and/or instructions (e.g., stored in the memory 108) to facilitate determining likelihoods and/or occurrences of changes in amusement experiences (e.g., relative to a baseline experience) and adjusting control accordingly.
  • the one or more processors 106 may include multiple microprocessors, one or more “general-purpose” microprocessors, one or more special-purpose microprocessors, and/or one or more application specific integrated circuits (ASICs), or some combination thereof.
  • the one or more processors 106 may include one or more reduced instruction set computer (RISC) processors.
  • the one or more memory devices 108 may store information such as control software, lookup tables, configuration data, and so on.
  • the one or more processors 106 and/or the one or more memory devices 108 may be external to the controller 104 and/or the electronic device 102.
  • the one or more memory devices 108 may include a tangible, non-transitory, machine-readable medium, such as a volatile memory (e.g., a random access memory (RAM)) and/or a nonvolatile memory (e.g., a read-only memory (ROM)).
  • the one or more memory devices 108 may store a variety of information and may be used for various purposes.
  • the one or more memory devices 108 may store machine-readable and/or processor-executable instructions (e.g., firmware or software) for the one or more processors 106 to execute, such as instructions for determining a likelihood that an entertainment experience (e.g., automated positioning of an animated figure) will trend past a threshold or envelope of acceptable operation (e.g., due to wear, component failure, control degradation or the like) and adjusting operation or control features accordingly.
  • the one or more memory devices 108 may include one or more storage devices (e.g., nonvolatile storage devices) that may include read-only memory (ROM), flash memory, a hard drive, or any other suitable optical, magnetic, or solid-state storage medium, or a combination thereof.
  • the electronic device 102 may also include an electronic display 110 that enables graphical and/or visual output to be displayed to a user.
  • the electronic display 110 may use any suitable display technology, and may include an electroluminescent display (ELD), a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic LED (OLED) display, an active-matrix OLED display, a plasma display panel (PDP), a quantum dot LED (QLED) display, and so on.
  • the electronic device 102 may include a data acquisition system (DAQ) 112 operable to send and receive information to and from a sensor network 120.
  • the electronic device 102 may include a communication interface that enables the controller 104 to communicate with servers and/or other computing resources of the sensor network 120 via a communication network (e.g., a mobile communication network, a WiFi network, local area network (LAN), wide area network (WAN), the Internet, and the like).
  • at least a portion of the information received from the sensor network 120 may be downloaded and stored on the memory or storage devices 108 of the electronic device 102.
  • the sensor network 120 may include ultraviolet sensors 122, infrared sensors 124, vibration sensors 126, imaging sensors 128, audio sensors 130, biometric sensors 132, and so on.
  • any other appropriate sensor, such as an accelerometer, a speed sensor, a gyrometer, a torque sensor, a location sensor, a pressure sensor, a humidity sensor, a light sensor, a voltage sensor, a current sensor, a particle sensor, and so on, may be employed in the system 100.
  • the illustrated sensors in FIG. 1 should be understood to represent any sensor type that can be utilized in an attraction.
  • the sensor network 120 may enable the system 100 to collect an expansive amount of data regarding an amusement park attraction, such that the system 100 may obtain a fine-grain view of the various aspects of and components within the system 100.
  • the infrared sensor 124 may detect if the wheels or motor of a ride vehicle are emitting heat beyond a heat threshold, and if so may trigger a warning (e.g., via the controller 104) to technical operators, disable the ride vehicle, or take any other appropriate action. Combinations of input may also be assessed using algorithms, lookup tables, or artificial intelligence.
  • data from the infrared sensor 124 may be analyzed in combination with the vibration sensors 126 to detect a cause of heat related to misalignment of wheels of a ride vehicle, or the like.
  • the sensor network 120 may enable the system 100 to address quantitative and qualitative issues in an amusement park attraction.
  • quantitative issues may be defined as issues that can be quantified, counted, or measured. For example, a temperature sensor may detect that the temperature in a room is 85 degrees Fahrenheit when the expected temperature is 75 degrees Fahrenheit.
  • qualitative issues may be defined as issues regarding the appearance, sound, or feel of the experience or components within the experience, particularly as they relate to a guest’s enjoyment of the experience.
  • the sensor network 120 may enable the system 100 to address qualitative issues in the case that a lightbulb of a set of lightbulbs burns out during a show or scene, negatively impacting the quality of a user’s experience.
  • the electronic device 102 may cause the brightness or direction of other lightbulbs in the set of lightbulbs to adjust to compensate for the malfunctioning lightbulb.
  • the predetermined profile may be established by utilizing a machine learning engine (e.g., 114), as will be discussed in greater detail below.
  • the sensor network 120 may enable the system 100 to detect qualitative deviations such as a guest being in an area that the guest is not expected to be in. Upon making this determination, the system 100 may alert a technical operator or take other corrective action (e.g., stopping or pausing operation of the experience) or automatically perform such actions.
  • the DAQ 112 may collect web data 134 using software applications such as web crawlers to obtain information regarding anomalies or other issues within an attraction. For example, if a scene experiences an issue that may not be easily detected by a sensor (e.g., an animated character’s wig falls off or the animated character experiences a costuming malfunction), a guest may notice the issue and post about the issue on social media.
  • the DAQ 112 may detect, via the web crawler, the social media post and may trigger an alert.
  • the system 100 may realize improvements in computer-related technologies by increasing the accuracy and efficiency of anomaly detection, predictive analysis, and intelligent maintenance in amusement park anomaly detection and maintenance systems.
  • the sensor network 120 along with other data resources such as the web data 134 may provide a layered system of anomaly detection and maintenance.
  • the sensors in the sensor network 120 may provide redundancy to ensure detection of anomalies that may go unnoticed in a conventional monitoring and maintenance system.
  • a spotlight in an attraction may be equipped with a current or voltage sensor allowing the system 100 to determine if an electrical issue is causing the spotlight to dim or turn off when considered in conjunction with other detected values (e.g., lighting level values in a staging area) provided to the system 100.
  • An imaging sensor 128 may also be fixed upon an area illuminated by the light of the spotlight and the system 100 may be trained (e.g., via the ML engine 114) to detect luminance or brightness variations based on the imaging sensor 128. In this way, a lighting issue with the spotlight may be detected by the system 100 based on the current or voltage sensor, the imaging sensor 128, or combinations thereof.
  • the electronic device 102 may include the machine learning (ML) engine 114. While the ML engine 114 may be implemented outside of the controller 104 as illustrated in FIG. 1, in other embodiments the ML engine 114 may be implemented in other circuitry (e.g., in the controller 104, or in the processor 106).
  • in FIG. 2, a flowchart of a method 200 for detecting an anomaly related to an experience and performing a corrective action is illustrated, according to an embodiment of the present disclosure.
  • the method 200 receives multiple layers of sensor data (e.g., from the sensor network 120) indicative of characteristics of the experience.
  • the sensor data may include quantitative information, such as timing information regarding the activation of lights, audio equipment, and movement of the ride vehicle.
  • the sensor data may also include qualitative information, such as the level of ambient lighting of an attraction or experience. Such information may be acquired from an attraction or an experience, such as that illustrated in FIG. 3.
  • FIG. 3 illustrates an attraction or an experience 300 in which the system 100 may be utilized, according to an embodiment of the present disclosure.
  • the experience may feature a ride vehicle 310 equipped with a vibration sensor 126 and a temperature sensor 304, an animated figure 312, environmental components 314 (e.g., a fan to simulate wind blowing on the guests), audio equipment 306, set lights 302, an audio sensor 130, a temperature sensor 304, imaging sensors 128 (e.g., a camera or other imaging sensors), various DAQs 112 to collect and share data (e.g., over a network) with other sensors and electronic devices 102 (e.g., communication hubs), and so on.
  • the animated figure 312 may have multiple actuators to enable movement of the animated figure 312.
  • the actuators of the animated figure 312 may be equipped with various sensors such as torque sensors, pressure sensors, temperature sensors, gyrometers, motion sensors, speed sensors, vibration sensors, infrared sensors, and so on. Additionally, the guests may wear biometric sensors 132 to permit monitoring and/or collecting of certain biometric data. While the sensors mentioned above are illustrated in the experience 300, it should be noted that any appropriate sensor may be included in the experience 300.
  • the ride vehicle 310 may include any appropriate sensor (e.g., a motion sensor, a speed sensor, a weight sensor, a gyrometer, the temperature sensor 304, the vibration sensor 126, an accelerometer, or any combination thereof).
  • the electronic device 102 may receive data from various components (e.g., the ride vehicle 310, the animated character 312, the set lights 302, the environmental components 314, and so on) and/or may receive data from sensors of the sensor network 120 communicatively coupled to or otherwise monitoring the various components and/or conditions.
  • the controller 104 may receive the data and adjust the actuators of the animated character 312, adjust lighting, adjust climate control, control effects, adjust airflow, control the ride vehicle 310, and the like (including combinations thereof).
  • the controller 104 may, based on the data received regarding the animated character 312, cause the actuators to adjust to make the animated character 312 move in a certain direction, look towards a particular guest or group of guests, or interact with elements of the experience 300.
  • the controller 104 may, based on the data received regarding the ride vehicle 310, cause the ride vehicle 310 to speed up, slow down, stop, change direction, engage a hydraulic system of the ride vehicle 310, and so on.
  • the controller 104 may adjust the set lights 302 (e.g., adjust position, brightness, color, and so on), control the environmental components 314, and so on.
  • the experience 300 may undergo several cycles under normal operating conditions (e.g., as verified by a human technical operator) in order to obtain this data.
  • the cycling may include movement of a ride vehicle experience to determine proper triggering and response, in order to verify not only that the ride vehicle experience is operating per the original creative intent, but also that it is responsive to guest presence and/or precise location as intended.
  • the ML engine 114 may collect and train upon the data gathered through cycling to determine desired operating parameters of the experience. Upon collecting a sufficient amount of training data, the ML engine 114 may begin processing and labelling the data. In the processing and labelling phase, supervised and/or unsupervised machine learning may be performed by the ML engine 114. Based on the results of the processing and labelling phase, the ML engine 114 may determine and encode the parameters of a normally functioning experience.
  • the experience may be cycled repeatedly to enable the ML engine 114 to perform multiple layered processing passes, gradually learning additional aspects and parameters of the observed experience. For example, there may be a processing pass in which lighting color and intensity (e.g., detected via the imaging sensors 128 or features of the set lights 302) is learned and stored across the system 100. By sampling the experience 300 frame-by-frame over known time units, the system 100 may ascertain whether the set lights 302 are functioning properly to specification and are aimed as intended. The system 100 may detect anomalies such as flickering, outages, degradation, or timing and triggering errors. Another processing pass may determine whether the animated figure 312, environmental components 314, and/or other action equipment are triggering properly and moving within expected motion profiles, and may determine where and when motion is expected to appear under normal conditions.
  • the method 200 may, in process block 204, generate profiles of the experience 300 based on the received sensor data.
  • the profiles may include a baseline and a threshold indicating an expected range of the characteristics or aspects of the experience 300.
  • a profile where the equipment in the experience 300 (e.g., the set lights 302, audio equipment 306, environmental components 314, and animated figure 312) is determined to operate according to specification and expectation may be designated as an A profile.
  • a profile where the equipment in the experience 300 is determined to operate outside of specification and expectation may be designated as a B profile.
  • additional profiles with different characteristics or operational aspects (e.g., a profile for warm weather or cold weather) may also be established, and certain profile types (e.g., a C profile, a D profile, and so on) may be designated accordingly.
  • the method 200 may determine whether the experience 300 is operating properly according to specification and expectation. That is, data 207 from a current experience (e.g., essentially real-time data from the experience 300) may be compared with the profiles established as part of process block 204. Through the cycling and processing passes of the experience 300, profile thresholds may be determined outside of which the profile may receive a different designation, but within which a designation may remain even if deviations in the aspects and characteristics of the experience 300 are detected. For example, if during operation of the experience 300 the system 100 detects that the set lights 302 are dimmer than expected due to a technical error, the system 100 may determine this to be a deviation from the A profile, but not a deviation that exceeds the A profile threshold.
  • the method 200 may continue to receive sensor data indicative of characteristics of the experience 300 (e.g., in the process block 202).
  • the profile of the experience 300 may be designated a B profile, C profile, and so on (e.g., beyond a threshold of the A profile and into a separate profile).
  • identification of the data 207 as corresponding to a non-optimal profile may result in mere adjustment to operational aspects of the experience 300, while identification of the data 207 as corresponding to an unacceptable profile (e.g., profile X) may result in a shutdown of substantial operational aspects of the experience 300.
  • a B profile or C profile may be determined to be an improper operation of the experience 300, and thus, in process block 208, a corrective action may be determined and performed.
  • the corrective action may include performing automated maintenance on malfunctioning equipment, such as having the controller 104, based on information received from the ML engine 114, send a command causing the experience 300 to adjust current to a malfunctioning set light 302 outputting less brightness than expected, or having the controller 104 send a command causing the experience 300 to adjust other equipment to account for malfunctioning equipment (e.g., adjusting the brightness of another set light 302 to compensate for the malfunctioning set light 302).
  • the controller 104 may receive data from various components such as the ride vehicle 310 and/or the animated character 312 and adjust the actuators of the animated character 312 and/or control the ride vehicle 310 accordingly. Additionally or alternatively, the controller 104 may adjust the actuators of the animated character 312 and/or control the ride vehicle 310 based on the profile designation of the experience 300. For example, if the method 200 designates the profile of the experience 300 as an A profile, the controller 104 may cause the actuators to adjust to make the animated character 312 move in a certain direction, look towards a particular guest or group of guests, or interact with elements of the experience 300, and the controller may cause the ride vehicle 310 to accelerate at a particular location of the experience 300.
  • the controller 104 may prevent the actuators from adjusting to move the animated character 312 and may prevent the ride vehicle 310 from accelerating — or may stop the ride vehicle 310 — at the particular location of the experience 300.
  • the corrective action may also include sending an alert (e.g., to an alert panel 308) informing the technical operators of the malfunction. If the malfunction requires urgent action, the ML engine 114 may send an urgent alert to the technical operators and cause the experience 300 to stop or pause operation.
  • the ML engine 114 may learn to identify known and new anomalies as well as anticipate anomalies that may occur in the future. For example, if the ML engine 114 determines that the set lights 302 gradually experience a reduction in brightness prior to a bulb burning out, the ML engine 114 may trigger an alert to the technical operators if the ML engine 114 detects that a set light 302 has gradually experienced a reduction in brightness, and may include in the alert an estimation of how long it may take for the bulb of the set light 302 to burn out. In this way, the system 100 may enable predictive maintenance within the experience 300 and may permit the technical operators to take corrective action prior to a malfunction, preserving the quality of the experience 300.
  • the system 100 may also be able to apply the machine learning performed on the experience 300 to a different experience.
  • the learned data from the experience 300 may be extrapolated and retargeted to another sufficiently similar experience by implementing a deep learning process known as transfer learning.
  • Transfer learning allows for a sufficiently trained ML model to be repurposed and reused to make observations or predictions about a different but related problem set.
  • the ML engine 114 may be able to identify and anticipate issues that may be experienced by set lights in a different experience (see the transfer learning sketch following this list).
  • the system 100 and the experience 300 may utilize multiple localized sensor networks that are not linked together via a single network.
  • FIG. 4 is a block diagram illustrating a system 400 of unlinked localized sensor networks, according to an embodiment of the present disclosure.
  • Sensors 404A, 404B, 404C, and 404D (collectively referred to as the sensors 404) may be various sensors in the experience 300. These sensors may communicate data to a communication hub 402 (e.g., the electronic device 102).
  • the communication hub 402 may receive sensor data (e.g., via the controller 104) and may analyze the sensor data from the sensors 404 (e.g., via the ML engine 114) and send commands (e.g., via the controller 104) to the sensors 404 or to other components within the experience 300. Additionally, the sensors 404 may communicate amongst themselves.
  • the communication hub 402 and the sensors 404 may constitute a sensor network 406.
  • a communication hub 410 may receive signals from a device 412 (e.g., an electronic device (e.g., 102), and/or a controller (e.g., 104) communicatively coupled to the animated figure 312 or the ride vehicle 310) communicatively coupled to sensors 414A and 414B (collectively referred to as the sensors 414).
  • the communication hub 410, the device 412, and the sensors 414 within the device 412 may constitute a sensor network 416.
  • the communication hub 410 may communicate with the device 412 and the sensors 414, but may not communicate with the communication hub 402; thus, the sensor network 416 may not communicate with the sensor network 406.
  • FIG. 5 is a block diagram illustrating a network 500 of interconnected localized sensors and devices, according to an embodiment of the present disclosure.
  • a central communication hub 502 may collect data from and communicate with sensors 504A, 504B, and 504C (collectively referred to as the sensors 504 or network sensors).
  • the sensors 504 may also communicate with each other.
  • a local communication hub 506 may also collect data from the sensors 504 as well as from a device 508 equipped with a sensor 510A, a sensor 510B, and a sensor 510C, collectively referred to as the sensors 510 (or the device sensors 510). In this way, the device 508 and the sensors 510 within the device 508 may communicate with each other and with the sensors 504.
  • the central communication hub 502 may also communicate with the local communication hub 506 via wired or wireless communication (e.g., WiFi, via a cellular network, and so on). While only one local communication hub 506 is shown, it should be understood that there may be any number of local communication hubs in the system 100. Further, a local communication hub may be assigned to an area of the experience 300, particular equipment of the experience 300 (e.g., the animated character 312), or a subsystem of particular equipment (e.g., a set of actuators within the animated character 312).
  • an issue detected in one sensor (e.g., 504B) in the network 500 may be communicated to other sensors (e.g., 504A, 504C, 510) in the network 500 as well as to other equipment (e.g., the device 508) such as controllers in the animated figures 312, the environmental components 314, and so on.
  • upon detecting a rise in temperature, the temperature sensor 304 may communicate the rise in temperature over the network 500, and the ML engine 114 may, via the controller 104, cause one or more fans to activate, adjust a setting of a central air-conditioning unit, or reduce the output of certain heat-producing elements within the experience 300.
  • the illustrated communication hubs 402 and 502 may incorporate artificial intelligence, controls, diagnostics, algorithms, lookup tables and combinations thereof to analyze and control aspects of the experience 300.
  • training data may be augmented with three-dimensional (3D) (e.g., via a 3D modeling engine 116) information about the experience 300 being observed.
  • the image sensor 128 may be retrained from a new angle, shortening or eliminating the time normally required to add, remove, move or remount new image sensors.
  • the system 100 may reduce or eliminate the time and/or processing power normally required in adding, removing, moving, and/or reconfiguring sensors, devices, and equipment.
  • the system 100 may be able to detect and correct certain positional errors of the equipment or sensors. For example, if the system 100 (e.g., via the ML engine 114) detects unexpected readings from an image sensor 128, one or more other image sensors 128 may observe the position of the problematic image sensor 128, compare the position to an expected position based on the global 3D model, determine that the problematic image sensor 128 is misaligned, and send feedback to the system 100, thereby enabling the system 100 to correct the position of the problematic image sensor 128 (see the position-check sketch following this list).
  • the system 100 can be trained based on 3D modeling and/or operate using 3D modeling as a baseline template for comparison to ongoing data (e.g., essentially real-time data from an attraction or experience).
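The transfer learning retargeting referenced above might look like the following minimal sketch, in which a profile learned on one experience seeds the profile of a sufficiently similar experience and is refined with a small amount of local data; the blending scheme, names, and numbers are illustrative assumptions rather than details from the disclosure:

```python
# Minimal sketch of retargeting a learned profile to a similar experience.
# The blending scheme, names, and numbers are illustrative assumptions.
from statistics import mean

def fine_tune(source_profile: dict[str, float],
              local_samples: dict[str, list[float]],
              blend: float = 0.5) -> dict[str, float]:
    """Blend a baseline transferred from a source experience with the means
    of a small number of locally observed cycles."""
    tuned = dict(source_profile)  # start from the source experience's profile
    for key, samples in local_samples.items():
        tuned[key] = (1 - blend) * source_profile[key] + blend * mean(samples)
    return tuned

# Profile learned over many cycles of experience A, retargeted to a similar
# experience B using only a handful of B's own cycles.
profile_a = {"luminance": 850.0, "ambient_temp_c": 24.0}
profile_b = fine_tune(profile_a, {"luminance": [900.0, 910.0, 905.0]})
print(profile_b)  # {'luminance': 877.5, 'ambient_temp_c': 24.0}
```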
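Similarly, the 3D-model position check referenced above might look like the following minimal sketch; the poses, tolerance, and helper names are illustrative assumptions:

```python
# Minimal sketch of checking an image sensor's pose against the global 3D
# model. The poses, tolerance, and helper names are illustrative assumptions.
import math

def correction_offset(expected: tuple[float, float, float],
                      observed: tuple[float, float, float],
                      tolerance: float = 0.05):
    """Return the (dx, dy, dz) needed to realign the sensor, or None if the
    observed pose is within tolerance of the 3D model."""
    if math.dist(expected, observed) <= tolerance:
        return None
    return tuple(round(e - o, 3) for e, o in zip(expected, observed))

expected_pose = (1.20, 3.40, 2.75)  # position from the global 3D model (meters)
observed_pose = (1.32, 3.38, 2.60)  # position estimated by peer image sensors
print(correction_offset(expected_pose, observed_pose))  # (-0.12, 0.02, 0.15)
```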

Landscapes

  • Testing And Monitoring For Control Systems (AREA)

Abstract

The present application is directed to systems and methods of performing anomaly detection, predictive maintenance, and anomaly correction in an amusement park experience. Specifically, the present application is directed to a method for monitoring an amusement park experience. The method may include receiving, via a sensor network, multiple layers of first sensor data indicative of characteristics of the experience. The method may also include generating a profile of the experience based on the first sensor data, wherein the profile includes a baseline and a threshold indicating an acceptable range of the characteristics. The method may also include receiving second sensor data and third sensor data via the sensor network. The method may further include determining, in response to identifying characteristics of the second sensor data that deviate from the baseline but do not exceed the threshold, that the experience is operating properly; and may include performing a particular corrective action in response to identifying characteristics of the third sensor data that deviate from the baseline and exceed the threshold.

Description

SYSTEMS AND METHODS FOR VISUAL SCENE MONITORING
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to and the benefit of U.S. Provisional Application No. 63/296,733, entitled “SYSTEMS AND METHODS FOR VISUAL SCENE MONITORING,” filed January 5, 2022, which is hereby incorporated by reference in its entirety for all purposes.
BACKGROUND
[0002] This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
[0003] Amusement parks and other entertainment venues contain experiences (e.g., ride vehicle experiences, animated figures (e.g., robotic characters), scenes, attractions and so on) to entertain park guests. As the experiences become more technologically advanced and complex, the components within the experience may benefit from more robust monitoring and maintenance to ensure the safety of the park guests and optimal or improved operation of the experience. As such, it is now recognized that it is advantageous to utilize a variety of sensors, machine learning, and multi-dimensional modeling techniques to enable faster anomaly detection, automated alerting, predictive maintenance, and dynamic correction with regard to the experience.
SUMMARY
[0004] Certain embodiments commensurate in scope with the originally claimed subject matter are summarized below. These embodiments are not intended to limit the scope of the disclosure, but rather these embodiments are intended only to provide a brief summary of certain disclosed embodiments. Indeed, the present disclosure may encompass a variety of forms that may be similar to or different from the embodiments set forth below.
[0005] In an embodiment, a method of monitoring an amusement park experience may include receiving, via multiple sensors, multiple layers of first sensor data indicative of characteristics of the experience. The method may generate a profile of the experience based on the first sensor data, wherein the profile includes a baseline and a threshold indicating an acceptable range of the characteristics. The method may receive second sensor data and third sensor data via the multiple sensors. The method may determine, in response to identifying characteristics of the second sensor data that deviate from the baseline but do not exceed the threshold, that the experience is operating properly. The method may also, in response to identifying characteristics of the third sensor data that deviate from the baseline and exceed the threshold, perform a corrective action.
[0006] In an embodiment, a system may include a network and one or more communication hubs communicatively coupled with one another via the network. The system may also include a ride vehicle including one or more ride vehicle sensors, where the one or more ride vehicle sensors are communicatively coupled to at least one of the one or more communication hubs. Further, the system may include an animated figure, the animated figure including an actuator and an actuator sensor, wherein the actuator sensor is communicatively coupled to at least one of the one or more communication hubs. The system may also include a controller that may receive data from the one or more ride vehicle sensors and data from the actuator sensor via the one or more communication hubs. The controller may adjust the actuator and control the ride vehicle based on a discrepancy between a baseline relative value and a currently detected relative value of the data from the one or more ride vehicle sensors and the data from the actuator sensor.
[0007] In an embodiment, one or more tangible, non-transitory, computer-readable media is provided in accordance with the present disclosure. The one or more tangible, non-transitory, computer-readable media comprises instructions that, when executed by at least one processor, cause the at least one processor to receive, via multiple sensors, first sensor data indicative of characteristics of an amusement park experience. The processor may generate a profile of the amusement park experience based on the first sensor data, and the profile may include a baseline and a threshold indicating an acceptable range of the characteristics. The processor may receive second sensor data and third sensor data via the multiple sensors and determine, in response to identifying characteristics of the second sensor data that deviate from the baseline but do not exceed the threshold, that the experience is operating properly. However, in response to identifying characteristics of the third sensor data that deviate from the baseline and exceed the threshold, the processor may perform a corrective action.
BRIEF DESCRIPTION OF DRAWINGS
[0008] These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0009] FIG. 1 is a block diagram of an anomaly detection and monitoring system, in accordance with an aspect of the present disclosure;
[0010] FIG. 2 is a flowchart of a method for detecting an anomaly related to an experience and performing a corrective action utilizing the system of FIG. 1, in accordance with an aspect of the present disclosure;
[0011] FIG. 3 illustrates an experience in which the system of FIG. 1 may be utilized, in accordance with an aspect of the present disclosure;
[0012] FIG. 4 is a block diagram illustrating a system of unlinked localized sensor networks, in accordance with an aspect of the present disclosure; and
[0013] FIG. 5 is a block diagram illustrating a network of interconnected localized sensors and devices, in accordance with an aspect of the present disclosure.
DETAILED DESCRIPTION
[0014] One or more specific embodiments of the present disclosure will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers’ specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
[0015] When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
[0016] Theme parks and other such entertainment venues are becoming increasingly popular. Further, immersive experiences within such entertainment venues are in high demand. In order to provide new and exciting experiences, attractions, such as ride experiences and scenes (e.g., visual shows including live action, animated figures, computer-generated imagery, and so on) have become increasingly complex, involving integration of lighting, sound, movement, interactive elements, visual media, and so on. Accordingly, the components within the attractions may benefit from more exhaustive and holistic monitoring and maintenance. Conventional monitoring and maintenance systems include disparate clusters or networks of sensors and a manual regimen of prescribed inspections by technicians. Such conventional systems may fail to detect certain maintenance and/or user experience issues due to human error, or may fail to detect maintenance and/or user experience issues due to a lack of automated monitoring (e.g., via sensors, cameras, and so on). As may be appreciated, a system utilizing a multifaceted, layered, globally-connected network of sensors, multidimensional models, and machine learning engines may effectuate a more robust, accurate, responsive, and intelligent monitoring and maintenance system for amusement park attractions.
[0017] In view of the foregoing, present embodiments are generally directed to a method and system for automated monitoring, anomaly detection, correction, mitigation, and/or predictive maintenance in theme park experiences. Specifically, present embodiments are directed to collecting, via a network of communicatively coupled sensors, data on a variety of aspects of and components within an experience, using machine learning to detect and/or predict quantitative and qualitative issues, and dynamically addressing the anomaly and/or triggering an alert regarding the anomaly. Visual aspects, control features, audio, aspects of the animated figures, and monitoring and maintenance components and/or aspects are prepared in accordance with present techniques to provide multifaceted and layered monitoring and maintenance of the experience. Multifaceted and layered monitoring and maintenance provides robust and intelligent monitoring of the experience. Layering is achieved at least in part by utilizing a number of data sources or streams (e.g., audiovisual data, infrared data, social media data and so on) to detect, predict, and/or correct for subtle anomalies that may nevertheless impact the quality of a guest’s enjoyment of the experience. Due to the complex nature of interactions between the guest and the experience, in some embodiments, control instructions from an artificial intelligence (AI) or machine learning engine may coordinate sensor data, equipment performance and the like to detect and address (e.g., correct or mitigate the effects of) an anomaly. Further, numerous and variable iterative routines are included to gradually improve aspects of the monitoring and maintenance system.
[0018] Procedures, in accordance with the present disclosure (applicable to procedures illustrated in FIGs. 1-5), for monitoring an amusement park experience include various different steps and procedural aspects. Some of these steps or procedures may be performed in parallel or in varying different orders. Some steps may be processor-based operations and may involve controlled equipment (e.g., actuators). Further, some procedures may be iteratively performed to achieve a desired outcome. Accordingly, while various different procedural steps may be discussed in a particular order herein, the procedural steps may not necessarily be performed in the order of introduction, as set forth by the present disclosure. While some specific steps of an operation may necessarily occur before other specific steps (e.g., as dictated by logic), the listing of certain orders of operation is primarily provided to facilitate discussion. For example, indicating that a first step or a beginning step includes a particular operation is not intended to limit the scope of the disclosure to such initial steps. Rather, it should be understood that additional steps may be performed, certain steps may be omitted, referenced steps may be performed in an alternative order or in parallel where appropriate, and so forth. However, disclosed orders of operation may be limiting when indicated as such.
[0019] FIG. 1 is a block diagram of an anomaly detection and monitoring system (e.g., the system) 100, in accordance with the present disclosure. As illustrated, the system 100 includes an electronic device 102, which may be in the form of any suitable electronic computing device, such as a computer, laptop, personal computer, server, mobile device, smartphone, tablet, wearable device, and so on. The electronic device 102 may include a controller 104 that includes one or more processors 106 and one or more memory and/or storage devices 108. The one or more processors 106 (e.g., microprocessors) may execute software programs and/or instructions (e.g., stored in the memory 108) to facilitate determining likelihoods and/or occurrences of changes in amusement experiences (e.g., relative to a baseline experience) and adjusting control accordingly. Moreover, the one or more processors 106 may include multiple microprocessors, one or more “general-purpose” microprocessors, one or more special-purpose microprocessors, and/or one or more application specific integrated circuits (ASICs), or some combination thereof. For example, the one or more processors 106 may include one or more reduced instruction set computer (RISC) processors.
[0020] The one or more memory devices 108 may store information such as control software, lookup tables, configuration data, and so on. In some embodiments, the one or more processors 106 and/or the one or more memory devices 108 may be external to the controller 104 and/or the electronic device 102. The one or more memory devices 108 may include a tangible, non-transitory, machine-readable medium, such as a volatile memory (e.g., a random access memory (RAM)) and/or a nonvolatile memory (e.g., a read-only memory (ROM)). The one or more memory devices 108 may store a variety of information and may be used for various purposes. For example, the one or more memory devices 108 may store machine-readable and/or processor-executable instructions (e.g., firmware or software) for the one or more processors 106 to execute, such as instructions for determining a likelihood that an entertainment experience (e.g., automated positioning of an animated figure) will trend past a threshold or envelope of acceptable operation (e.g., due to wear, component failure, control degradation or the like) and adjusting operation or control features accordingly. The one or more memory devices 108 may include one or more storage devices (e.g., nonvolatile storage devices) that may include read-only memory (ROM), flash memory, a hard drive, or any other suitable optical, magnetic, or solid-state storage medium, or a combination thereof.
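By way of a minimal sketch, the trend-past-threshold determination described above might be encoded as a simple linear extrapolation over recent readings; the function name, sample values, and limit below are illustrative assumptions, not details from the disclosure:

```python
# Minimal sketch: estimate when a monitored value will drift past an
# acceptable-operation envelope by fitting a linear trend to recent readings.
from statistics import mean

def cycles_until_threshold(samples: list[float], lower_limit: float) -> float | None:
    """Fit a simple linear trend to per-cycle readings (e.g., set-light
    brightness) and extrapolate to the cycle where the lower limit is
    crossed. Returns None if the trend is flat or improving."""
    n = len(samples)
    if n < 2:
        return None
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / sum(
        (x - x_bar) ** 2 for x in xs
    )
    if slope >= 0:
        return None  # not degrading
    # cycles from the latest sample until the trend line reaches the limit
    return (lower_limit - samples[-1]) / slope

# Example: brightness (lumens) over the last six ride cycles, limit at 700.
print(cycles_until_threshold([980, 965, 955, 930, 910, 895], 700.0))
```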
[0021] The electronic device 102 may also include an electronic display 110 that enables graphical and/or visual output to be displayed to a user. The electronic display 110 may use any suitable display technology, and may include an electroluminescent display (ELD), a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic LED (OLED) display, an active-matrix OLED display, a plasma display panel (PDP), a quantum dot LED (QLED) display, and so on.
[0022] As illustrated, the electronic device 102 may include a data acquisition system (DAQ) 112 operable to send and receive information to and from a sensor network 120. For example, the electronic device 102 may include a communication interface that enables the controller 104 to communicate with servers and/or other computing resources of the sensor network 120 via a communication network (e.g., a mobile communication network, a WiFi network, a local area network (LAN), a wide area network (WAN), the Internet, and the like). In some cases, at least a portion of the information received from the sensor network 120 may be downloaded and stored on the memory or storage devices 108 of the electronic device 102. The sensor network 120 may include ultraviolet sensors 122, infrared sensors 124, vibration sensors 126, imaging sensors 128, audio sensors 130, biometric sensors 132, and so on. While the preceding list of sensors is illustrated in FIG. 1, it should be noted that any other appropriate sensor, such as an accelerometer, a speed sensor, a gyrometer, a torque sensor, a location sensor, a pressure sensor, a humidity sensor, a light sensor, a voltage sensor, a current sensor, a particle sensor, and so on may be employed in the system 100. Further, the illustrated sensors in FIG. 1 should be understood to represent any sensor type that can be utilized in an attraction.
[0023] The sensor network 120 may enable the system 100 to collect an expansive amount of data regarding an amusement park attraction, such that the system 100 may obtain a fine-grain view of the various aspects of and components within the system 100. For example, the infrared sensor 124 may detect if the wheels or motor of a ride vehicle are emitting heat beyond a heat threshold, and if so may trigger a warning (e.g., via the controller 104) to technical operators, disable the ride vehicle, or take any other appropriate action. Combinations of input may also be assessed using algorithms, lookup tables, or artificial intelligence. For example, data from the infrared sensor 124 may be analyzed in combination with the vibration sensors 126 to detect a cause of heat related to misalignment of wheels of a ride vehicle, or the like. The sensor network 120 may enable the system 100 to address quantitative and qualitative issues in an amusement park attraction. As used herein, quantitative issues may be defined as issues that can be quantified, counted, or measured; for example, a temperature sensor may detect that the temperature in a room is 85 degrees Fahrenheit when the expected temperature is 75 degrees Fahrenheit. In contrast, qualitative issues may be defined as issues regarding the appearance, sound, or feel of the experience or components within the experience, particularly as they relate to a guest’s enjoyment of the experience.
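A minimal sketch of the combined-input assessment described in this paragraph, using a lookup table keyed on which readings are out of range; the thresholds, sensor values, and cause descriptions are illustrative assumptions:

```python
# Minimal sketch of assessing combined sensor inputs against thresholds.
# Limits, readings, and the cause table are illustrative assumptions.
HEAT_LIMIT_C = 90.0
VIBRATION_LIMIT_G = 2.5

# Lookup table pairing out-of-range combinations with a likely cause.
CAUSES = {
    (True, True): "possible wheel misalignment (heat plus vibration)",
    (True, False): "possible bearing or motor issue (heat only)",
    (False, True): "possible loose component (vibration only)",
}

def assess_ride_vehicle(wheel_temp_c: float, vibration_g: float) -> str | None:
    """Return a warning for the technical operators, or None if nominal."""
    hot = wheel_temp_c > HEAT_LIMIT_C
    shaky = vibration_g > VIBRATION_LIMIT_G
    if not (hot or shaky):
        return None
    return (f"WARNING: {CAUSES[(hot, shaky)]} "
            f"(temp={wheel_temp_c:.1f} C, vibration={vibration_g:.2f} g)")

print(assess_ride_vehicle(104.2, 3.1))  # both readings out of range
```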
[0024] For example, the sensor network 120 may enable the system 100 to address qualitative issues in the case that a lightbulb of a set of lightbulbs burns out during a show or scene, negatively impacting the quality of a user’s experience. If a camera or other imaging sensor (e.g., the imaging sensor 128) detects a difference, such as a deviation from a predetermined profile, in the lighting of a scene, the electronic device 102 may cause the brightness or direction of other lightbulbs in the set of lightbulbs to adjust to compensate for the malfunctioning lightbulb. The predetermined profile may be established by utilizing a machine learning engine (e.g., 114), as will be discussed in greater detail below. Further, the sensor network 120 may enable the system 100 to detect qualitative deviations such as a guest being in an area that the guest is not expected to be in. Upon making this determination, the system 100 may alert a technical operator or take other corrective action (e.g., stopping or pausing operation of the experience), or may automatically perform such actions.
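A minimal sketch of the compensation behavior described above, assuming a learned luminance value from the predetermined profile; the profile value, tolerance, and the set_brightness stub are illustrative assumptions:

```python
# Minimal sketch: compensate for a malfunctioning bulb by raising the output
# of the remaining bulbs when measured luminance deviates from the profile.
PROFILE_LUMINANCE = 850.0   # expected scene luminance from the learned profile
TOLERANCE = 0.10            # 10% deviation allowed before compensating

def set_brightness(bulb_id: int, level: float) -> None:
    # Stand-in for a command the controller would send to a set light.
    print(f"bulb {bulb_id}: brightness -> {level:.0%}")

def compensate(measured_luminance: float, working_bulbs: list[int],
               current_level: float = 0.80) -> None:
    """Boost the working bulbs in proportion to the luminance shortfall."""
    shortfall = (PROFILE_LUMINANCE - measured_luminance) / PROFILE_LUMINANCE
    if shortfall <= TOLERANCE:
        return  # within the acceptable range of the profile
    boost = min(1.0, current_level * (1.0 + shortfall))
    for bulb in working_bulbs:
        set_brightness(bulb, boost)

compensate(measured_luminance=640.0, working_bulbs=[2, 3, 4])
```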
[0025] In addition to the sensor network 120, the DAQ 112 may collect web data 134 using software applications such as web crawlers to obtain information regarding anomalies or other issues within an attraction. For example, if a scene experiences an issue that may not be easily detected by a sensor (e.g., an animated character’s wig falls off or the animated character experiences a costuming malfunction), a guest may notice the issue and post about the issue on social media. The DAQ 112 may detect, via the web crawler, the social media post and may trigger an alert.
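A minimal sketch of the web-data layer described above; the disclosure does not specify a particular crawler or service, so the posts, keywords, and alert routine here are illustrative assumptions:

```python
# Minimal sketch: scan collected social media posts for phrases that suggest
# an issue a sensor might miss. Posts and keywords are illustrative.
ISSUE_KEYWORDS = ("broken", "wig fell", "stopped working", "costume", "stuck")

def scan_posts(posts: list[str]) -> list[str]:
    """Return posts that mention a possible issue with the experience."""
    return [p for p in posts if any(k in p.lower() for k in ISSUE_KEYWORDS)]

collected = [
    "Loved the new ride, the animatronic dragon is amazing!",
    "Uh oh, the wizard's wig fell off mid-show",
]
for hit in scan_posts(collected):
    print(f"ALERT for technical operators: {hit}")
```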
[0026] Through utilization of one or more centralized processing servers (e.g., the electronic device 102), machine learning (ML) algorithms (e.g., executed via the ML engine 114), and anomaly detection algorithms, the system 100 may realize improvements in computer-related technologies by increasing the accuracy and efficiency of anomaly detection, predictive analysis, and intelligent maintenance in amusement park monitoring and maintenance systems. The sensor network 120, along with other data resources such as the web data 134, may provide a layered system of anomaly detection and maintenance. The sensors in the sensor network 120 may provide redundancy to ensure detection of anomalies that may go unnoticed in a conventional monitoring and maintenance system. For example, a spotlight in an attraction may be equipped with a current or voltage sensor. When the sensor output is considered in conjunction with other detected values (e.g., lighting level values in a staging area) provided to the system 100, the system 100 may determine whether an electrical issue is causing the spotlight to dim or turn off. An imaging sensor 128 may also be fixed upon an area illuminated by the spotlight, and the system 100 may be trained (e.g., via the ML engine 114) to detect luminance or brightness variations based on the imaging sensor 128. In this way, a lighting issue with the spotlight may be detected by the system 100 based on the current or voltage sensor, the imaging sensor 128, or combinations thereof.
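The layered redundancy described for the spotlight could be sketched as a simple cross-check between the electrical and optical readings; the nominal values and diagnostic messages below are assumed for illustration only:

```python
# Nominal values are assumptions; the disclosure does not specify them.
NOMINAL_VOLTS = 120.0
NOMINAL_LUMINANCE = 0.85  # normalized camera reading of the lit area

def diagnose_spotlight(volts: float, luminance: float) -> str:
    """Cross-check an electrical reading against an optical one."""
    volts_low = volts < 0.9 * NOMINAL_VOLTS
    dim = luminance < 0.8 * NOMINAL_LUMINANCE
    if volts_low and dim:
        return "electrical fault: supply issue is dimming the spotlight"
    if dim:
        return "optical fault: lamp degradation or mis-aim (voltage normal)"
    if volts_low:
        return "sensor disagreement: check voltage sensor calibration"
    return "ok"

print(diagnose_spotlight(volts=101.0, luminance=0.55))
```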
[0027] In some embodiments, the electronic device 102 may include the machine learning (ML) engine 114. While the ML engine 114 may be implemented outside of the controller 104 as illustrated in FIG. 1, in other embodiments the ML engine 114 may be implemented in other circuitry (e.g., in the controller 104 or in the processor 106). Turning to FIG. 2, a flowchart of a method 200 for detecting an anomaly related to an experience and performing a corrective action is illustrated, according to an embodiment of the present disclosure. In process block 202, the method 200 receives multiple layers of sensor data (e.g., from the sensor network 120) indicative of characteristics of the experience. The sensor data may include quantitative information, such as timing information regarding the activation of lights, audio equipment, and movement of the ride vehicle. The sensor data may also include qualitative information, such as the level of ambient lighting of an attraction or experience. Such information may be acquired from an attraction or an experience, such as that illustrated in FIG. 3.

[0028] FIG. 3 illustrates an attraction or an experience 300 in which the system 100 may be utilized, according to an embodiment of the present disclosure. The experience may feature a ride vehicle 310 equipped with a vibration sensor 126 and a temperature sensor 304, an animated figure 312, environmental components 314 (e.g., a fan to simulate wind blowing on the guests), audio equipment 306, set lights 302, an audio sensor 130, a temperature sensor 304, imaging sensors 128 (e.g., a camera or other imaging sensors), various DAQs 112 to collect and share data (e.g., over a network) with other sensors and electronic devices 102 (e.g., communication hubs), and so on. The animated figure 312 may have multiple actuators to enable movement of the animated figure 312. The actuators of the animated figure 312 may be equipped with various sensors such as torque sensors, pressure sensors, temperature sensors, gyrometers, motion sensors, speed sensors, vibration sensors, infrared sensors, and so on. Additionally, the guests may wear biometric sensors 132 to permit monitoring and/or collecting of certain biometric data. While the sensors mentioned above are illustrated in the experience 300, it should be noted that any appropriate sensor may be included in the experience 300. For instance, the ride vehicle 310 may include any appropriate sensor (e.g., a motion sensor, a speed sensor, a weight sensor, a gyrometer, the temperature sensor 304, the vibration sensor 126, an accelerometer, or any combination thereof).
[0029] The electronic device 102, in particular the controller 104 of the electronic device 102, may receive data from various components (e.g., the ride vehicle 310, the animated character 312, the set lights 302, the environmental components 314, and so on) and/or may receive data from sensors of the sensor network 120 communicatively coupled to or otherwise monitoring the various components and/or conditions. The controller 104 may receive the data and adjust the actuators of the animated character 312, adjust lighting, adjust climate control, control effects, adjust airflow, control the ride vehicle 310, and the like (including combinations thereof). For example, the controller 104 may, based on the data received regarding the animated character 312, cause the actuators to adjust to make the animated character 312 move in a certain direction, look towards a particular guest or group of guests, or interact with elements of the experience 300. The controller 104 may, based on the data received regarding the ride vehicle 310, cause the ride vehicle 310 to speed up, slow down, stop, change direction, engage a hydraulic system of the ride vehicle 310, and so on. The controller 104 may adjust the set lights 302 (e.g., adjust position, brightness, color, and so on), control the environmental components 314, and so on.
[0030] In the process block 202 of the method 200, the experience 300 may undergo several cycles under normal operating conditions (e.g., as verified by a human technical operator) in order to obtain the sensor data. For example, the cycling may include movement of a ride vehicle experience to determine proper triggering and response, verifying not only that the ride vehicle experience is operating per the original creative intent, but also that the ride vehicle experience is responsive to guest presence and/or precise location as intended. In this phase, the ML engine 114 may collect and train on the data gathered through cycling to determine desired operating parameters of the experience. Upon collecting a sufficient amount of training data, the ML engine 114 may begin processing and labelling the data. In the processing and labelling phase, supervised and/or unsupervised machine learning may be performed by the ML engine 114. Based on the results of the processing and labelling phase, the ML engine 114 may determine and encode the parameters of a normally functioning experience.
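As one hedged interpretation of the cycling-and-training phase, per-channel statistics gathered over repeated cycles might seed a simple baseline; the channel names and readings here are invented for illustration:

```python
import statistics

# Each cycle yields one reading per monitored channel (values invented).
cycles = [
    {"set_light_lux": 810, "vehicle_speed_mps": 4.0, "audio_db": 72},
    {"set_light_lux": 805, "vehicle_speed_mps": 4.1, "audio_db": 71},
    {"set_light_lux": 815, "vehicle_speed_mps": 3.9, "audio_db": 73},
]

baseline = {}
for channel in cycles[0]:
    values = [cycle[channel] for cycle in cycles]
    baseline[channel] = {
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values),  # spread observed across cycles
    }
print(baseline)  # learned "normal" parameters per channel
```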
[0031] The experience may be cycled repeatedly to enable the ML engine 114 to perform multiple layered processing passes, gradually learning additional aspects and parameters of the observed experience. For example, there may be a processing pass in which lighting color and intensity (e.g., detected via the imaging sensors 128 or features of the set lights 302) are learned and stored across the system 100. By sampling the experience 300 frame-by-frame over known time units, the system 100 may ascertain whether the set lights 302 are functioning to specification and are aimed as intended. The system 100 may detect anomalies such as flickering, outages, degradation, or timing and triggering errors. Another processing pass may determine whether the animated figure 312, the environmental components 314, and/or other action equipment are triggering properly and moving within expected motion profiles, and may learn where and when motion is expected to appear under normal conditions.
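A frame-by-frame lighting pass could reduce to something as simple as flagging abrupt luminance jumps; the trace and jump threshold below are assumptions, not values from the disclosure:

```python
def find_flicker(luminance_trace: list[float], max_jump: float = 0.15) -> list[int]:
    """Return frame indices where luminance changes more than max_jump
    between consecutive frames, a crude proxy for flicker or outages."""
    return [
        i for i in range(1, len(luminance_trace))
        if abs(luminance_trace[i] - luminance_trace[i - 1]) > max_jump
    ]

trace = [0.80, 0.81, 0.80, 0.35, 0.79, 0.80]  # frame 3 dips sharply
print(find_flicker(trace))  # -> [3, 4]: the dip and the recovery
```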
[0032] Based on the information obtained in process block 202, the method 200 may, in process block 204, generate profiles of the experience 300 based on the received sensor data. The profiles may include a baseline and a threshold indicating an expected range of the characteristics or aspects of the experience 300. For example, a profile in which the equipment in the experience 300 (e.g., the set lights 302, the audio equipment 306, the environmental components 314, and the animated figure 312) is determined to operate according to specification and expectation may be designated as an A profile. However, a profile in which the equipment in the experience 300 is determined to operate outside of specification and expectation may be designated as a B profile. Additional profiles with different characteristics or operational aspects (e.g., a profile in warm weather or cold weather) may be designated as other profile types (e.g., a C profile, a D profile, and so on) as well.
[0033] In query block 206, the method 200 may determine whether the experience 300 is operating properly according to specification and expectation. That is, data 207 from a current experience (e.g., essentially real-time data from the experience 300) may be compared with the profiles established as part of the process block 204. Through the cycling and processing passes of the experience 300, profile thresholds may be determined, outside of which the profile may receive a different designation, but within which a designation may remain even if deviations in the aspects and characteristics of the experience 300 are detected. For example, if during operation of the experience 300 the system 100 detects that the set lights 302 are dimmer than expected due to a technical error, the system 100 may determine this to be a deviation from the A profile, but not a deviation that exceeds the A profile threshold. Under these conditions, it may be determined in the query block 206 that the experience 300 is operating properly, and thus the method 200 may continue to receive sensor data indicative of characteristics of the experience 300 (e.g., in the process block 202). However, if the animated figure 312 experiences a technical issue that renders the animated figure 312 completely immobile, this may cause the profile of the experience 300 to be designated a B profile, a C profile, and so on (e.g., beyond a threshold of the A profile and into a separate profile). Further, identification of the data 207 as corresponding to a non-optimal profile (e.g., a B profile) may result in mere adjustment to operational aspects of the experience 300, while identification of the data 207 as corresponding to an unacceptable profile (e.g., an X profile) may result in a shutdown of substantial operational aspects of the experience 300.
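The query-block logic might be sketched as follows, keeping the A/B/X naming used above while treating the band widths and channel values as assumptions:

```python
def designate_profile(value: float, baseline: float,
                      a_band: float, x_band: float) -> str:
    """Map a deviation from baseline onto a profile designation."""
    deviation = abs(value - baseline)
    if deviation <= a_band:
        return "A"   # operating properly, even if not exactly at baseline
    if deviation <= x_band:
        return "B"   # out of spec: adjust operational aspects
    return "X"       # unacceptable: shut down substantial operations

# Dim set light: deviates from baseline but stays within the A band.
print(designate_profile(value=720, baseline=810, a_band=100, x_band=300))  # A
# Immobile figure scored on a motion channel: far outside every band.
print(designate_profile(value=0, baseline=810, a_band=100, x_band=300))    # X
```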
[0034] Depending on the parameters and boundaries designated by a user of the system 100, a B profile or a C profile may be determined to represent improper operation of the experience 300, and thus, in process block 208, a corrective action may be determined and performed. The corrective action may include performing automated maintenance on malfunctioning equipment, such as having the controller 104, based on information received from the ML engine 114, send a command to adjust the current supplied to a malfunctioning set light 302 that is outputting less brightness than expected, or send a command to adjust other equipment to account for the malfunctioning equipment (e.g., adjusting the brightness of another set light 302 to compensate for the malfunctioning set light 302).
[0035] As previously discussed, the controller 104 may receive data from various components such as the ride vehicle 310 and/or the animated character 312 and adjust the actuators of the animated character 312 and/or control the ride vehicle 310 accordingly. Additionally or alternatively, the controller 104 may adjust the actuators of the animated character 312 and/or control the ride vehicle 310 based on the profile designation of the experience 300. For example, if the method 200 designates the profile of the experience 300 as an A profile, the controller 104 may cause the actuators to adjust to make the animated character 312 move in a certain direction, look towards a particular guest or group of guests, or interact with elements of the experience 300, and the controller 104 may cause the ride vehicle 310 to accelerate at a particular location of the experience 300. However, if the method 200 designates the profile of the experience 300 as an X profile, the controller 104 may prevent the actuators from adjusting to move the animated character 312 and may prevent the ride vehicle 310 from accelerating, or may stop the ride vehicle 310, at the particular location of the experience 300.
[0036] The corrective action may also include sending an alert (e.g., to an alert panel 308) informing the technical operators of the malfunction. If the malfunction requires urgent action, the ML engine 114 may send an urgent alert to the technical operators and cause the experience 300 to stop or pause operation.
[0037] Through the method 200, the ML engine 114 may learn to identify known and new anomalies as well as anticipate anomalies that may occur in the future. For example, if the ML engine 114 determines that the set lights 302 gradually experience a reduction in brightness prior to a bulb burning out, then upon detecting that a set light 302 has gradually experienced such a reduction in brightness, the ML engine 114 may trigger an alert to the technical operators and may include in the alert an estimate of how long it may take for the bulb of the set light 302 to burn out. In this way, the system 100 may enable predictive maintenance within the experience 300 and may permit the technical operators to take corrective action prior to a malfunction, preserving the quality of the experience 300.
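One plausible (and deliberately simple) reading of the burn-out prediction is a linear trend fit over recent brightness samples; the daily samples and failure level below are assumptions:

```python
from typing import Optional

def estimate_days_to_failure(brightness_by_day: list[float],
                             failure_level: float = 0.5) -> Optional[float]:
    """Least-squares slope over daily samples; None if not degrading."""
    n = len(brightness_by_day)
    x_mean, y_mean = (n - 1) / 2, sum(brightness_by_day) / n
    denom = sum((x - x_mean) ** 2 for x in range(n))
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in enumerate(brightness_by_day)) / denom
    if slope >= 0:
        return None  # brightness is stable or improving
    return (failure_level - brightness_by_day[-1]) / slope

# Roughly 8.9 days until the assumed failure level at the current trend.
print(estimate_days_to_failure([0.95, 0.92, 0.88, 0.85, 0.81]))
```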
[0038] The system 100, through the ML engine 114, may also be able to apply the machine learning performed on the experience 300 to a different experience. The learned data from the experience 300 may be extrapolated and retargeted to another sufficiently similar experience by implementing a deep learning process known as transfer learning. Transfer learning allows a sufficiently trained ML model to be repurposed and reused to make observations or predictions about a different but related problem set. For example, using the data learned about the operating characteristics of the set lights 302 in the experience 300, the ML engine 114 may be able to identify and anticipate issues that may be experienced by set lights in a different experience.

[0039] In certain embodiments, the system 100 and the experience 300 may utilize multiple localized sensor networks that are not linked together via a single network. FIG. 4 is a block diagram illustrating a system 400 of unlinked localized sensor networks, according to an embodiment of the present disclosure. Sensors 404A, 404B, 404C, and 404D (collectively referred to as the sensors 404) may be various sensors in the experience 300. These sensors may communicate data to a communication hub 402 (e.g., the electronic device 102). The communication hub 402 may receive sensor data (e.g., via the controller 104), may analyze the sensor data from the sensors 404 (e.g., via the ML engine 114), and may send commands (e.g., via the controller 104) to the sensors 404 or to other components within the experience 300. Additionally, the sensors 404 may communicate amongst themselves. The communication hub 402 and the sensors 404 may constitute a sensor network 406. Similarly, a communication hub 410 may receive signals from a device 412 (e.g., an electronic device (e.g., 102) and/or a controller (e.g., 104) communicatively coupled to the animated figure 312 or the ride vehicle 310) communicatively coupled to sensors 414A and 414B (collectively referred to as the sensors 414). The communication hub 410, the device 412, and the sensors 414 within the device 412 may constitute a sensor network 416. The communication hub 410 may communicate with the device 412 and the sensors 414, but may not communicate with the communication hub 402; thus, the sensor network 416 may not communicate with the sensor network 406.
[0040] In contrast to FIG. 4, in certain embodiments the system 100 may utilize an interconnected system of localized sensor networks. As previously discussed, in certain embodiments the system 100 may utilize and report data to other systems within the experience 300. FIG. 5 is a block diagram illustrating a network 500 of interconnected localized sensors and devices, according to an embodiment of the present disclosure. In FIG. 5, a central communication hub 502 may collect data from and communicate with sensors 504A, 504B, and 504C (collectively referred to as the sensors 504 or the network sensors). The sensors 504 may also communicate with each other. Certain sensors (e.g., 504A) may act as smaller communication hubs for other sensors (e.g., 504B), collecting information and sending commands to the other sensors. A local communication hub 506 may also collect data from the sensors 504 as well as from a device 508 equipped with a sensor 510A, a sensor 510B, and a sensor 510C, collectively referred to as the sensors 510 (or the device sensors 510). In this way, the device 508 and the sensors 510 within the device 508 may communicate with each other and with the sensors 504. The central communication hub 502 may also communicate with the local communication hub 506 via wired or wireless communication (e.g., WiFi, a cellular network, and so on). While only one local communication hub 506 is shown, it should be understood that there may be any number of local communication hubs in the system 100. Further, a local communication hub may be assigned to an area of the experience 300, to particular equipment of the experience 300 (e.g., the animated character 312), or to a subsystem of particular equipment (e.g., a set of actuators within the animated character 312).
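To make the hub topology of FIG. 5 concrete, here is a toy sketch in which sensor reports flow from a local hub up to a central hub; the class shape and naming are assumptions, not the disclosed architecture:

```python
class Hub:
    """A toy communication hub that forwards readings to its parent."""
    def __init__(self, name: str, parent: "Hub | None" = None):
        self.name, self.parent, self.readings = name, parent, []

    def report(self, sensor: str, value: float) -> None:
        self.readings.append((sensor, value))
        if self.parent is not None:
            # Forward upward so the central hub sees every local reading.
            self.parent.report(f"{self.name}/{sensor}", value)

central = Hub("central-502")
local = Hub("local-506", parent=central)
local.report("device-508/sensor-510A", 3.4)
print(central.readings)  # [('local-506/device-508/sensor-510A', 3.4)]
```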
[0041] Utilizing the network 500, an issue detected by one sensor (e.g., 504B) in the network 500 may be communicated to other sensors (e.g., 504A, 504C, 510) in the network 500 as well as to other equipment (e.g., the device 508), such as controllers in the animated figures 312, the environmental components 314, and so on. For example, if a rise in temperature above a threshold is detected by the temperature sensor 304, the temperature sensor 304 may communicate the rise in temperature over the network 500, and the ML engine 114 may, via the controller 104, cause one or more fans to activate, adjust a setting of a central air-conditioning unit, or reduce the output of certain heat-producing elements within the experience 300.
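The propagated temperature response could be modeled as a small publish/subscribe loop; the 29-degree threshold, names, and print stand-ins are illustrative assumptions:

```python
from typing import Callable

# A toy event bus standing in for the network 500.
subscribers: dict[str, list[Callable[[float], None]]] = {"temperature": []}

def publish_temperature(celsius: float) -> None:
    """Broadcast a temperature reading to every subscribed component."""
    for handler in subscribers["temperature"]:
        handler(celsius)

def fan_controller(celsius: float) -> None:
    if celsius > 29.0:
        print("fan: activating")  # stand-in for a command from the controller

subscribers["temperature"].append(fan_controller)
publish_temperature(31.5)  # prints "fan: activating"
```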
[0042] With respect to FIGS. 4 and 5, the illustrated communication hubs 402 and 502 may incorporate artificial intelligence, controls, diagnostics, algorithms, lookup tables, and combinations thereof to analyze and control aspects of the experience 300. These control features (e.g., artificial intelligence or learning algorithms) of the communication hubs 402 and 502 may be trained based on actual or virtual data. In certain embodiments, training data may be augmented with three-dimensional (3D) information about the experience 300 being observed (e.g., via a 3D modeling engine 116). Informing the system 100 of the global positions of sensors in the sensor network 120 relative to known computer-aided or temporally changing 3D data may permit repositioning, removing, or adding sensors, devices, and other equipment without retraining from certain observation angles. For example, given the pixel data of an imaging sensor 128 and knowledge of where particular pixels align with the 3D set, the imaging sensor 128 may be retrained from a new angle, shortening or eliminating the time normally required to add, remove, move, or remount imaging sensors. Thus, by using an interconnected network of sensors, devices, and equipment such as the network 500 depicted in FIG. 5, along with the transfer learning processes previously discussed and the 3D modeling engine 116, the system 100 may reduce or eliminate the time and/or processing power normally required to add, remove, move, and/or reconfigure sensors, devices, and equipment.
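Returning to the transfer-learning processes mentioned above, a miniature sketch of the idea is to seed a monitor for a new experience with statistics learned on the first one and then blend in a short local calibration; all numbers and the blend scheme are assumptions:

```python
class LightMonitor:
    """Baseline statistics transferred from a source experience."""
    def __init__(self, mean: float, stdev: float):
        self.mean, self.stdev = mean, stdev

    def fine_tune(self, calibration: list[float], blend: float = 0.5) -> None:
        """Blend the transferred baseline toward a short local calibration run."""
        local_mean = sum(calibration) / len(calibration)
        self.mean = (1 - blend) * self.mean + blend * local_mean

    def is_anomalous(self, value: float) -> bool:
        return abs(value - self.mean) > 3 * self.stdev

# Transferred from the source experience, then calibrated on the target.
monitor = LightMonitor(mean=810.0, stdev=12.0)
monitor.fine_tune([770.0, 775.0, 772.0])
print(monitor.is_anomalous(690.0))  # True: well outside the adapted baseline
```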
[0043] Additionally, using the 3D modeling engine 116, the system 100 may be able to detect and correct certain positional errors of the equipment or sensors. For example, if the system 100 (e.g., via the ML engine 114) detects unexpected readings from an imaging sensor 128, one or more other imaging sensors 128 may observe the position of the problematic imaging sensor 128, compare that position to an expected position based on the global 3D model, determine that the problematic imaging sensor 128 is misaligned, and send feedback to the system 100, thereby enabling the system 100 to correct the position of the problematic imaging sensor 128. Indeed, the system 100 can be trained based on 3D modeling and/or operate using 3D modeling as a baseline template for comparison to ongoing data (e.g., essentially real-time data from an attraction or experience).
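The positional check might reduce to comparing an observed sensor pose with the pose expected from the global 3D model and emitting a correction vector; the tolerance and coordinates below are invented for illustration:

```python
import math
from typing import Optional, Tuple

def position_error(observed: Tuple[float, float, float],
                   expected: Tuple[float, float, float],
                   tolerance_m: float = 0.05) -> Optional[Tuple[float, ...]]:
    """Return the correction vector if the sensor is out of tolerance."""
    delta = tuple(e - o for o, e in zip(observed, expected))
    distance = math.sqrt(sum(d * d for d in delta))
    if distance <= tolerance_m:
        return None  # within tolerance: no correction needed
    return delta     # feedback: move the sensor by this vector

# Camera drifted about 10 cm along z; the check reports the correction.
print(position_error(observed=(1.02, 0.40, 2.10), expected=(1.00, 0.40, 2.00)))
```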
[0044] While only certain features of the disclosure have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the disclosure. It should be appreciated that any of the features illustrated or described with respect to the figures discussed above may be combined in any suitable manner.
[0045] The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for (perform)ing (a function)…” or “step for (perform)ing (a function)…”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).

Claims

CLAIMS:
1. A method for monitoring an amusement park experience, the method comprising:
receiving, via a plurality of sensors, multiple layers of first sensor data indicative of characteristics of the experience;
generating a profile of the experience based on the first sensor data, wherein the profile comprises a baseline and a threshold indicating an acceptable range of the characteristics;
receiving second sensor data and third sensor data via the plurality of sensors;
determining, in response to identifying characteristics of the second sensor data that deviate from the baseline but do not exceed the threshold, that the experience is operating properly; and
in response to identifying characteristics of the third sensor data that deviate from the baseline and exceed the threshold, performing a corrective action.
2. The method of claim 1, wherein the plurality of sensors comprises a camera and/or other optical sensor, an audio sensor, a vibration sensor, an ultraviolet radiation sensor, a tactile sensor, a weight sensor, a motion sensor, a temperature sensor, a humidity sensor, or any combination thereof.
3. The method of claim 1, wherein the deviation comprises a qualitative deviation from the profile.
4. The method of claim 3, wherein the corrective action comprises triggering an alert notification.
5. The method of claim 3, wherein the corrective action comprises dynamically adjusting one or more components associated with the qualitative deviation in order to correct or mitigate a cause of the deviation.
6. The method of claim 1, wherein one or more sensors of the plurality of sensors are disposed on and/or within a ride vehicle of the experience.
7. The method of claim 1, wherein one or more sensors of the plurality of sensors are disposed on and/or within an animated figure associated with the experience.
8. The method of claim 1, comprising using machine learning on the sensor data to detect quantitative and qualitative deviations from the profile of the experience.
9. The method of claim 8, comprising applying the machine learning from the experience to a second experience.
10. The method of claim 8, wherein a machine learning engine is trained on a multi-dimensional model of the experience.
11. The method of claim 10, wherein the multi-dimensional model of the experience identifies a location of one or more sensors of the plurality of sensors within the experience.
12. A system, comprising:
a network;
one or more communication hubs communicatively coupled with one another via the network;
a ride vehicle comprising one or more ride vehicle sensors, wherein the one or more ride vehicle sensors are communicatively coupled to at least one of the one or more communication hubs;
an animated figure comprising an actuator and an actuator sensor, wherein the actuator sensor is communicatively coupled to at least one of the one or more communication hubs; and
a controller configured to receive data from the one or more ride vehicle sensors and data from the actuator sensor via the one or more communication hubs, wherein the controller is further configured to adjust the actuator and control the ride vehicle based on a discrepancy between a baseline relative value and a currently detected relative value of the data from the one or more ride vehicle sensors and the data from the actuator sensor.
13. The system of claim 12, wherein the ride vehicle sensors comprise a vibration sensor, a temperature sensor, a motion sensor, a speed sensor, an accelerometer, or any combination thereof.
14. The system of claim 12, wherein the actuator sensor comprises a vibration sensor, a temperature sensor, a torque sensor, a pressure sensor, or any combination thereof.
15. The system of claim 12, wherein the one or more communication hubs are configured to, via the controller:
collect sensor data from the ride vehicle sensors;
analyze the ride vehicle sensor data; and
based on the analyzed ride vehicle sensor data, transmit a command to the actuator sensor of the animated figure, transmit an alert notification, or both.
16. The system of claim 12, wherein the one or more communication hubs are configured to, via the controller:
collect sensor data from the actuator sensor;
analyze the actuator sensor data; and
based on the analyzed actuator sensor data, transmit a command to the ride vehicle, transmit an alert notification, or both.
17. A tangible, non-transitory, computer-readable medium, comprising computer-readable instructions that, when executed by one or more processors of an electronic device, cause the electronic device to:
receive, via a plurality of sensors, first sensor data indicative of characteristics of an amusement park experience;
generate a profile of the amusement park experience based on the first sensor data, wherein the profile comprises a baseline and a threshold indicating an acceptable range of the characteristics;
receive second sensor data and third sensor data via the plurality of sensors;
determine, in response to identifying characteristics of the second sensor data that deviate from the baseline but do not exceed the threshold, that the experience is operating properly; and
in response to identifying characteristics of the third sensor data that deviate from the baseline and exceed the threshold, perform a corrective action.
18. The tangible, non-transitory, computer-readable medium of claim 17, wherein the corrective action comprises dynamically adjusting one or more components associated with a qualitative deviation in order to correct or mitigate a cause of the deviation.
19. The tangible, non-transitory, computer-readable medium of claim 17, comprising computer-readable instructions that, when executed by one or more processors of an electronic device, cause the electronic device to: use machine learning on the first sensor data, the second sensor data, the third sensor data, or any combination thereof to detect quantitative and qualitative deviations from the profile of the amusement park experience.
20. The tangible, non-transitory, computer-readable medium of claim 19, comprising computer-readable instructions that, when executed by one or more processors of an electronic device, cause the electronic device to: apply the machine learning from the amusement park experience to a second amusement park experience.
PCT/US2023/010131 2022-01-05 2023-01-04 Systems and methods for visual scene monitoring WO2023133151A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263296733P 2022-01-05 2022-01-05
US63/296,733 2022-01-05
US18/090,932 US20230213925A1 (en) 2022-01-05 2022-12-29 Systems and methods for visual scene monitoring
US18/090,932 2022-12-29

Publications (1)

Publication Number Publication Date
WO2023133151A1 true WO2023133151A1 (en) 2023-07-13

Family

ID=85251663

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/010131 WO2023133151A1 (en) 2022-01-05 2023-01-04 Systems and methods for visual scene monitoring

Country Status (1)

Country Link
WO (1) WO2023133151A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015179298A1 (en) * 2014-05-21 2015-11-26 Universal City Studios Llc Virtual attraction controller
WO2020023669A1 (en) * 2018-07-25 2020-01-30 Light Field Lab, Inc. Light field display system based amusement park attraction
US20210041885A1 (en) * 2019-08-09 2021-02-11 Universal City Studios Llc Vehicle guidance via infrared projection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23705452

Country of ref document: EP

Kind code of ref document: A1